Congress funds NNSA’s modernization efforts through various programs and activities within the Weapons Activities appropriations account that generally address the following four areas:

The stockpile area includes weapons refurbishments through life extension programs (LEPs) and other major weapons alterations and modifications; surveillance efforts to evaluate the condition, safety, and reliability of stockpiled weapons; maintenance efforts to perform certain minor weapons alterations or to replace components that have limited lifetimes; and core activities to support these efforts, such as maintaining base capabilities to produce uranium and plutonium components. NNSA allocates funds to activities that directly support the stockpile area through Directed Stockpile Work within the Weapons Activities appropriations account.

The infrastructure area includes government-owned, leased, and permitted physical infrastructure and facilities supporting weapons activities. NNSA’s 2016 nuclear security budget materials include information on two major types of infrastructure activities: (1) Infrastructure and Safety and (2) Readiness in Technical Base and Facilities, which includes two major construction projects. First, the Uranium Processing Facility is a construction project to replace enriched uranium capabilities currently located in the aging Building 9212 at the Y-12 National Security Complex. This project is part of a larger strategy to maintain NNSA’s enriched uranium capability by relocating enriched uranium operations performed in Building 9212 into other existing buildings by 2025 and by constructing a series of smaller buildings. Second, the Chemistry and Metallurgy Research Replacement construction project at Los Alamos National Laboratory, which is part of NNSA’s broader plutonium infrastructure strategy, is composed of subprojects to move analytical chemistry and materials characterization capabilities into two existing facilities. NNSA’s broader plutonium infrastructure strategy also includes the construction of at least two additional modular structures that the Fiscal Year 2016 Stockpile Stewardship and Management Plan reports will achieve operating capacity by 2027. The Uranium Processing Facility and the Chemistry and Metallurgy Research Replacement construction projects are both part of NNSA’s major modernization efforts.

The research, development, testing, and evaluation area is composed of programs that are technically challenging, multiyear, multifunctional efforts to develop and maintain critical science and engineering capabilities. These capabilities enable the annual assessment of the safety and reliability of the stockpile, improve understanding of the physics and materials science associated with nuclear weapons, and support the development of code-based models that replace underground testing.

The other weapons activities area includes budget estimates associated with nuclear weapon security and transportation, as well as legacy contractor pensions, among other things.

The four areas are interconnected. For example, experiments funded under the research, development, testing, and evaluation program area can contribute to the design and production of refurbished weapons, which is funded under the stockpile program area. 
The infrastructure program area offers critical support to both the stockpile and the research, development, testing, and evaluation program areas by providing a suitable environment for their various activities, such as producing weapons components and performing research and experimentation activities. The U.S. nuclear weapons stockpile is composed of seven different weapons types, including air-delivered bombs, ballistic missile warheads, and cruise missile warheads (see table 1). NNSA’s 2016 budget estimates for modernization total $297.6 billion over 25 years, which is a slight increase from the 2015 estimates of $293.4 billion; however, for certain program areas or individual programs, budget estimates changed more significantly. The overall increase was moderated by a shift of two counterterrorism programs to another area of NNSA’s budget. Program areas increased by as much as 13.2 percent or decreased by as much as 18.1 percent. Within the stockpile program area, which experienced the biggest increase, budget estimates for some LEPs and an alteration increased significantly because of changes in production schedules and scope, among other things. According to the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA’s estimates for the next 25 years total $297.6 billion for modernization activities—an increase of approximately $4.2 billion, or 1.4 percent (in nominal, or current dollar, values), from the $293.4 billion NNSA reported in the 2015 plan. These budget estimates, which are for activities in the Weapons Activities area, are provided in the four areas discussed above: stockpile; infrastructure; research, development, testing, and evaluation; and other weapons activities. The overall increase was moderated by the shift of two counterterrorism programs from the Weapons Activities budget into NNSA’s separate Defense Nuclear Nonproliferation budget. The two counterterrorism programs that were moved out of the Weapons Activities budget together totaled approximately $8 billion. According to NNSA’s 2016 budget justification, this realignment is intended to provide greater clarity regarding the total funding and level of activity in the counterterrorism area. The realignment of these programs, along with other smaller decreases in the other weapons activities category, together accounted for an 18.1 percent decrease in the other weapons activities category during the 25-year period covered by the plan. Without the realignment of the two counterterrorism programs, the increase in NNSA’s overall Weapons Activities budget in the 2016 plan would have been considerably larger, totaling approximately $12.3 billion, or 4.2 percent, over the 2015 Weapons Activities budget. Table 2 details the changes in NNSA’s 25-year budget estimates from 2015 to 2016 for the four main areas in which modernization efforts are funded under Weapons Activities. In addition, budget estimates changed significantly for certain program areas and individual programs. Notably, the 2016 budget materials estimate that during the next 25 years, $117.2 billion will be needed for the stockpile area, which is an increase of $13.7 billion, or 13.2 percent, over the prior year’s budget materials. Part of this increase resulted from the addition of approximately $3 billion to support the Domestic Uranium Enrichment program, as well as increases in estimates for weapons refurbishment activities, particularly LEPs, as discussed later in this report. 
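The arithmetic behind these comparisons is simple enough to check directly. The following minimal Python sketch (not from the report; it uses the rounded totals quoted above, and the roughly $8 billion counterterrorism realignment is approximate) reproduces the reported percent changes:

```python
# A quick check of the nominal-dollar comparisons reported above.
# All figures are in billions of current (nominal) dollars, as NNSA
# reports them; the ~$8 billion for the realigned counterterrorism
# programs is approximate.

total_2015_plan = 293.4      # 25-year estimate in the FY2015 plan
total_2016_plan = 297.6      # 25-year estimate in the FY2016 plan
realigned_programs = 8.0     # counterterrorism programs moved to the
                             # Defense Nuclear Nonproliferation budget

def pct_change(old: float, new: float) -> float:
    """Percent change from old to new."""
    return (new - old) / old * 100.0

# Reported increase: ~$4.2 billion, or ~1.4 percent.
print(f"Increase: ${total_2016_plan - total_2015_plan:.1f}B "
      f"({pct_change(total_2015_plan, total_2016_plan):.1f}%)")

# Counterfactual without the realignment: keeping the ~$8B in Weapons
# Activities yields ~$12.2B with these rounded inputs; the report's
# ~$12.3B (4.2 percent) reflects unrounded underlying figures.
adjusted_2016 = total_2016_plan + realigned_programs
print(f"Without realignment: +${adjusted_2016 - total_2015_plan:.1f}B "
      f"({pct_change(total_2015_plan, adjusted_2016):.1f}%)")
```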
The 2016 budget materials indicate a decrease of approximately $1.8 billion for infrastructure activities during the next 25 years, compared with the 2015 estimates, in part because of reductions in recapitalization and site operation budget estimates. The 2016 budget materials increased proposed spending on research, development, testing, and evaluation activities by approximately $900 million during the same period. This increase resulted in part from an increase in estimates for the Inertial Confinement Fusion Ignition and High Yield program. Budget estimates in the Fiscal Year 2015 Stockpile Stewardship and Management Plan cover 2015 to 2039, while those in the 2016 plan cover 2016 to 2040. We compared the two sets of estimates by summing up the current dollar values for each, which is how NNSA reports the estimates. The total from the 2016 plan is different from the 2015 plan’s total in that the former includes the year 2040 and excludes the year 2015. Because of the effect of inflation, this comparison could make the difference between the 2016 projection and the 2015 projection appear higher than it would be in the case of a comparison of the two series in real dollar values or in a comparison that looks strictly at the years that overlap from each plan. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, estimates for some major modernization projects increased significantly from those in 2015. Specifically, regarding the weapons refurbishment efforts—which are captured within the stockpile category in the budget— the 2016 budget materials indicate that during the next 25 years, $49.8 billion will be needed to support LEPs and other weapons alteration activities, which is an increase of $8.2 billion, or 19.6 percent, compared with the prior year’s estimate of $41.7 billion. This increase resulted partly from the change in the scope and schedule for some programs, as discussed below. The W88 Alteration 370 effort expanded to include a conventional high explosive replacement while retaining the original schedule for a first production unit in 2020. To support this replacement, NNSA shifted planned spending for other programs—including $15.1 million originally planned for the W76-1 LEP—toward this effort. The Fiscal Year 2016 Stockpile Stewardship and Management Plan reported that the agency also shifted planned spending intended for surveillance of B61 and B83 bombs into the conventional high explosive replacement effort. The Fiscal Year 2016 Stockpile Stewardship and Management Plan estimated the total cost for the W88 Alteration 370 at $2 billion over the 25-year period covered by the plan, while the 2015 plan estimated the total cost at $1.2 billion, for an increase of approximately $0.8 billion. The cruise missile warhead LEP (renamed the W80-4 LEP) now has a first production unit planned for 2025—2 years earlier than the first production unit in the 2015 plan. This shift in schedule is intended to align with revised Air Force plans for the carrier missile. The Fiscal Year 2016 Stockpile Stewardship and Management Plan estimated the total cost for the LEP at $8.2 billion over 25 years, while the 2015 plan estimated the total cost at $6.8 billion, for an increase of approximately $1.5 billion. The Fiscal Year 2016 Stockpile Stewardship and Management Plan included a budget estimate for the B61-13 LEP that did not appear in the 2015 plan. 
This LEP, which NNSA officials stated is intended to replace the B61-12 LEP, is currently planned to begin in 2038, with an estimated cost of approximately $1.2 billion from 2038 through 2040. Budget estimates for the three interoperable warhead LEPs—the IW-1, 2, and 3—together accounted for an increase of $5.6 billion over 25 years when compared with the Fiscal Year 2015 Stockpile Stewardship and Management Plan budget estimates. According to the plan, this increase resulted from updated estimates developed through an expanded methodology that incorporated additional stakeholder input into the process that NNSA used to arrive at the estimates, and which resulted in a better understanding of schedule and cost uncertainty. NNSA officials stated that they continue to use stakeholder input to update and assess the cost estimate methodology. The budget estimates for the B61-12 and W76-1 LEPs together accounted for a decrease of almost $1 billion when compared with 2015 estimates. NNSA officials stated that this decrease is the result of the LEPs’ costs winding down as the programs come to an end. Table 3 shows the changes in budget estimates for the weapons refurbishment activities under way during the 25-year period covered by the Fiscal Year 2016 Stockpile Stewardship and Management Plan.

Milestone dates for most major modernization projects generally remained the same in the 2016 plan compared with the previous year. The 2010 Nuclear Posture Review included discussion of a number of planned major modernization efforts for NNSA, while other efforts have been identified in later versions of the Stockpile Stewardship and Management Plan and in the 2011 update to the DOD-DOE joint report. Table 4 shows key milestone dates for LEPs and major construction efforts as they have changed since 2010.

Estimates for the two major construction projects we reviewed—the Uranium Processing Facility and the Chemistry and Metallurgy Research Replacement construction project—either did not change or decreased, along with a recategorization of costs. These projects, included in the infrastructure category in NNSA’s budget materials, support NNSA’s uranium and plutonium strategies, respectively. The Uranium Processing Facility project budget line in the Fiscal Year 2016 Stockpile Stewardship and Management Plan stayed the same as reported in the 2015 plan, with a total estimated budget of $5.2 billion from 2015 through the project’s planned completion in 2025. The 2016 budget estimates for the Chemistry and Metallurgy Research Replacement construction project decreased, and in comparison to the 2015 budget materials, these estimates also shifted from one budget category to another. The Fiscal Year 2015 Stockpile Stewardship and Management Plan included a line for budget estimates for this project; however, the estimates were zero for each year except for 2012. The 2015 plan included budget estimates that totaled $3.1 billion in the program readiness subcategory under the infrastructure category, which NNSA officials stated were ultimately intended for the Chemistry and Metallurgy Research Replacement construction project. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA shifted $1.7 billion in planned spending out of program readiness and into the construction project’s line item, also under the infrastructure category. This shift appears to be an increase in the total amount for major construction activities in the 2016 budget materials. 
However, as noted above, the overall total for infrastructure declined slightly, in part because NNSA officials said that they determined that the remainder of the $3.1 billion from program readiness is not required to support the project. Nevertheless, the $1.7 billion reported in the Fiscal Year 2016 Stockpile Stewardship and Management Plan is $214 million lower than the total estimates that NNSA reported in its 2016 congressional budget justification, which included a more detailed construction project data sheet for the project. An NNSA official confirmed that this amount should have been included in the plan and its omission was the result of a data entry error. Consequently, the amount for the project in the construction line item should be approximately $1.9 billion. The Fiscal Year 2016 Stockpile Stewardship and Management Plan includes a goal to stop the growth of the agency’s deferred maintenance backlog. The plan notes that there has been limited availability for capital and maintenance funding in recent years, but NNSA officials stated that they are working to ensure that there is no increase in deferred maintenance relative to the level at the end of 2015. In August 2015, we found that NNSA’s infrastructure budget estimates were not adequate to address its deferred maintenance backlog and that the backlog would continue to grow. We recommended that in instances where budget estimates do not achieve DOE benchmarks for maintenance and recapitalization investment over the 5-year budget estimates, NNSA identify in the budget materials the amount of the shortfall and the effects, if any, on the deferred maintenance backlog. We also recommended that until improved data about the importance of facilities and infrastructure to mission are available, NNSA clarify in the budget materials for the Future-Years Nuclear Security Program the amount of the deferred maintenance backlog that is associated with facilities that have little to no effect on programmatic operations and is therefore low priority to be addressed. NNSA concurred with our recommendations. Specifically, NNSA agreed to include more information on maintenance, recapitalization, and deferred maintenance on excess facilities and stated that it will address them in the 2017 budget request or budget support materials as appropriate. Similarly, NNSA officials agreed that until improved data about the importance of facilities and infrastructure to the mission are available, they plan to clarify in the budget materials for the Future-Years Nuclear Security Program the amount of the deferred maintenance backlog that is associated with facilities that have little to no effect on programmatic operations and is therefore low priority to be addressed. The estimates in NNSA’s 2016 nuclear security budget materials may not align with plans for some major modernization efforts for several reasons. In particular, the Fiscal Year 2016 Stockpile Stewardship and Management Plan includes several major modernization efforts that may require more funding in some years than the plan reflects, raising questions about the alignment of NNSA’s modernization plans with potential future budgets. In addition, for some nuclear weapon refurbishment programs, the low end of NNSA’s internally developed cost ranges exceeds the estimates included in the budget materials. 
Further, some costs, such as those for certain infrastructure upgrades, are not included in NNSA’s budget estimates, and dependency on other NNSA programs could lead to increases in program costs. NNSA officials provided various reasons for the discrepancies, which they said could be addressed in future planning. The Fiscal Year 2016 Stockpile Stewardship and Management Plan’s estimates for Weapons Activities are $4.4 billion higher than the out-year projections for funding levels in the President’s budget provided in the DOD-DOE joint report. Specifically, for the years 2021 through 2025—the 5 years after the 2016 Future-Years Nuclear Security Program—the Fiscal Year 2016 Stockpile Stewardship and Management Plan’s Weapons Activities budget estimates total $56.6 billion. However, these budget estimates exceed a set of out-year projections for nuclear modernization and sustainment activities over the same time period. Specifically, the DOD-DOE joint report included additional information on out-year projections in the 2016 President’s budget for Weapons Activities through 2025. These out-year projections total $52.2 billion from 2021 to 2025, or $4.4 billion less than DOE’s budget estimates over the same time period (see table 5). This misalignment between the Fiscal Year 2016 Stockpile Stewardship and Management Plan and the estimates described as out-year projections in the President’s budget in the DOD-DOE joint report corresponds to a challenging period for NNSA modernization efforts, as the agency plans to simultaneously execute at least four LEPs along with several major construction projects, including efforts to modernize NNSA’s uranium and plutonium capabilities. The differences between these two sets of numbers raise questions about the alignment of NNSA’s modernization plans with potential future budgets. NNSA notes this issue in the Fiscal Year 2016 Stockpile Stewardship and Management Plan and states that it will need to be addressed as part of fiscal year 2017 programming. According to an NNSA official from the office that coordinated production of the Fiscal Year 2016 Stockpile Stewardship and Management Plan, the additional line of out-year projections in the 2016 President’s budget was included in the 2016 DOD-DOE joint report at the request of the Office of Management and Budget. This official told us that the out-year projections included in the DOD-DOE joint report represent DOE’s evaluation of what modernization activities will cost for these years based on current plans and available information. NNSA officials also stated that the President’s budget information was included in the 2016 DOD-DOE joint report to show that the administration has not yet agreed to fund these activities beyond the Future-Years Nuclear Security Program at the level reflected in NNSA’s budget estimates. In addition, NNSA officials stated that there is a high level of uncertainty in the budget estimates beyond the Future-Years Nuclear Security Program, which makes planning beyond 5 years difficult. On the basis of our analysis of NNSA’s internally developed cost ranges for certain major weapon modernization efforts, we found that the low end of these ranges sometimes exceeded the estimates that NNSA included for those programs in its budget materials. 
We analyzed NNSA’s budget estimates for nuclear weapon refurbishments over the 25 years covered in the Fiscal Year 2016 Stockpile Stewardship and Management Plan—the W76-1, the B61-12, the B61-13, the W80-4, and the IW-1, 2, and 3 LEPs, as well as the W88 Alteration 370. The Directed Stockpile Work category in the plan and in the 2016 Future-Years Nuclear Security Program contains detailed budget information on weapon refurbishment efforts that includes specific budget estimates for each effort as well as high and low cost ranges that NNSA developed for them. For each effort, we assessed the extent to which the budget estimates aligned with its high-low cost estimates. Specifically, we examined instances where the low end of the cost range estimates was greater than the budget estimates.

We found that the annual budget estimates are generally consistent with NNSA’s internal cost estimates; that is, in most years, the annual budget estimates for each weapon refurbishment effort fall within the high and low cost ranges that NNSA developed for each program. However, in some years, NNSA’s budget estimates for some refurbishment efforts may not align with modernization plans. Specifically, for some years, the low end of cost ranges that NNSA developed for some LEPs exceeds the budget estimates. This indicates potential misalignment between plans and budget estimates for those programs in those years, or the possible need for NNSA to increase budget estimates for those programs in the future. For instance, see the following:

- The B61-12 LEP’s budget estimates during the 5-year period covered by the Future-Years Nuclear Security Program align with plans. However, the low cost range estimate of $195 million for the final year of production in 2025 exceeds the budget estimate of $64 million. NNSA officials said that this difference is not a concern because this misalignment occurs during the final year of the LEP effort and this estimate may overstate costs for the end of the B61-12 program.
- The W88 Alteration 370’s low cost range estimate exceeds its budget estimate for 2020. The budget materials report that the program’s budget estimate that year is $218 million; however, the low point of the cost range is $247 million. NNSA officials stated that this is not a concern because there is flexibility to address possible misalignments in future programming cycles. NNSA officials also stated that the total estimates for this program are above the total of the midpoint cost estimates for 2016 through 2020 and that funding for 2016 to 2019 is fungible and could be carried over to cover any potential shortfall in 2020.
- The W80-4 LEP’s low cost range estimate of $476 million exceeds its budget estimate of $459 million for 2020. NNSA officials stated that because the budget estimates for this LEP are above the low point of its estimated cost range during other years, the misalignment in 2020 represents a small incongruity in an otherwise sound LEP profile.
- The budget estimates for the IW-1 LEP are within the high and low estimated cost ranges for most years. However, the IW-1’s low cost range estimate of $175 million exceeds its budget estimate of $113 million in 2020, which is its first year of funding. NNSA officials said that by shifting funding projected for 2021 to 2020, the IW-1 budget estimates would still be within the cost ranges. 
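The comparison described above reduces to a year-by-year check of each program's budget estimate against the low end of its cost range. The sketch below is illustrative only; it includes just the specific instances cited in this section (figures in millions of dollars), not the full 25-year profiles, which appear in the plan:

```python
# Illustrative version of the check described above: flag any year in
# which a program's budget estimate falls below the low end of NNSA's
# internally developed cost range. Only the instances cited in this
# report are included (figures in millions of dollars).

# program -> {year: (budget_estimate, low_end_of_cost_range)}
reported_instances = {
    "B61-12 LEP": {2025: (64, 195)},
    "W88 Alteration 370": {2020: (218, 247)},
    "W80-4 LEP": {2020: (459, 476)},
    "IW-1 LEP": {2020: (113, 175)},
}

for program, years in reported_instances.items():
    for year, (budget, low_end) in sorted(years.items()):
        if budget < low_end:
            print(f"{program}, {year}: budget ${budget}M is "
                  f"${low_end - budget}M below the low-end estimate "
                  f"of ${low_end}M")
```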
For the W76-1 LEP, we compared the budget estimates in the 2016 Future-Years Nuclear Security Program and the Fiscal Year 2016 Stockpile Stewardship and Management Plan with internal cost estimates NNSA developed for the LEP. We found that the budget estimates for all years within the Future-Years Nuclear Security Program, except for 2018, are below NNSA’s internal cost estimates for that program, raising questions about whether the budget for the LEP is aligned with anticipated costs. According to NNSA officials, the W76-1 LEP is nearing completion, and the model used to develop internal cost estimates for the W76-1 is predicting the LEP’s end-of-program costs in a way that may not reflect the rate at which the program winds down. For more information on the LEPs and their budget estimates and cost ranges in the Fiscal Year 2016 Stockpile Stewardship and Management Plan, see appendix II.

NNSA officials stated that the intent in providing budget estimates and cost range estimates for each weapon refurbishment effort is to show general agreement between the two sets of estimates. Notwithstanding the differences we identified between budget estimates and low-end cost range estimates for certain efforts in certain years, NNSA officials stated that the budget estimates and the cost range estimates are in general agreement for each LEP and alteration in terms of total costs and trend. In addition, NNSA officials stated that there is some flexibility in the funding for these efforts, and that the programs may carry over some funds from one year to the next if needed to cover costs, depending on the reason for the misalignment, among other things.

In our August 2015 report on NNSA’s nuclear security budget materials, we found that not including information identifying potential misalignments between LEP budget estimates and internal cost estimates can pose risks to the achievement of program objectives and goals, such as increases in program costs and schedule delays. NNSA agreed with our recommendation from that report to provide more transparency with regard to shortfalls in its budget materials. Specifically, NNSA said that it would include, as appropriate, statements in future Stockpile Stewardship and Management Plans on the effect of funding an LEP effort at less than suggested by a planning estimate cost range. NNSA officials also said that the agency plans to incorporate this recommendation, among others, into its 2017 budget materials.

We identified instances where certain modernization costs were not included in budget estimates or may be underestimated. For example, see the following: The budget estimates for the W88 Alteration 370 with a conventional high explosive replacement—or “refresh”—are understated, according to NNSA officials. The budget estimates for the refresh reported in the 2016 budget materials are roughly $300 million less than the refresh requires. Officials told us that the initial budget planning for the refresh contained a cost of approximately $500 million. However, NNSA found that this estimate was incorrect and increased it to approximately $800 million. NNSA officials stated that this project is still in the process of establishing a new, official baseline, which officials expect to complete in 2016. 
The 2016 budget materials may not contain all necessary costs for NNSA’s efforts to maintain its enriched uranium capability, which include relocating select operations performed in Building 9212 to other existing buildings and constructing a series of smaller buildings. Specifically, NNSA officials stated that the budget estimates in the 2016 budget materials for these efforts do not include the costs associated with infrastructure upgrades (such as ceiling repairs and heating, air conditioning, and other control systems) in two existing buildings at the Y-12 site. NNSA officials stated that the scope to maintain operations in the existing facilities is being developed and prioritized into a multiyear effort among multiple programs, separate from the Uranium Processing Facility project. According to another NNSA official, these costs were still under development, but the official estimated that the upgrades may cost tens of millions of dollars for each building.

The costs of the plutonium infrastructure strategy—in which NNSA is currently preparing to move analytical chemistry and materials characterization capabilities into existing facilities as part of the Chemistry and Metallurgy Research Replacement construction project while also considering constructing new modular buildings under a separate project—are also uncertain and possibly underestimated. This uncertainty arises because NNSA has not yet determined the number of additional modular buildings that may be required, although the Fiscal Year 2016 Stockpile Stewardship and Management Plan calls for at least two. NNSA officials also stated that estimated costs for these efforts have not yet been baselined and that the cost of such a project cannot be estimated with any certainty until it has proceeded further into the planning process and established a baseline.

In addition to some costs not being included in budget estimates, the estimates for some NNSA modernization efforts could increase in the future because of their dependency on successful execution of other NNSA programs. Specifically, NNSA managers for the LEPs stated that some of these programs could incur future cost increases or schedule delays because of other NNSA programs supporting the LEPs. For instance, NNSA officials told us that the W80-4 LEP will require a new insensitive high explosive to support the system. This is because the B61-12 LEP is consuming the currently available stocks of insensitive high explosive. As a result, NNSA is developing a new insensitive high explosive to meet the needs of the W80-4 LEP. However, NNSA officials told us that the performance of the new explosive currently being produced is not comparable to the quality of existing explosive being consumed by the B61-12 LEP. Consequently, these officials stated that the costs of the W80-4 LEP could rise because of additional funding that may be required to further develop the new explosive. The Fiscal Year 2016 Stockpile Stewardship and Management Plan notes that as design options are down-selected, the budget estimate for the W80-4 may shift in response. An NNSA official also stated that the IW-1 LEP budget estimates in the 2016 budget materials are predicated on NNSA successfully modernizing its plutonium pit production capacity. The official stated that if there are delays in the current plutonium infrastructure strategy, the IW-1 LEP will bear costs that are greater than currently estimated to produce the number of additional plutonium pits it needs to support the program. 
The Fiscal Year 2016 Stockpile Stewardship and Management Plan notes that estimates for programs in their earlier stages, such as the IW-1 LEP, are subject to uncertainty. We previously found that NNSA has experienced significant cost increases and schedule delays in its earlier strategies to modernize its plutonium pit production support facilities at Los Alamos National Laboratory. We have ongoing work examining the Chemistry and Metallurgy Research Replacement construction project in more detail. We provided a draft of this report to DOE and NNSA for their review and comment. NNSA provided written comments, reproduced in appendix III, in which it stated that it will continue to enhance information on potential funding levels in future budget supporting materials. NNSA also provided technical comments separately, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of NNSA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to assess (1) the extent to which the National Nuclear Security Administration’s (NNSA) budget estimates and plans for modernization activities reflected in its fiscal year 2016 nuclear security budget materials differ, if at all, from those in its fiscal year 2015 budget materials and (2) the extent to which the fiscal year 2016 nuclear security budget materials align with modernization plans as presented in the Stockpile Stewardship and Management Plan. We limited the scope of our review to NNSA’s Weapons Activities appropriations account, because NNSA’s activities in the Stockpile Stewardship and Management Plan are funded by this account. This scope is consistent with that of our August 2015 review. We focused our review on major modernization efforts—that is, the refurbishment of nuclear weapons through life extension programs (LEP) and alterations and major construction efforts to replace existing, aging facilities for plutonium and uranium. The budget projections in the 2015 and 2016 Stockpile Stewardship and Management Plans each contain budget dollar figures for 25 years, presented in current dollar values. Our report presents all figures in current, or nominal, dollars, which include projected inflation, unless otherwise noted. Further, all years noted in our report refer to fiscal years, unless otherwise noted. To determine the extent to which NNSA’s budget estimates and plans for modernization activities differed from those in the 2015 nuclear security budget materials, we compared the information in the 2016 materials with the information in the 2015 materials. NNSA’s nuclear security budget materials are composed of two key policy documents that are issued annually: the agency’s budget justification, which contains estimates for the 5-year Future-Years Nuclear Security Program, and the Stockpile Stewardship and Management Plan, which provides budget estimates over the next 25 years. 
Specifically, we (1) compared differences between the 2016 and 2015 budget materials in the four broad modernization areas—stockpile; infrastructure; research, development, testing, and evaluation; and other weapons activities—and (2) compared differences between the 2016 and 2015 budget materials for specific weapons refurbishment activities and major construction projects. We interviewed knowledgeable officials from NNSA about changes we identified between the 2016 and 2015 budget materials. We also reviewed a third, integrated document on plans for the nuclear deterrent that includes information on the Department of Defense (DOD) and Department of Energy’s (DOE) modernization budget estimates. This annual report that DOD and DOE are required to submit jointly to the relevant Senate and House committees and subcommittees is referred to as the section 1043 report; in our report, we refer to it as the DOD-DOE joint report. We compared the information in the 2016 DOD-DOE joint report with that in the Fiscal Year 2016 Stockpile Stewardship and Management Plan. To determine the extent to which NNSA’s budget materials align with its modernization plans, we compared information on the budget estimates in the 2016 budget materials with the information on modernization plans in the materials as well as the DOD-DOE joint report, reviewed prior GAO reports to provide context for the concerns we identified, and interviewed NNSA officials to obtain further information on changes to modernization plans and discussed any perceived misalignments with them. For weapons refurbishment efforts under way during the 25 years covered by the Fiscal Year 2016 Stockpile Stewardship and Management Plan, we analyzed NNSA’s budget estimates for all those to be conducted over the 25-year period by comparing them against NNSA’s internally developed cost ranges for each LEP. According to DOE officials, for all LEPs besides the W76-1, DOE uses two different approaches to estimate the costs of LEPs. Under the first approach, according to officials, DOE develops specific budget estimates by year through a “bottom-up” process. DOE officials describe this as a detailed approach to developing the LEP budget estimates, which, among other things, integrates resource and schedule information from site participants. Under the second approach, which DOE refers to as a “top-down” process, DOE uses historical LEP cost data and complexity factors to project high and low cost ranges for each LEP distributed over the life of the program using an accepted cost distribution method. Officials noted that the values in these cost ranges reflect idealized funding profiles and do not account for the practical constraints of the programming and budgeting cycle. For the W76-1 LEP, DOE has developed specific budget estimates by year. Because the W76-1 LEP is the basis of DOE’s top-down model, DOE does not develop high and low cost ranges for it. Instead, DOE published the W76-1 LEP estimates in the Fiscal Year 2016 Stockpile Stewardship and Management Plan as a comparison between the Future-Years Nuclear Security Program request and a single LEP model line. For the W76-1 LEP, we compared the budget estimates with the LEP model line. For all LEPs besides the W76-1, we assessed the extent to which the specific bottom-up budget estimates were aligned with the high-low cost ranges developed through the top-down model. 
Specifically, we examined where the specific budget estimates were under the low end of the cost range predicted by the top-down model. We did this by reviewing charts in the Fiscal Year 2016 Stockpile Stewardship and Management Plan and the underlying data for those charts. In instances where the low cost range exceeded the budget estimates, we followed up with NNSA officials for additional information. To assess the reliability of the data underlying NNSA’s budget estimates, we reviewed the data to identify missing items, outliers, or obvious errors; interviewed NNSA officials knowledgeable about the data; and compared the figures in the congressional budget justification with those in the Fiscal Year 2016 Stockpile Stewardship and Management Plan to assess the extent to which they were consistent. We determined that the data were sufficiently reliable for our purposes, which were to report the total amount of budget estimates and those estimates dedicated to certain programs and budgets and to compare them to last year’s estimates. We conducted this performance audit from May 2015 to March 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The National Nuclear Security Administration (NNSA) has developed budget estimates for its nuclear weapons life extension programs (LEP) and major alterations: the B61-12, the W76-1, the W80-4, the IW-1, the IW-2, the IW-3, and the B61-13 LEPs, as well as for the W88 Alteration 370. The estimates include NNSA’s internally developed high and low cost ranges for each program. The budget estimates appear as bars for each year, while the high and low cost ranges are represented by lines across the figures. The following figures present budget estimates for each LEP and alteration. Similar figures also appear in the Fiscal Year 2016 Stockpile Stewardship and Management Plan. B61-12: The B61 bomb is one of the oldest nuclear weapons in the stockpile. The B61-12 LEP will consolidate and replace the B61-3, -4, -7, and -10 bombs. According to the Fiscal Year 2016 Stockpile Stewardship and Management Plan, this consolidation will enable a reduction in the number of gravity bombs, which is consistent with the objectives of the 2010 Nuclear Posture Review. The first production unit of the B61-12 is planned for 2020; the program is scheduled to end in 2026. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA estimates that the B61-12 LEP will require a total of $5.7 billion from 2016 to 2026. See figure 1 for an illustration of budget estimates against projected cost ranges. W76-1: The W76 warhead was first introduced into the stockpile in 1978 and is deployed with the Trident II D5 missile on the Ohio-class nuclear ballistic missile submarines. The W76-1 LEP is intended to extend the original warhead service life and address aging issues, among other things. The first production unit was completed in September 2008, and the program will end in calendar year 2020. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA estimates that approximately $847 million will be required for this program from 2016 to 2021. 
See figure 2 for an illustration of budget estimates against projected cost ranges.

W80-4: The W80-4 LEP is intended to provide a warhead for a future long-range standoff missile that will replace the Air Force’s current air-launched cruise missile. The first production unit is planned for 2025, and the program is scheduled to end in 2032. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA estimates that the W80-4 LEP will require approximately $8.2 billion from 2016 to 2032. See figure 3 for an illustration of budget estimates against projected cost ranges.

W88 Alteration 370: Among other things, the W88 Alteration 370 will replace the arming, fuzing, and firing subsystem for the W88 warhead, which is deployed on the Navy’s Trident II D5 submarine-launched ballistic missile system. In November 2014, the Nuclear Weapons Council decided to replace the conventional high explosive main charge, which led to an increase in costs for the alteration. The first production unit is planned for 2020, and the program is scheduled to end in 2026. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA estimates that the program will require a total of $2 billion from 2016 to 2026. See figure 4 for an illustration of budget estimates against projected cost ranges.

IW-1: The IW-1, also known as the W78/88-1, is the first ballistic missile warhead LEP in NNSA’s interoperable strategy to transition the stockpile to three interoperable ballistic missile warheads and two air-delivered warheads. The first production unit is planned for 2030; the 2016 budget materials do not report an end date for the LEP. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA estimates that the program will require a total of $13.4 billion from 2020 to 2040. See figure 5 for an illustration of budget estimates against projected cost ranges.

IW-2: The IW-2 is an interoperable warhead intended to replace the W87/88 warhead. The Nuclear Weapons Council has not yet developed a more detailed implementation plan for this LEP. The first production unit is planned for 2034; the Fiscal Year 2016 Stockpile Stewardship and Management Plan does not contain a projected end date. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA estimates that the program will require a total of $12.1 billion from 2023 to 2040. See figure 6 for an illustration of budget estimates against projected cost ranges.

IW-3: The IW-3 is intended to provide the third interoperable warhead for NNSA’s future strategy for the stockpile. The first production unit is not yet specified, and there is not yet a budgeted end date. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA estimates that a total of $6.3 billion will be required for this program from 2030 to 2040. See figure 7 for an illustration of budget estimates against projected cost ranges.

B61-13: According to NNSA officials, the B61-13 LEP is intended to replace the B61-12 bomb. The first production unit is not yet specified, and there is not yet a budgeted end date. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA estimates that a total of $1.2 billion will be required for this program from 2038 to 2040. See figure 8 for an illustration of budget estimates against projected cost ranges. 
In addition to the contact named above, William Hoehn (Assistant Director), Antoinette Capaccio, Pamela Davidson, Philip Farah, Bridget Grimes, Carol Henn, Aaron Karty, and Cynthia Norris made key contributions to this report.

Nuclear weapons are an integral part of the nation's defense strategy. Since 1992, the United States has shifted from producing new nuclear weapons to maintaining the stockpile through refurbishment. The 2010 Nuclear Posture Review—which outlines U.S. nuclear policy, strategy, capabilities, and force posture—identified long-term stockpile modernization goals for NNSA that include sustaining a safe, secure, and effective nuclear arsenal and investing in a modern infrastructure. The National Defense Authorization Act for Fiscal Year 2011 included a provision for GAO to report annually on NNSA's nuclear security budget materials. These materials are composed of NNSA's budget request justification and its Stockpile Stewardship and Management Plan, which describes modernization plans and budget estimates for the next 25 years. This report assesses (1) changes in the estimates in the 2016 budget materials from the prior year's materials and (2) the extent to which NNSA's 2016 budget materials align with plans for major modernization efforts. GAO analyzed NNSA's fiscal year 2015 and 2016 nuclear security budget materials, which describe modernization plans and budget estimates for the next 25 years. GAO also interviewed NNSA officials.

In the National Nuclear Security Administration's (NNSA) fiscal year 2016 budget materials, the estimates for efforts related to modernizing the nuclear weapons stockpile total $297.6 billion for the next 25 years—an increase of $4.2 billion (1.4 percent) in nominal dollar values (as opposed to constant dollar values) compared with the prior year's budget materials. However, for certain program areas and individual programs, budget estimates changed more significantly than the overall estimates. NNSA's modernization efforts occur in four areas under the Weapons Activities appropriation account: stockpile; infrastructure; research, development, testing, and evaluation; and other weapons activities. For the stockpile area, budget estimates over 25 years increased by 13.2 percent over the nominal values in the Fiscal Year 2015 Stockpile Stewardship and Management Plan. Within the stockpile area, the estimates for life extension programs (LEP), which refurbish nuclear weapons, increased by 19.6 percent compared with the prior year's estimate, in part because of changes in the scope and schedule for some programs. In contrast, estimates for the other weapons activities area decreased by 18.1 percent, mainly because NNSA shifted two counterterrorism programs out of the Weapons Activities budget and into NNSA's separate Defense Nuclear Nonproliferation budget.

The estimates in NNSA's 2016 nuclear security budget materials may not align with all elements of modernization plans for several reasons. First, the Fiscal Year 2016 Stockpile Stewardship and Management Plan includes estimates for 2021 through 2025 that are $4.4 billion higher than the same time period in a set of out-year projections for funding levels that were included in a joint report by the Department of Defense and Department of Energy. NNSA noted this issue in the 2016 plan and stated that it will need to be addressed as part of fiscal year 2017 programming. 
In addition, in some years, NNSA's budget estimates for certain weapons refurbishment efforts are below the low point of the programs' internally developed cost ranges. For example, the W88 Alteration 370 budget estimate of $218 million for 2020 was below the low end of the internal program cost range of $247 million. NNSA officials stated that the total estimates for this program are above the total of the midpoint cost estimates for 2016 through 2020 and that funding for 2016 to 2019 is fungible and could be carried over to cover any potential shortfall in 2020. GAO also identified instances where certain modernization costs were not included in the estimates or may be underestimated, or where budget estimates for some efforts could increase due to their dependency on successful execution of other NNSA programs. For example, an NNSA official said that budget estimates for the IW-1 LEP—which is NNSA's first interoperable ballistic missile warhead LEP—are predicated on NNSA successfully modernizing its plutonium pit production capacity. This official stated that if there are delays in modernizing this capacity, the IW-1 LEP could bear greater costs than currently estimated. In August 2015, GAO recommended that NNSA provide more transparency with regard to shortfalls in its budget materials. NNSA agreed and said that it plans to implement this recommendation starting in its 2017 budget supporting documents. GAO is not making any new recommendations in this report. In response to GAO's draft report, NNSA provided technical comments, which were incorporated as appropriate.
Under TRICARE, beneficiaries have choices among various benefit options and may obtain care from either military treatment facilities (MTF) or civilian providers. When nonenrolled beneficiaries receive care from civilian providers, they have the option of seeing either network or nonnetwork providers. The National Defense Authorization Act for Fiscal Year 2008 (NDAA 2008) directed DOD to conduct surveys of beneficiaries and civilian providers to assess nonenrolled beneficiaries’ access to care.

TRICARE provides benefits through several basic options for its non-Medicare-eligible beneficiary population. These options vary by enrollment requirements, choices in civilian and military treatment facility providers, and the amount beneficiaries must contribute toward the cost of their care. Table 1 provides a summary of some of these benefit options.

Claims data from fiscal years 2008 to 2011 show that the percentages of outpatient claims paid for TRICARE Prime and TRICARE Reserve Select (TRS) have gradually increased, while the percentage of claims paid for TRICARE Standard has declined. (See fig. 1.) The percentage of claims paid for TRICARE Extra has remained steady over the same period.

Starting on September 30, 2013, the number of TRICARE Prime Service Areas (PSA) will be reduced, and as a result, the TRICARE Prime option will be available to fewer beneficiaries. The targeted PSAs are those that are not in close proximity to existing MTFs or Base Realignment and Closure (BRAC) locations and will predominantly affect retirees and their dependents. According to a TRICARE Management Activity (TMA) official, this change is expected to affect about 171,000 retirees and dependents (37,000 in the North region, 36,000 in the West region, and 98,000 in the South region), with an estimated savings to DOD of $45 million to $56 million annually.

In fiscal year 2011, TMA identified about 2 million nonenrolled beneficiaries (approximately one-fourth of the total eligible TRICARE population), who fell into three main categories: (1) retirees and their dependents or survivors, (2) active duty dependents, and (3) National Guard and Reserve servicemembers and their dependents. (See fig. 2.)

Overall, during 2008-2011, an estimated one in three nonenrolled beneficiaries (about 31 percent) experienced problems finding any type of civilian provider—primary, specialty, or mental health care provider—who would accept TRICARE. Specifically:

- an estimated 25 percent of nonenrolled beneficiaries experienced problems finding a civilian primary care provider;
- an estimated 25 percent of nonenrolled beneficiaries experienced problems finding a civilian specialty care provider; and
- an estimated 28 percent experienced problems accessing a civilian mental health care provider.

Overall, access to civilian primary care and specialty care providers differed for nonenrolled beneficiaries located in PSAs compared to those in non-PSAs. Specifically, we found that more nonenrolled beneficiaries in PSAs experienced problems finding civilian primary care and specialty care providers compared to those in non-PSAs. (See fig. 6.) However, access to civilian mental health care providers did not differ for nonenrolled beneficiaries in PSAs and non-PSAs. TMA also surveyed beneficiaries in Hospital Service Areas (HSA) in response to access concerns about these specific areas. We found that more nonenrolled beneficiaries in HSAs experienced problems accessing civilian specialty care than those in the areas outside of the surveyed HSAs. (See fig. 7.) 
However, there were no statistical differences in the estimated percentages of nonenrolled beneficiaries who experienced problems finding civilian primary or mental health care providers between the HSAs and the locations surveyed outside of these areas. The top two reasons reported by nonenrolled beneficiaries—regardless of type of care—for why they believed they experienced problems accessing a provider included “doctors not accepting TRICARE payments” and “doctors not accepting new TRICARE patients.” (See fig. 8.) Our analysis of the 4-year survey data showed that nonenrolled beneficiaries’ ratings for specific satisfaction measures were similar when compared between PSAs and non-PSAs, and between surveyed HSAs and the areas outside of the surveyed HSAs. Specifically, our analysis of beneficiaries’ ratings for four measures—satisfaction with primary care providers, specialty care providers, health care, and health plan— indicated no substantial differences between area types. For example, we found that about 80 percent of nonenrolled beneficiaries in both PSAs and non-PSAs rated their primary care provider as an 8 or higher on a scale from 0 to 10. Nationwide, an estimated 82 percent of civilian providers indicated they were aware of the TRICARE program, but only an estimated 58 percent were accepting new TRICARE patients, according to our analysis of the 2008 through 2011 civilian provider survey results. When compared to a national provider survey, civilian providers’ acceptance of new TRICARE patients was less than providers’ acceptance of other types of beneficiaries. Specifically, a survey of physicians in 2008 by the Center for Studying Health System Change found that about 96 percent of physicians accepted new commercially insured beneficiaries, about 86 percent accepted new Medicare beneficiaries, and about 72 percent accepted new Medicaid beneficiaries. According to the TRICARE survey results, when asked the reasons for not accepting new TRICARE patients, the most-cited category by those civilian providers who were not accepting any new TRICARE patients was that the provider “was not aware of the TRICARE program/not asked/don’t know about TRICARE.” (See fig. 10 for the top 7 categories of reasons for why civilian providers were not accepting new TRICARE patients.) Additionally, while nonenrolled beneficiaries cited that providers were not accepting TRICARE for payment as the top reason why any providers were unwilling to accept them as patients, the providers cited it as the third highest reason in addition to “don’t know/no answer.” When we compared the results of TMA’s 2008-2011 civilian provider survey (excluding nonphysician mental health providers) to the results of its 2005-2007 civilian physician survey, we found that although civilian physicians’ awareness has increased over time, their acceptance of new TRICARE patients has decreased over time. This was also true whether they were accepting any new patients or new Medicare patients. For example, civilian physicians’ acceptance of any new TRICARE patients has decreased from about 76 percent in 2005-2007 to an estimated 70 percent in 2008-2011. (See fig. 11.) When analyzed further by provider type, we found that civilian primary and specialty care providers had higher awareness and acceptance of TRICARE than civilian mental health care providers. (See fig. 12.) 
Specifically, only an estimated 39 percent of civilian mental health providers were accepting new TRICARE beneficiaries, compared to an estimated 67 percent of civilian primary care providers and an estimated 77 percent of civilian specialty care providers. The categories of reasons cited for not accepting new TRICARE patients also differed by provider type. For example, civilian mental health care providers more often cited “not aware of TRICARE/not asked/don’t know about TRICARE” than civilian primary or specialty care providers. Additionally, the top category of reasons cited by civilian primary care providers was that they were “not accepting patients,” while the top category of reasons cited by specialty providers was “reimbursement.” (See fig. 13 for the top categories of reasons for civilian providers not accepting new TRICARE patients, by provider type.)

We also found that providers’ awareness and acceptance of TRICARE differed by type of area. Similar to TMA’s nonenrolled beneficiary survey, which showed that nonenrolled beneficiaries in PSAs generally experienced more problems finding providers than their counterparts in non-PSAs, our analysis of the 2008 through 2011 civilian provider survey indicated that civilian providers in PSAs were less aware of TRICARE and less accepting of new TRICARE patients than civilian providers in non-PSAs. Specifically, an estimated 81 percent of civilian providers in PSAs were aware of the TRICARE program, compared to an estimated 87 percent of civilian providers in non-PSAs, and an estimated 56 percent of civilian providers in PSAs were accepting any new TRICARE patients, compared to an estimated 66 percent of those providers in non-PSAs. (See fig. 14.)

An analysis of the collective results of the multiyear beneficiary and civilian provider surveys indicated particular geographic areas where nonenrolled beneficiaries are experiencing considerable access problems. These locations are defined as areas where (1) the percentage of nonenrolled beneficiaries who experienced difficulties finding a civilian provider was at least the national estimate and (2) the percentage of civilian providers who were accepting any new TRICARE patients was at or below the national estimate. Using these criteria, we identified a number of areas where beneficiaries were having access problems, mostly in Texas. (See app. IV for detailed information about these areas and how they were determined.)

In determining areas where nonenrolled beneficiaries were experiencing access problems to any type of civilian provider, we first identified 24 individual areas (out of the 215 individual areas surveyed by the 2008-2011 beneficiary surveys) where the estimated percentage of nonenrolled beneficiaries who experienced difficulties finding any type of civilian provider met or exceeded the national estimate (31 percent). Of these, we identified 2 PSAs where the estimated percentage of civilian providers who were accepting any new TRICARE patients was at or below the national estimate (58 percent)—Central/Southern-Central Coastal California and Northeastern Texas. Additionally, we identified 2 HSAs that also met these criteria, one of which is contained within the Northeastern Texas PSA. Table 4 shows each of these areas with the estimated percentage of (1) nonenrolled beneficiaries who experienced problems finding any type of civilian provider and (2) civilian providers who were accepting any new TRICARE patients. 
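The screening logic just described amounts to a two-condition filter over the surveyed areas. The following Python sketch illustrates it; the national estimates (31 and 58 percent) come from the surveys, but the per-area percentages are placeholders, since the report presents only which areas met the criteria:

```python
# Sketch of the two-part screen described above. The national estimates
# are from the surveys; the per-area figures below are placeholders.

NATIONAL_DIFFICULTY = 31.0   # % of beneficiaries with access problems
NATIONAL_ACCEPTANCE = 58.0   # % of providers accepting new TRICARE patients

# (area, % beneficiaries with difficulty, % providers accepting)
areas = [
    ("Central/Southern-Central Coastal California", 33.0, 50.0),  # placeholders
    ("Northeastern Texas", 34.0, 52.0),                           # placeholders
    ("Example area without access problems", 28.0, 70.0),         # placeholders
]

flagged = [
    name
    for name, difficulty, acceptance in areas
    if difficulty >= NATIONAL_DIFFICULTY and acceptance <= NATIONAL_ACCEPTANCE
]
print(flagged)  # -> the two qualifying areas under these placeholder values
```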
For the overlapping PSA and HSA (Northeastern Texas and Dallas/Fort Worth), we found that although a high percentage of civilian providers were accepting new patients (between 95 and 97 percent), only about half of these providers were accepting any new TRICARE patients. (See table 5.) For the remaining PSA (Central/Southern-Central Coastal California) and HSA (Austin, Texas), between 92 and 98 percent of civilian providers were accepting new patients, and less than half of those providers were accepting any new TRICARE patients. Further, of the civilian providers in all of these areas who were accepting new Medicare patients, between 65 and 70 percent were also accepting any new TRICARE patients. Reimbursement was the most cited reason for providers not accepting new TRICARE patients in all of these areas except the PSA in California, for which "not aware of the TRICARE program" was the most cited reason.

When analyzing these data by type of provider (primary care, specialty, and mental health), we found four areas where the percentage of civilian primary care providers who were accepting any new TRICARE patients was at or below the national estimate, but we did not find similarly low-percentage areas for civilian specialty care providers. Because of the low numbers of survey responses, we are unable to report survey results for access problems to civilian mental health care providers.

In determining areas where nonenrolled beneficiaries experienced access problems to civilian primary care providers, we first identified 21 individual areas where the estimated percentage of nonenrolled beneficiaries who experienced difficulties finding a civilian primary care provider met or exceeded the national estimate (25 percent). Of these, we identified 2 PSAs where the estimated percentage of civilian primary care providers who were accepting any new TRICARE patients was at or below the national estimate (67 percent)—Northeastern Texas and Eastern-Central Texas. We also identified 2 HSAs that met these criteria, each of which was contained in one of the PSAs we identified. Table 6 shows each of these areas with the estimated percentage of (1) nonenrolled beneficiaries who experienced problems finding a civilian primary care provider and (2) civilian primary care providers who were accepting any new TRICARE patients.

As we similarly found in the areas where nonenrolled beneficiaries were having access problems for any type of civilian provider, we found that between 94 and 97 percent of civilian primary care providers in the Northeastern Texas PSA/Dallas/Fort Worth HSA and the Eastern-Central Texas PSA/Austin, Texas, HSA were accepting new patients, but only around half of them were accepting new TRICARE patients. (See table 7.) Further, of the civilian primary care providers in the two PSAs who were accepting new Medicare patients, between 59 and 68 percent were accepting any new TRICARE patients. Reimbursement was the most cited reason by civilian primary care providers for not accepting any new TRICARE patients in each of these areas except for the Dallas/Fort Worth, Texas, HSA, for which "don't know/no answer" was the most cited reason.

In determining areas where nonenrolled beneficiaries were experiencing access problems to civilian specialty care providers, we first identified nine individual areas where the estimated percentage of nonenrolled beneficiaries who experienced difficulties finding a civilian specialty care provider met or exceeded the national estimate (25 percent).
Unlike the collective results for "any civilian provider" and "civilian primary care providers," when we examined civilian specialty care providers' responses for these areas, we did not identify any geographic areas where the estimated percentage of civilian specialty care providers who were accepting any new TRICARE patients was at or below the national estimate (77 percent) when accounting for the margins of error at the 95 percent confidence limit. For the nine areas where the estimated percentage of beneficiaries who experienced difficulties finding a civilian specialty care provider met or exceeded the national estimate, the percentage of civilian specialty care providers who were accepting new TRICARE patients ranged from 75 to 86 percent.

Because of the low numbers of survey responses for beneficiaries who said they needed civilian mental health care, we are unable to report correlated survey results for access problems to civilian mental health care providers. However, given the limited supply of mental health providers and the survey results showing that only an estimated 39 percent of civilian mental health care providers were accepting new TRICARE patients, access to mental health care providers is a concern for all TRICARE beneficiaries, including those who use the TRICARE Standard and Extra options.

In reviewing a draft of this report, DOD concurred with our overall findings and provided technical comments, which we incorporated where appropriate. (See app. VI.)

We are sending copies of this report to the Secretary of Defense and appropriate congressional committees. The report is also available at no charge on GAO's website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix VII.

The National Defense Authorization Act for Fiscal Year 2008 (NDAA 2008) directed the Department of Defense (DOD) to determine the adequacy of the number of health care and mental health care providers that currently accept nonenrolled beneficiaries as patients under TRICARE, DOD's health care program. We use the term "nonenrolled beneficiaries" for beneficiaries who are not enrolled in TRICARE Prime and who use the TRICARE Standard or Extra options, or TRICARE Reserve Select (TRS). The NDAA 2008 also included specific requirements related to the number and priority of areas to be surveyed, including the populations to be surveyed each year, content for each type of survey, and the use of benchmarks. Within DOD, the TRICARE Management Activity (TMA), which oversees the TRICARE program, has the lead responsibility for designing and implementing the nonenrolled beneficiary and civilian provider surveys. The following information describes TMA's methodology, including its actions to address the requirements for each of the following: (1) survey area, (2) sample selection, (3) survey content, and (4) the establishment of benchmarks.

The NDAA 2008 specified that DOD survey beneficiaries and providers in at least 20 TRICARE Prime Service Areas (PSA) and 20 geographic areas in which TRICARE Prime is not offered—referred to as non-Prime Service Areas (non-PSA)—each fiscal year, 2008 through 2011.
The NDAA 2008 also required DOD to consult with representatives of TRICARE beneficiaries and health care and mental health care providers to identify locations where nonenrolled beneficiaries have experienced significant access-to-care problems, and to give a high priority to surveying health care and mental health care providers in these areas. Additionally, the NDAA 2008 required DOD to give a high priority to surveying areas in which a high concentration of Selected Reserve servicemembers live.

In designing the 2008 through 2011 nonenrolled beneficiary and civilian provider surveys, TMA defined 80 PSAs and 80 non-PSAs that allowed it to survey the entire country over a 4-year period, and subsequently develop estimates of access to health care and mental health care at service area and national levels. TMA identified the 80 PSAs by collecting zip codes where TRICARE Prime was offered from officials within each of the three TRICARE Regional Offices. TMA grouped these zip codes into 80 nonoverlapping areas so that each area had roughly the same number of TRICARE-eligible beneficiaries. Because non-PSAs had not previously been defined, TMA sought to define them by grouping all zip codes not in PSAs into one large area using Hospital Referral Regions, which are groupings of Hospital Service Areas (HSA). TMA then divided the large area into 80 non-PSAs so that each area had roughly the same number of TRICARE-eligible beneficiaries.

To identify locations where nonenrolled beneficiaries and health care and mental health care providers have identified significant levels of access-to-care problems under TRICARE Standard and Extra, TMA spoke with groups representing beneficiaries and health care and mental health care providers, as well as officials at the TRICARE Regional Offices. These groups suggested cities and towns where access should be measured (in addition to the larger PSAs and non-PSAs), and HSAs corresponding to each city and town were then identified. On the basis of the groups' recommendations, multiple lists were created and sorted in priority order: 21 HSAs were surveyed in the 2008 surveys; 9 HSAs in the 2009 surveys; 25 HSAs in the 2010 surveys; and 16 HSAs in the 2011 civilian provider survey. This resulted in a total of 55 HSAs surveyed for the nonenrolled beneficiary survey, and 71 HSAs surveyed in the civilian provider survey (the 71 HSAs include the same 55 HSAs surveyed for the nonenrolled beneficiary survey and an additional 16 that were selected for the 2011 fielding).

Although the NDAA 2008 required DOD to give a high priority to surveying areas in which a high concentration of Selected Reserve servicemembers live, TMA officials decided to randomly select areas for the surveys in order to produce results that could be generalized to the populations in the areas surveyed and to survey the entire United States over the 4-year period—an approach we deemed acceptable in our previous report.

TMA selected its sample of beneficiaries who met its criteria for inclusion in the beneficiary survey using DOD's Defense Enrollment Eligibility Reporting System (DEERS), a database of DOD beneficiaries who may be eligible for military health benefits. TMA determined a beneficiary's eligibility to be included in the nonenrolled beneficiary survey if DEERS indicated that the individual met five criteria:
1. eligible for military health care benefits as of the date of the sample selection;
2. age 18 years old or older;
3. not an active duty member of the military;
4. residing in one of the 20 randomly selected PSAs or 20 randomly selected non-PSAs to be surveyed that year; and
5. not enrolled in TRICARE Prime, or enrolled in TRS.

From this database, TMA randomly sampled 1,000 beneficiaries from each PSA and non-PSA—a sample size that would achieve TMA's desired sample error rate. For the 2008, 2009, and 2010 survey fieldings, TMA used a sample size between approximately 40,000 and 50,000 beneficiaries. Because of budgetary constraints, the sample size of the 2011 nonenrolled beneficiary survey was decreased to around 34,000. Because of this reduction, the 2011 sample was further stratified by using claims data to identify beneficiaries who would likely self-report as TRICARE Standard and Extra users. After receiving the returned surveys, TMA identified the responses that it considered complete and eligible on the basis of whether the beneficiary had answered at least half of TMA's identified "key" questions. Table 8 shows the number of nonenrolled beneficiary surveys mailed, by fiscal year.

For each survey fielding, TMA selected the civilian provider sample within the same 20 PSAs and 20 non-PSAs that had been randomly selected for that year's nonenrolled beneficiary survey, as well as civilian providers in the HSAs identified by beneficiary and provider groups as having significant levels of access-to-care problems under TRICARE Standard and Extra. TMA used the American Medical Association Physician Masterfile to select a sample of physicians who were licensed, office-based civilian medical doctors or licensed civilian doctors of osteopathy within the specified locations who were engaged in more than 20 hours of patient care each week. The American Medical Association Physician Masterfile is a database of U.S. physicians—Doctors of Medicine and Doctors of Osteopathic Medicine—that includes data on all physicians who have met the necessary educational and credentialing requirements. The Masterfile did not differentiate between TRICARE's network and nonnetwork civilian providers, which TMA deemed acceptable to avoid any potential bias in its sample selection. TMA selected this file because it is widely recognized as one of the best commercially available lists of providers in the United States and contained more than 940,000 physicians along with their addresses, phone numbers, and information on practice characteristics, such as their specialty. According to TMA, the American Medical Association updates physicians' addresses monthly and other elements through a rotating census methodology involving approximately one-third of the physician population each year. Although the Masterfile is considered to contain most providers, deficiencies in coverage and inaccuracies in detail remain. Therefore, TMA attempted to update providers' addresses and phone numbers and to ensure that providers were eligible for the survey by also using state licensing databases, local commercial lists, and professional society and association lists.

For its 2008 and 2009 mental health care provider sample selection, TMA selected a sample of mental health care providers from two sources: the American Medical Association's Masterfile of psychiatrists, and LISTS, Inc.—a list of names with contact information assembled from state licensing boards.
For the 2010 and 2011 mental health care provider sample selections, TMA also used mental health specialty areas from the National Plan and Provider Enumeration System database maintained by the Centers for Medicare & Medicaid Services, in addition to data from LISTS, Inc., and the psychiatrist data from the American Medical Association's Masterfile. According to TMA, it selected these sources for mental health care providers because they have been identified as the most comprehensive databases for these health care providers.

From these data sets, TMA planned to randomly sample about 800 providers (400 each of physicians and mental health care providers) from each PSA, non-PSA, and HSA—a sample size that would achieve TMA's desired sample error rate. In those instances where there were not 800 providers in a single area, TMA selected all of the providers in that area to receive surveys. As the PSA and non-PSA regions were formed on the basis of the number of beneficiaries and not the number of civilian providers, some regions with a large number of civilian providers were sampled at relatively low rates in 2008, 2009, and 2010. To improve the precision of national estimates, in 2011 TMA selected six areas to oversample: (1) Southeastern N.Y. and Northern N.J. (New York City); (2) Los Angeles, Calif.; (3) Eastern Mass. (Boston); (4) Northeastern/Central Ohio (Cleveland); (5) Southeastern/Northern Mich. (Detroit); and (6) Northwestern/Northeastern/Central-Eastern Ill. and Southeastern Wisc. (Chicago). Therefore, in 2011, a supplemental sample of 4,800 providers was drawn for these 6 PSAs, thereby increasing the numbers of eligible providers in each area: 1,600 providers from the two 2008 PSAs (Los Angeles, California, and Southeastern New York/Northern New Jersey); 800 providers from the one 2009 PSA (Eastern Massachusetts); and 2,400 providers from the three 2010 PSAs (Northeastern/Central Ohio, Southeastern/Northern Michigan, and Northwestern/Northeastern/Central-Eastern Illinois/Southeastern Wisconsin).

Upon receipt of the returned surveys, TMA identified the responses that it considered complete and eligible on the basis of the following criteria: (1) the provider answered "yes" to the questions that asked whether the provider offers care in an office-based location or private practice; (2) for the nonphysician mental health survey, the provider responded that he or she was one of the six TRICARE participating specialties: certified clinical social worker, certified psychiatric nurse specialist, clinical psychologist, certified marriage and family therapist, pastoral counselor, or mental health counselor; and (3) the provider completed three key questions on the physician survey instrument or three key questions on the nonphysician mental health provider survey instrument. Table 9 shows the number of civilian provider surveys mailed, by fiscal year.
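The per-area sampling rule described above, drawing the target number of providers where possible and taking every provider in sparse areas, can be illustrated with a minimal sketch (in Python; the provider lists are hypothetical stand-ins for TMA's source files, and the target size simply reflects the 400-per-group figure reported above).

```python
import random

# Minimal sketch of the per-area provider sampling rule: draw 400 physicians
# and 400 mental health providers per area, and take every provider when an
# area has fewer than the target. Provider lists here are hypothetical.

TARGET_PER_GROUP = 400  # physicians and mental health providers sampled separately

def sample_area(providers: list, target: int = TARGET_PER_GROUP) -> list:
    """Take all providers if the area has fewer than the target;
    otherwise draw a simple random sample of the target size."""
    if len(providers) <= target:
        return list(providers)
    return random.sample(providers, target)

physicians = [f"physician_{i}" for i in range(1200)]      # densely supplied area
mh_providers = [f"mh_provider_{i}" for i in range(250)]   # sparsely supplied area

print(len(sample_area(physicians)))    # 400: random sample drawn
print(len(sample_area(mh_providers)))  # 250: all providers selected
```

Because the rule samples each area at a fixed size regardless of how many providers the area contains, provider-dense areas are sampled at lower rates, which is the imprecision the 2011 oversample of six large PSAs was designed to offset.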
The NDAA 2008 required that the beneficiary survey include questions to determine whether TRICARE Standard and Extra beneficiaries have had difficulties finding physicians and mental health care providers willing to provide services under TRICARE Standard or TRICARE Extra. TMA's 2008 nonenrolled beneficiary survey included 91 questions that addressed, among other things, health care plans used; perceived access to care from a personal doctor, nurse, or specialist; the need for treatment or counseling; and ratings of health plans. TMA based some of its 2008 nonenrolled beneficiary survey questions on those included in the Department of Health and Human Services' Consumer Assessment of Healthcare Providers and Systems (CAHPS), a national survey of beneficiaries of commercial health insurance, Medicare, Medicaid, and the Children's Health Insurance Program. Over the 4 years of the nonenrolled beneficiary survey fielding, TMA added three questions to the original 91, covering beneficiaries' flu-shot history and what they liked and disliked about TRICARE Standard and Extra. Additionally, in 2011, "TRICARE Young Adult" and "TRICARE Retired Reserve" were added to the response selections for the question that asked about the health plan the beneficiary used. (See app. II for a copy of the 2011 beneficiary survey instrument.)

When TMA began mailing the beneficiary survey, it sent a combined cover letter and questionnaire to all beneficiaries in its sample, with the option of having beneficiaries complete the survey by mail or Internet. The cover letter provided information on the options available for completing the survey, as well as instructions for completing the survey by Internet. If the beneficiary did not respond to the mailed questionnaire, TMA mailed a second combined cover letter and questionnaire 4 weeks later encouraging the beneficiary to complete the survey.

For the civilian provider survey, the NDAA 2008 required questions to determine: (1) whether the provider is aware of TRICARE; (2) the percentage of the provider's current patient population that uses any form of TRICARE; (3) whether the provider accepts Medicare patients for health care and mental health care; and (4) if the provider accepts Medicare patients, whether the provider would accept new Medicare patients. TMA obtained clearance for its provider survey from the Office of Management and Budget (OMB), as required under the Paperwork Reduction Act. Subsequent to this review, OMB approved an 11-item questionnaire for physicians (including psychiatrists) and a 12-item questionnaire for nonphysician mental health providers. The mental health care providers' version of the survey includes an additional question about the type of mental health care the provider practiced. Beginning with the 2009 civilian provider survey, a follow-up question was added that asked providers what type of practice they were in if they indicated that they were not in private practice. Although a civilian provider's indication that the provider was not in private practice still made the provider's responses ineligible for the survey, TMA could use the additional information from these nonprivate practice civilian providers to glean additional information about civilian providers. (See app. III for a copy of the 2011 civilian provider survey instruments.)

When TMA began mailing the provider survey, it sent a combined cover letter and questionnaire to each provider in the sample. The providers had the option of completing the survey by mail, fax, or Internet. The cover letter provided information on the options available for completing the survey, as well as instructions for completing the survey by Internet. If the provider did not respond to the mailed questionnaire, TMA mailed a second combined cover letter and questionnaire about 4 weeks later encouraging the provider to complete the survey.
In accordance with the NDAA 2008, TMA identified benchmarks for analyzing the results of the beneficiary and civilian provider surveys. Because TMA based some of its 2008 beneficiary survey questions on those included in the CAHPS surveys, it was able to compare the results of those questions with its 2008 through 2011 beneficiary survey results. To benchmark its provider survey, TMA compared the results of its 2008 through 2011 surveys with the results of its 2005, 2006, and 2007 provider surveys. A TMA official noted that TMA was unaware of any external benchmarks that would be applicable to its surveys of providers.

In analyzing the results of the nonenrolled beneficiary survey, TMA representatives conducted yearly nonresponse analyses because the overall response rate for the surveys was around 38 percent. To conduct this analysis for the 2008, 2009, and 2010 survey years, TMA did the following: (1) compared key beneficiary demographic characteristics of respondents to those of nonrespondents (e.g., beneficiary gender and age) and (2) interviewed a sample of beneficiaries who did not respond to the original survey or the follow-up second mailing and compared their responses with those of the original survey respondents. Because of budgetary constraints during the 2011 survey year, TMA only compared key beneficiary demographic characteristics of respondents to those of the nonrespondents.

The results of TMA's nonresponse analyses indicated that respondents to the nonenrolled beneficiary survey differed substantially from the surveyed population in some demographic characteristics. For example, the analyses indicated that retirees, dependents of retirees, and dependents of survivors were overrepresented in the study, and dependents of active duty servicemembers, dependents of Guard/Reserve personnel, and dependents of inactive guard personnel were underrepresented in the study. Additionally, in each of the years in which TMA representatives conducted follow-up interviews (2008-2010), they found some response differences between survey respondents and nonrespondents. For example, each year in follow-up interviews of nonrespondents, they found these beneficiaries rated their primary care provider and health plans more favorably than beneficiaries who responded to the survey. According to TMA representatives, they used a weighting scheme to reflect the survey population proportions and thereby correct for any bias resulting from survey nonresponse.
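The following minimal sketch (in Python, with hypothetical beneficiary cells and counts) illustrates the general type of cell-based weighting adjustment TMA describes, in which each respondent is weighted by the ratio of the cell's population share to its respondent share. TMA's actual weighting methodology is not detailed in this report, so this is an assumption-laden illustration rather than a reproduction of it.

```python
from collections import Counter

# Minimal sketch of cell-based nonresponse weighting: respondents in each
# demographic cell receive the ratio of that cell's population share to its
# respondent share, so weighted respondents reflect the survey population
# proportions. Cells and counts below are hypothetical.

population = {
    "retiree": 5000,
    "active_duty_dependent": 3000,
    "guard_reserve_dependent": 2000,
}
respondents = (["retiree"] * 700
               + ["active_duty_dependent"] * 200
               + ["guard_reserve_dependent"] * 100)

pop_total = sum(population.values())
resp_counts = Counter(respondents)
resp_total = len(respondents)

weights = {
    cell: (population[cell] / pop_total) / (resp_counts[cell] / resp_total)
    for cell in population
}

# Overrepresented cells get weights below 1; underrepresented cells above 1.
for cell, w in weights.items():
    print(f"{cell}: weight = {w:.2f}")
# retiree: weight = 0.71
# active_duty_dependent: weight = 1.50
# guard_reserve_dependent: weight = 2.00
```

In this toy example, retirees make up 50 percent of the population but 70 percent of respondents, so each retiree response is down-weighted; the underrepresented dependent groups are weighted up, mirroring the over- and underrepresentation patterns TMA's analyses found.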
In analyzing the results of the provider survey, TMA conducted a nonresponse analysis because the overall response rate to the surveys was about 42 percent. To conduct this analysis for the 2008, 2009, and 2010 surveys, TMA did the following: (1) compared key provider demographic characteristics of respondents to those of nonrespondents (for example, provider type and area) and (2) interviewed a sample of physicians and mental health care providers who did not respond to the survey, follow-up second mailing, or follow-up telephone calls and compared their responses with those of the survey respondents. Because of budgetary constraints during the 2011 survey year, TMA only compared key provider demographic characteristics of respondents to those of the nonrespondents. The results of TMA's nonresponse analyses indicated that there are some demographic differences between respondents and those who did not respond. For example, the analyses indicated that in some years psychiatrists were underrepresented in the survey samples. Overall, however, the results were consistent among the nonresponse analyses and indicated little variation between respondents and nonrespondents. As with the nonenrolled beneficiary survey, TMA used a weighting scheme to reflect the survey population proportions and correct for any bias resulting from survey nonresponse.

The National Defense Authorization Act for Fiscal Year 2008 (NDAA 2008) directed the Department of Defense (DOD) to determine the number of health care and mental health care providers that currently accept nonenrolled beneficiaries as patients under TRICARE, DOD's health care program. For the purpose of this report, we use the term "nonenrolled beneficiaries" for beneficiaries who are not enrolled in TRICARE Prime and who use the TRICARE Standard or Extra options, or TRICARE Reserve Select (TRS). Specifically, the NDAA 2008 specified that DOD conduct surveys of beneficiaries each fiscal year, 2008 through 2011. The NDAA 2008 also required that the beneficiary survey include questions seeking information from nonenrolled beneficiaries to determine whether they have had difficulties finding health care and mental health care providers willing to accept them as patients.

For the 2008 fielding of the beneficiary survey, 91 questions were included in the survey instrument. Over the next 3 years of the beneficiary survey's fielding, TRICARE Management Activity (TMA) used the same 91 questions and added the following: for the 2009 survey fielding and beyond, Question #81, which asked "When did you last have a flu shot?" (for a total of 92 questions in 2009); and for the 2010 survey fielding and beyond, two questions (Questions #75 and #76) that asked what the beneficiary liked and disliked about TRICARE Standard and Extra, respectively (for a total of 94 questions in 2010 and 2011). In addition, for the 2011 survey instrument, "TRICARE Young Adult" and "TRICARE Retired Reserve" were added to the response selections for Question #2, which asked "By which health plan are you currently covered?" Following is the actual survey instrument from the 2011 fielding that TMA used to obtain information from nonenrolled beneficiaries.

The National Defense Authorization Act for Fiscal Year 2008 (NDAA 2008) directed the Department of Defense (DOD) to determine the number of health care and mental health care providers that currently accept nonenrolled beneficiaries as patients under TRICARE, DOD's health care program. For the purpose of this report, we use the term "nonenrolled beneficiaries" for beneficiaries who are not enrolled in TRICARE Prime and who use the TRICARE Standard or Extra options, or TRICARE Reserve Select (TRS). Specifically, NDAA 2008 directed DOD to survey providers each fiscal year, 2008 through 2011. The NDAA 2008 also required that the provider survey include questions seeking information to determine (1) whether the provider is aware of the TRICARE program, (2) the percentage of the provider's current patient population that uses any form of TRICARE, (3) whether the provider accepts Medicare patients, and (4) if the provider accepts Medicare patients, whether the provider would accept new Medicare patients. DOD implemented two versions of its provider survey, one for physicians, including psychiatrists, and one for nonphysician mental health providers.
For the 2008 fielding of the civilian provider survey, 11 and 12 questions were included in the physician and nonphysician mental health provider survey instruments, respectively. Over the next 3 years of the civilian provider survey's fielding, TRICARE Management Activity (TMA) generally used the same questions, but made the following adjustments to the survey instruments. Beginning with the 2009 fielding of both survey instruments and beyond, TMA adjusted Question #1, which asked providers whether they provided health care to patients in an office-based practice (for physicians) or a private practice (for nonphysician mental health care providers), so that a "no" response would no longer instruct the provider to stop answering the survey at that point. Instead, the revision directed the provider to the newly added Question #1a, which asked what type of practice the provider was in (if he or she answered "no" to Question #1). For the 2010 and 2011 fieldings of the physician survey instrument, TMA also adjusted Question #1 from "Does [the provider] provide treatment to patients through an office-based practice?" to "Does [the provider] provide treatment to patients through private practice?" Following are the actual survey instruments from the 2011 fielding that TMA used to obtain information from physicians and nonphysician mental health care providers.

The 2008-2011 beneficiary survey indicated individual areas where nonenrolled beneficiaries experienced problems finding "any civilian provider," civilian primary care providers, and civilian specialty care providers. We define these locations as areas where the percentage of nonenrolled beneficiaries who experienced difficulties finding a civilian provider was at the national estimate or higher.

We identified 24 individual areas (out of the 215 individual areas surveyed by the 2008-2011 beneficiary surveys) where the percentage of nonenrolled beneficiaries who experienced problems finding any type of provider who would accept TRICARE met or exceeded the national estimate. We then identified 49 additional areas where the percentage of nonenrolled beneficiaries who experienced these problems was less than the national estimate. The remaining 130 areas had estimates that ranged from 18 to 50 percent, but because of their confidence intervals, were neither above nor below the 31 percent threshold. (See the accompanying figure for the geographic distribution of these three categories of areas.)

We identified 21 individual areas where the percentage of nonenrolled beneficiaries who experienced problems finding a civilian primary care provider who would accept TRICARE patients met or exceeded the national estimate. We then identified 50 additional areas where the percentage of nonenrolled beneficiaries who experienced these problems was less than the national estimate. The remaining 129 areas had estimates that ranged from 13 to 44 percent, but because of their confidence intervals, were neither above nor below the 25 percent threshold. (See the accompanying figure for the geographic distribution of these three categories of areas.)

We identified nine individual areas where the percentage of nonenrolled beneficiaries who experienced problems finding a civilian specialty care provider who would accept TRICARE patients met or exceeded the national estimate. We then identified 34 additional areas where the percentage of nonenrolled beneficiaries who experienced these problems was less than the national estimate. The remaining 144 areas had estimates that ranged from 14 to 47 percent, but because of their confidence intervals, were neither above nor below the 25 percent threshold. (See the accompanying figure for the geographic distribution of these three categories of areas.)
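A minimal sketch of this three-way classification rule follows (in Python, with hypothetical estimates and margins of error; the exact boundary conventions TMA applied at the threshold are an assumption on our part).

```python
# Minimal sketch of the three-way classification described above: an area
# counts as above or below the national estimate only when its entire
# confidence interval falls on one side of the threshold; otherwise it is
# "neither above nor below." Estimates and margins of error are hypothetical.

NATIONAL_ESTIMATE = 25.0  # percent (civilian specialty care example)

def classify(estimate: float, margin_of_error: float,
             threshold: float = NATIONAL_ESTIMATE) -> str:
    lower, upper = estimate - margin_of_error, estimate + margin_of_error
    if lower >= threshold:
        return "met or exceeded national estimate"
    if upper < threshold:
        return "below national estimate"
    return "neither above nor below (interval spans threshold)"

print(classify(33.0, 5.0))  # interval 28-38 -> met or exceeded
print(classify(18.0, 4.0))  # interval 14-22 -> below
print(classify(26.0, 6.0))  # interval 20-32 -> neither above nor below
```

This is why areas with point estimates well above a threshold (up to 50 percent in the "any provider" analysis) could still fall into the "neither" category: a wide margin of error leaves the interval straddling the national estimate.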
Because of the low number of nonenrolled beneficiary responses to the questions about civilian mental health care, we are unable to identify specific geographic areas where nonenrolled beneficiaries have access problems to civilian mental health care providers. Of the 215 areas surveyed in the 4-year beneficiary survey, only 5 areas had 30 or more respondents—TMA's threshold for reporting beneficiary survey results—who indicated that they needed mental health care and received it from a civilian provider. Additionally, for those 5 areas that did have at least 30 nonenrolled beneficiary responses, the margins of error were between 10 and 25 percentage points.

The TRICARE Management Activity (TMA) fielded its provider and beneficiary surveys to the same Hospital Service Areas (HSA) each year, with one exception. Because of resource constraints, the 2011 fielding of the beneficiary survey did not include any HSAs. However, 16 HSAs were included in the 2011 fielding of the provider survey. Because beneficiaries were not surveyed for these HSAs, they are not included in our collective analysis of the beneficiary and civilian provider survey results. Table 13 lists the 16 HSAs that were surveyed in the 2011 civilian provider survey fielding and the estimated percentage of civilian providers who were accepting any new TRICARE patients.

In addition to the contact named above, Bonnie Anderson, Assistant Director; Jennie Apter; Linda Galib; Giselle Hicks; Jeff Mayhew; Lisa Motley; Dan Ries; and Eric Wedum made key contributions to this report.

DOD provides health and mental health care through its TRICARE program. TRICARE offers three basic options. Beneficiaries who choose TRICARE Prime, an option that uses civilian provider networks, must enroll. Beneficiaries who do not enroll in this option may obtain care from nonnetwork providers under TRICARE Standard or from network providers under TRICARE Extra. In addition, qualified National Guard and Reserve servicemembers may purchase TRICARE Reserve Select, a plan whose care options are similar to those of TRICARE Standard and TRICARE Extra. GAO refers to beneficiaries who use TRICARE Standard, TRICARE Extra, or TRICARE Reserve Select as nonenrolled beneficiaries. The National Defense Authorization Act for Fiscal Year 2008 directed DOD to conduct annual surveys over fiscal years 2008 through 2011 of both beneficiaries and civilian providers to determine the adequacy of access to health and mental health care providers for nonenrolled beneficiaries. It also directed GAO to review these surveys. This report addresses (1) what the results of the 4-year beneficiary surveys indicate about the adequacy of access to care for nonenrolled beneficiaries; (2) what the results of the 4-year civilian provider surveys indicate about civilian providers' awareness and acceptance of TRICARE; and (3) what the collective results of the surveys indicate about access to care by geographic area. To do so, GAO interviewed DOD officials, obtained relevant documentation, and analyzed the data for both surveys over the 4-year period.
In its analysis of the 2008-2011 beneficiary survey data, GAO found that nearly one in three nonenrolled beneficiaries experienced problems finding a civilian provider who would accept TRICARE and that nonenrolled beneficiaries' access to civilian primary care and specialty care providers differed by type of location. Specifically, a higher percentage of nonenrolled beneficiaries in Prime Service Areas (PSA), which are areas with civilian provider networks, experienced problems finding a civilian primary care or specialty care provider compared to those in non-Prime Service Areas (non-PSA), which do not have civilian provider networks. GAO found that the top reasons reported by nonenrolled beneficiaries for why they experienced access problems, regardless of type of provider, were that the providers were either not accepting TRICARE payments or not accepting new TRICARE patients. Additionally, GAO's comparison of the Department of Defense's (DOD) beneficiary survey data to related data from a Department of Health and Human Services survey showed that nonenrolled beneficiaries' satisfaction ratings for primary and specialty care providers were consistently lower than those of Medicare fee-for-service beneficiaries.

GAO's analysis of the 2008-2011 civilian provider survey data found that about 6 in 10 civilian providers were accepting new TRICARE patients and that the most-cited reason for not accepting new TRICARE patients was that the civilian providers were not aware of the TRICARE program. Civilian physicians' acceptance of TRICARE has also decreased over time: compared with DOD's 2005-2007 civilian physician survey results, civilian physicians' acceptance of new TRICARE patients in 2008-2011 was lower, whether physicians were accepting any new patients or new Medicare patients. Civilian providers' awareness and acceptance of TRICARE also differed by provider type, as fewer civilian mental health care providers were aware of TRICARE or accepting new TRICARE patients than other types of providers. For example, only an estimated 39 percent of civilian mental health care providers were accepting new TRICARE patients, compared to an estimated 67 percent of civilian primary care providers and an estimated 77 percent of civilian specialty care providers. The analysis also showed that civilian providers' awareness and acceptance of TRICARE differ by location type, as civilian providers in PSAs were less aware of TRICARE and less likely to accept new TRICARE patients than those in non-PSAs.

GAO's analysis of the collective results of the beneficiary and civilian provider surveys indicates specific geographic areas, including areas in Texas and California, where nonenrolled beneficiaries have experienced considerable access problems. In each of these areas, although almost all civilian providers were accepting new patients, less than half were accepting new TRICARE patients. In most of these areas, civilian providers most often cited reimbursement concerns as the reasons why they were not accepting any new TRICARE patients. In commenting on a draft of this report, DOD concurred with GAO's overall findings.
Private security contractors are defined as private companies, or personnel, that provide physical security for persons, places, buildings, facilities, supplies, or means of transportation. These contractors provide security services for a variety of U.S. government agencies in Iraq; however, they are principally hired by DOD and State. DOD private security services contracts include a contract to provide security for DOD-controlled facilities in Iraq, known as the Theater Wide Internal Security Services contract. According to DOD officials, four contractors employing more than 8,000 guards, supervisors, and operations personnel are performing task orders issued under their contracts. The State Department's private security services contracts include a contract to provide security and support, known as the Worldwide Personal Protective Services contract; a contract to provide security for the U.S. Embassy Baghdad; and a security contract managed by State's Bureau of International Narcotics and Law Enforcement Affairs.

In August 2004, the President issued HSPD-12 to require that United States government agencies (including DOD and State) collaborate to develop a federal standard for secure and reliable forms of identification for all U.S. government employees and contractors needing regular physical access to federal facilities. In February 2005, to comply with HSPD-12, the Department of Commerce's National Institute of Standards and Technology issued implementing guidance, Federal Information Processing Standard 201-1, which defines a governmentwide personal identification verification system. HSPD-12 requires that all U.S. government agencies mandate the use of the standard identification credential for all employees and contractors—U.S. citizens and foreign nationals alike—who need regular physical access to federal facilities, including U.S. military installations abroad. As part of this process, all U.S. government employees and contractors who are issued an approved credential are to undergo a National Agency Check with Written Inquiries (NACI) or, at minimum, an FBI National Criminal History Check (a fingerprint check against an FBI database). We have previously reported on the challenges associated with applying a similar process to foreign nationals, including the limited applicability of U.S.-based criminal databases to foreign nationals. Federal Information Processing Standard 201-1 applies to foreign nationals working for the U.S. government overseas and requires a process for registration and approval using a method approved by State's Bureau of Diplomatic Security, except in the case of employees under the command of a U.S. area military commander. However, the standard does not offer any guidance as to what process should be used overseas.

In addition to the HSPD-12 requirements, DOD and State have been instructed to comply with other requirements intended to protect the safety of property and personnel. For example, DOD policy makes military commanders responsible for enforcing security measures intended to ensure that property and personnel are protected. Likewise, the Omnibus Diplomatic Security and Antiterrorism Act of 1986 requires the Secretary of State to develop and implement policies and programs, including funding levels and standards, to provide for the security of U.S. government diplomatic operations abroad.
Section 862 of the FY2008 NDAA requires that the Secretary of Defense, in coordination with the Secretary of State, prescribe regulations on the selection, training, equipping, and conduct of personnel performing private security functions under a covered contract in an area of combat operations. Section 862 of the FY2008 NDAA states that the regulations shall, at a minimum, establish processes to be used in an area of combat operations for the following: registering, processing, accounting for, and keeping appropriate records of personnel performing private security functions; authorizing and accounting for weapons to be carried by, or available to be used by, personnel performing private security functions; and registering and identifying armored vehicles, helicopters, and other military vehicles operated by contractors performing private security functions. In addition, the regulations shall establish requirements for qualification, training, screening (including, if practicable, through background checks), and security for personnel performing private security functions in an area of combat operations.

Section 862 of the FY2008 NDAA also states that the regulations must establish a process by which to report the following incidents: (1) a weapon is discharged by personnel performing private security functions in an area of combat operations; (2) personnel performing private security functions in an area of combat operations are killed or injured; (3) persons are killed or injured, or property is destroyed, as a result of conduct by contractor personnel; (4) a weapon is discharged against personnel performing private security functions in an area of combat operations; or (5) active, non-lethal countermeasures are employed by personnel performing private security functions in an area of combat operations in response to a perceived immediate threat to these personnel. In addition, the regulations must establish a process for the independent review and, if practicable, investigation of these incidents and incidents of alleged misconduct by personnel performing private security functions in an area of combat operations. The regulations are also to include guidance to the combatant commanders on the issuance of (1) orders, directives, and instructions to private security contractors regarding, for example, security and equipment; (2) predeployment training requirements; and (3) rules on the use of force.

Fragmentary orders also establish guidance and requirements governing private security contractors in Iraq. In December 2007, MNF-I issued Fragmentary Order 07-428 to consolidate what previously had been between 40 and 50 separate fragmentary orders relating to regulations applicable to private security contractors in Iraq. The fragmentary order establishes authorities, responsibilities, and coordination requirements for MNC-I to provide oversight of all armed DOD contractors and civilians in Iraq, including private security contractors. In March 2009, MNF-I superseded this order by issuing Fragmentary Order 09-109, which contains information related to the roles and responsibilities of contract oversight personnel and required contract clauses, including clauses related to background screening, training, and weapons accountability.
One such clause requires that all contractors working in the Iraq theater of operations comply with, and ensure that their personnel supporting MNF-I forces are familiar with and comply with, all applicable orders, directives, and instructions issued by the MNF-I Commander relating to force protection and safety.

State has developed a process for conducting background screenings of its private security contractor personnel (U.S. citizens and foreign and local nationals alike) that, according to State officials, meets the requirements of HSPD-12. Initially, private security contractors submit the resumes of all prospective employees to be reviewed by a State Department contracting officer representative. After this prescreening, the Worldwide Personal Protective Services contract requires firms to screen employees using a screening process approved by State's Bureau of Diplomatic Security. The process includes examining a prospective employee's past work history, police records, and prior military service, as well as a credit check. The contractor is responsible for reviewing the results of the initial screening and, based on the results, forwards a list of the candidates to the contracting officer representative. Then, State's Bureau of Diplomatic Security conducts and adjudicates its own background investigation of prospective employees. All personnel performing work on the contract must possess a security clearance or a determination of eligibility for moderate or high-risk public trust positions, or, for local or foreign nationals, have had an investigative check conducted by regional security officers equivalent to the public trust determination required for the position.

According to State Department officials, the department requires that foreign national private security contractor personnel have a Moderate Risk Public Trust determination, which is equivalent to a Secret clearance but does not grant access to classified information. The Moderate Risk Public Trust determination includes checking a prospective contractor employee's name against both local and national data sources. These data sources include the consular name-check database that U.S. embassies use to access information for approving or denying visa applications. The system contains records provided by numerous U.S. agencies and includes information on persons with visa refusals, immigration violations, criminal histories, and terrorism concerns. In addition, prospective employees are screened by Regional Security Officers in the U.S. embassy in their home countries, and, if necessary, the Regional Security Officers may interview prospective employees. For example, when State Department officials in Uganda uncovered prospective employees using false documentation, the required certifications were not granted to Ugandan applicants until the Regional Security Officer had completed a personal interview. Moreover, in Iraq, prospective Iraqi employees sometimes undergo polygraph examinations. State Department officials told us that this process was HSPD-12-compliant based on their interpretation of an Office of Management and Budget memorandum that states that investigations related to making a public trust determination can be sufficient to meet HSPD-12 requirements.

Federal Information Processing Standard 201-1 requires that contractor personnel, including private security contractors in Iraq, undergo a National Agency Check with Written Inquiries investigation or its equivalent prior to being issued an access credential.
While DOD has established procedures to apply this requirement to private security contractor personnel who are U.S. citizens, it has not, as of June 2009, developed a process and procedures to apply this requirement to foreign and local nationals. According to DOD Instruction 3020.41, the comprehensive policy document on the management of contractors authorized to accompany the Armed Forces, the Under Secretary of Defense for Intelligence (USD-I) is responsible for developing and implementing procedures for conducting background screenings of contractor personnel authorized to accompany the U.S. Armed Forces. The instruction, which was issued in October 2005 by the Under Secretary of Defense for Acquisition, Technology, and Logistics (AT&L), also directs USD-I to coordinate with AT&L to develop these procedures and to draft appropriate contract clauses. In November 2008, DOD issued a Directive-Type Memorandum to begin the process of bringing DOD policy into alignment with HSPD-12. However, while the memorandum directs USD-I to coordinate with AT&L and the Office of the Under Secretary of Defense for Personnel and Readiness (P&R) to develop the department's policy for conducting background screenings of contractor personnel, it does not provide specifics on what the policy should contain. P&R, the office responsible for DOD's HSPD-12 compliance, is currently drafting a DOD Instruction that includes standards for conducting background screenings of U.S. citizens, but the draft does not yet include standards for screening foreign nationals because, according to P&R officials, they have not received input from USD-I. As of May 2009, USD-I officials were unable to provide an estimate of when the foreign national screening standards would be complete.

The lack of a focal point to resolve disagreements among the offices responsible for developing and implementing DOD's background screening policies and procedures has hindered timely execution of the HSPD-12 requirements. For example, officials from USD-I have interpreted HSPD-12 as requiring a government screening and adjudication process for foreign nationals that would be equivalent to the National Agency Check with Written Inquiries investigation used for U.S. citizens. Officials within AT&L maintain that this approach is unrealistic and would severely limit the number of foreign national contractor personnel the department could use to support U.S. forces in contingency operations. According to AT&L officials, a screening for foreign nationals equivalent to a National Agency Check with Written Inquiries would not be feasible, given the difficulty of screening foreign nationals and the inconsistent quality of criminal and employment records from one country to another.

As previously noted, private security contractors currently conduct their own background screenings of prospective employees and, based on the results, make final hiring decisions. AT&L officials believe that contractor-led background screenings, in conjunction with processes established by combatant commanders to screen contractors, such as those in place in Iraq, provide reasonable assurance that the security risk posed by foreign national contractor personnel is minimal. As we reported in 2006, commanders are responsible for the safety and security of their installations. Additionally, AT&L officials maintain that U.S. government employees serving as contracting officer representatives are the final adjudicators of background screening results.
However, USD-I officials disagree, stating that DOD policy prohibits contracting officer representatives from being final adjudicators and noting that these representatives lack the necessary training and time to do so. As early as 2004, we noted that DOD lacked sufficient personnel to provide oversight. Most recently, we noted in our 2008 report that DOD was strained to provide a sufficient number of contract oversight personnel and that military personnel needed better training on their responsibilities for overseeing private security contractors. An April 2009 report by the Special Inspector General for Iraq Reconstruction found similar concerns, noting that contracting officer representatives received limited training and had insufficient available time to devote to their oversight responsibilities.

As a result of these disagreements, DOD has not developed minimum background screening standards as required by DOD Instruction 3020.41 and HSPD-12. While DOD has acknowledged the inherent force protection risk it assumes when using contractor personnel, without the development and implementation of departmentwide background screening procedures that apply to all private security contractor personnel, including foreign nationals, DOD does not have full assurance that all of its private security contractor personnel have been properly screened.

By direction of the MNF-I commander, MNF-I, the U.S.-led military organization responsible for conducting the war in Iraq, has established a process in Iraq aimed at ensuring that all private security contractor personnel providing security services to DOD, whether U.S. citizens, Iraqi nationals, or other foreign nationals, have been screened. According to MNF-I guidance, which is to be incorporated into all contracts and solicitations where arming of contracted employees is contemplated in Iraq, private security contractors and subcontractors in Iraq are required to conduct background screenings of employees; verify with the MNC-I Provost Marshal that no employee has been barred by any commander within Iraq; and certify, after completing all checks, that all persons armed under the contract are not prohibited under U.S. law from possessing a weapon or ammunition.

In addition, in Iraq DOD has developed background screening measures that are intended to act as an additional safeguard after contractor-conducted screening procedures. For example, MNC-I officials told us that every private security contractor employee serving in Iraq also must receive a badge issued by MNF-I. According to officials, as part of the badge process, host and foreign national personnel are subjected to a background screening using several U.S. government automated systems and undergo an interview conducted by MNF-I intelligence officials. In addition, MNF-I guidance establishing minimum requirements for access to MNF-I installations throughout the Iraq Theater of Operations states that all host and foreign national private security contractor personnel are subjected to screening using an automated system unique to Iraq that collects biometric information such as photographs, fingerprints, and iris scans.

While force protection officials we spoke with in Iraq were generally satisfied with the current background screening process and felt that it sufficiently accounted for the security of U.S. installations, our work identified several shortcomings that limit the effectiveness of this process.
For example, we found that some of the current background screening requirements established by MNF-I were unrealistic because they directed contractors to use data sources that were not readily available to private firms. According to MNF-I guidance, which is to be incorporated into all contracts and solicitations where arming of contracted employees is contemplated in Iraq, private security contractors should, to the extent possible, use FBI records, Country of Origin Criminal Records, Country of Origin U.S. Embassy Information Requests, CIA records, and/or any other equivalent records systems as sources. However, as we noted in our past work, contractors may not have access to certain data sources, such as FBI databases. Moreover, these data sources provide only limited data on foreign national contractor personnel who may have spent minimal, if any, time in the United States. While private companies may have access to other sources of background screening information, these data sources have similar limitations when applied to prospective foreign national personnel. As a result, contractors have adopted their own methods, such as obtaining Interpol-issued Certificates of Good Conduct, which one private security contractor official told us his company requires as a prerequisite to an interview. We reviewed a copy of one such certificate and observed that the certificate signifies that the bearer has never been the subject of a police inquiry. However, according to the official, these certificates are not available in every country. Further, only the individual, and not the company, may obtain this certificate. Therefore, there may be incentives for prospective employees to forge or alter the certificates in order to gain employment.

In addition, MNF-I officials we spoke to who were responsible for contractor oversight did not have a full understanding of the screening process, the process's limitations, or how contractors conducted their background screenings. For example, MNF-I officials told us that the office responsible for approving civilian arming requests—known as the arming authority—reviewed background screening results prior to approving arming requests. However, officials from the arming authority stated that they did not see the results of the background screenings and did not interpret or adjudicate based on the results. Officials were also unaware of what the background screening entailed, and stated that background screening was the private security contractor's responsibility. According to MNF-I officials, contracting officer representatives are responsible for ensuring that private security personnel files contain all of the necessary information, including background screening results. However, officials responsible for providing contract oversight in Iraq stated that contracting officers and contracting officer representatives only check to ensure that the correct documentation is maintained; they do not try to interpret or adjudicate the background screening results. Officials added that they are not trained to interpret or adjudicate the results. Moreover, while some of the name-checks and biometric data collection associated with issuing badges and requests for arming authority use data collected in Iraq, such as information collected from suspected insurgents, the current screenings rely primarily upon U.S.-based databases of criminal and terrorist information.
As we have previously reported, background checks that rely upon U.S.-based databases, such as the automated process described above, may not be effective in screening foreign nationals who have not lived in or traveled to the United States. Without training to ensure that military commanders and contracting officials understand the department's policies and procedures for background screening, as well as their own roles and responsibilities, DOD will not have reasonable assurance that contractor personnel have been screened.

The existing MNF-I process also does not provide contractors with standards on what the background screening should entail and how the results should be interpreted, particularly for foreign national personnel. According to MNF-I guidance, which is to be incorporated into all contracts and solicitations in which the arming of contract employees in Iraq is contemplated, DOD private security contractors are required to develop a background screening plan and submit the results of the background screening to their contract's contracting officer representative upon completion. The Theater Wide Internal Security Services contract also requires that private security contractors conduct the appropriate criminal and financial background screenings identified in chapters 2 and 3 of Army Regulation 190-56. For example, the regulation requires that the contractor conduct a security screening check of applicants for security guard positions, including a check of arrest and criminal history records of the state in which the prospective employee has resided during the most recent 5 years. It also requires that prospective security contractor employees be subjected to a National Agency Check with Written Inquiries. However, the regulation does not provide instructions on how to apply these screenings to non-U.S. citizens.

Our review of the background screening plans submitted by the four Theater Wide Internal Security Services contractors found that the processes the plans described were not consistent in their approach to screening personnel, particularly foreign national personnel. The plans did not provide specific details as to how the companies would go about screening foreign nationals. For example, while one plan states that all prospective foreign national employees are subjected to a criminal record check, it does not explain what records will be checked, the time period examined, or how the company intends to evaluate derogatory information. Furthermore, one of the plans we reviewed did not address the screening of foreign national personnel at all. The plans were generally more specific in describing how the companies intended to screen U.S. national personnel. However, as we have previously reported, contractors have access to a limited range of criminal records data, and particularly in foreign countries these data can be of questionable quality. Furthermore, while DOD officials in Iraq stated that they were comfortable that the screening process was sound because contractors' screening processes were part of the evaluation criteria used to award the contracts, as previously noted, the officials responsible for evaluating these plans have not been trained to do so. Without minimum standards, screening firms will use varying techniques to screen personnel, and DOD will not have reasonable assurance that a minimum level of safety and protection has been met.
Section 862 of the fiscal year 2008 NDAA directed the Secretary of Defense, in coordination with the Secretary of State, to develop regulations on the selection, training, equipping, and conduct of private security contractor personnel under a covered contract in an area of combat operations. The public law lists a number of minimum processes and requirements that the regulations are to establish. While DOD has drafted an interim final rule, which is intended to meet the requirements of the public law, our analysis of a May 2009 version of the draft regulation indicates that it does not address all of the requirements of the law.

The draft delegates the responsibility for developing specific private security contractor guidance and procedures to the geographic combatant commanders without fully establishing all of the minimum processes required under Section 862. For example, the law directs DOD to develop requirements for the screening and security of private security contractor personnel. The draft instructs geographic combatant commanders to develop procedures consistent with principles established in existing DOD instructions and directives. However, while the draft makes reference to existing DOD regulations regarding these areas, neither the draft nor the referenced documents articulate a process or requirements that geographic combatant commanders can use to ensure that all private security contractor personnel meet screening and security requirements. The draft regulation also establishes that all incidents listed in Section 862(a)(2)(D) shall be reported, documented, and independently reviewed or investigated. However, the regulation does not specify who should report or document incidents, what information should be recorded, how the incident should be investigated, or to whom the incident report should be sent. Furthermore, it leaves the implementation of procedures for reporting, reviewing, and investigating incidents to the combatant commanders.

In addition, while the law instructs DOD to develop minimum processes and requirements for private security contractor personnel operating in an area of combat operations, the draft regulation points only to an agreement between DOD and State that is specific to Iraq and directs that it be used as a framework for the development of guidance and procedures regardless of location. Specifically, the draft references a December 2007 Memorandum of Agreement between DOD and State, which provides that private security contractor personnel who wish to possess and carry firearms in Iraq must fulfill the core standards of background checks, security clearances, training with annual refreshers on topics such as the rules for the use of force, weapons qualification consistent with U.S. Army standards, and use of weapon types authorized by DOD and State. As noted in our discussion of background screenings, absent minimum departmentwide processes, combatant commanders may develop less comprehensive guidance and procedures, and the guidance and procedures developed may vary widely from theater to theater. Moreover, the draft regulation does not establish a time frame for combatant commanders to develop and issue the implementing guidance and procedures.
Without developing, in a timely manner, minimum departmentwide processes to assist commanders in developing theaterwide standards, along with a timeline for completion, DOD will not be able to ensure that its policies related to private security contractors are consistent across the geographic combatant commands and available at the onset of a combat operation.

Our review of the May 2009 version of the draft regulation found that it does establish some processes. For example, the draft regulation establishes a process for requesting permission to arm private security contractor personnel. This process includes written acknowledgment by the security contractor and its individual personnel that such personnel are not prohibited under U.S. law from possessing firearms, and it requires documentation of individual training that includes weapons qualification and training on the rules for the use of force. The draft also states that individual training and qualification standards must meet, at a minimum, one of the military departments' established standards. With regard to the registration, processing, and accounting of private security contractor personnel, the draft regulation references a draft update to DOD Instruction 3020.41, which designates the Synchronized Predeployment and Operational Tracker (SPOT) as the joint Web-based database to maintain contractor accountability and visibility of DOD-funded contracts supporting contingency operations. The draft regulation also identifies SPOT as the repository for registering and identifying military vehicles operated by private security contractor personnel. DOD officials stated that they interpreted Section 862's vehicle identification requirements as calling for vehicles to be registered in a database using a unique identifier, as opposed to identifying vehicles with a visual identifier such as a placard. Officials stated that identifying vehicles with a visual identifier would expose private security contractors to enemy attacks. However, during our trip to Iraq in 2008, we observed that many DOD private security contractors affixed readable numbers to their vehicles.

While DOD was required to develop this guidance by July 2008, as of June 2009 the guidance had not been finalized. According to DOD officials, promulgation of the guidance has taken considerable time because of coordination efforts with State and the federal rulemaking process, which requires that a draft rule be published for public comment in the Federal Register when it has an impact beyond the agency's internal operations. Because of this delay, the Federal Acquisition Regulation (FAR) has not been revised to require that covered contracts and task orders contain a contract clause addressing the selection, training, equipping, and conduct of personnel performing private security functions. According to DOD officials, the FAR will not be revised to implement the regulation until the regulation has been finalized.

According to officials in State's Bureau of Diplomatic Security, contractors performing under State's Worldwide Personal Protective Services contract are required to provide 164 hours of personal protective security training. The training curriculum includes topics such as the organization of a protective detail, firearms proficiency, driver training, and defensive tactics. According to officials in State's Bureau of Diplomatic Security, this training curriculum was reviewed and approved by State's Diplomatic Security Training Center.
In addition, officials in the Bureau of Diplomatic Security approve the course instructors after reviewing the instructors' resumes and other qualifications. Our review of State's Worldwide Personal Protective Services contract found that it contained detailed training requirements for private security contractor personnel. For example, the contract identified detailed weapons qualification training requirements, including a minimum number of hours of weapons training, the acceptable venues for conducting the training, and the materials the contractor must furnish. The contract also specifies the topics to be covered in the weapons training, including procedures for safe weapon handling, proper marksmanship techniques, and firing positions. The requirements also establish the minimum number of rounds that must be fired for each weapon being used in training.

To determine whether private security contractor personnel are trained, officials from State's Diplomatic Security Training Center and the Office of Protective Operations periodically visit contractor training facilities to monitor training. According to State officials, during these inspections officials review the certifications of training instructors, observe individual training modules, and review individual student training records. According to State officials, the department is also in the process of conducting a comprehensive review of all three Worldwide Personal Protective Services contractor training programs. Officials stated that this is the first comprehensive review under the Worldwide Personal Protective Services contract and that, as part of this review, officials are reviewing the full training curriculum at each contractor's training location. Officials stated that these reviews will result in recommendations for immediate improvements to each company's training program and may result in changes to the overall high-threat curriculum.

To confirm that State conducted training inspections, we reviewed the two most recent inspection reports for each of the three private security contractors providing services under State's largest security contract, Worldwide Personal Protective Services. Our review of the records confirmed that State had inspected each contractor and that the reviews were conducted by State subject matter experts. For example, one inspection report we reviewed included a State firearms expert observing the firearms proficiency portion of the training. In each inspection report we reviewed, State concluded that the contractors met training requirements. We also observed that each inspection included suggestions for improvement even when training requirements were met. Officials also stated that in Iraq, Regional Security Officers provide daily oversight and, as part of this oversight, are responsible for ensuring that the training standards are met.

Much like State, DOD has established contractual training requirements for private security contractor personnel. However, DOD's training requirements are generally broader than State's. For example, while State's training requirements establish a detailed training curriculum that includes a minimum number of hours of training, DOD's private security training requirements are more broadly defined.
For example, Annex A of Fragmentary Order 09-109, which identifies requirements that must be included in DOD contracts under which private security contractors will be armed, establishes that documentation should be maintained attesting that each armed private security contractor employee has been successfully familiarized with, and has met the qualification requirements established by DOD or another U.S. government agency for, each weapon the employee is authorized to possess. Similarly, the order requires that employees be trained on the law of armed conflict and the rules for the use of force but does not provide specifics to be included in the training. Contracts also contain provisions to ensure that training does not lapse. For example, DOD contracts performed in Iraq or Afghanistan must provide that if the contractor fails to retrain an armed employee within 12 months of the last training date, the employee will lose authorization to possess and carry a weapon in Iraq. Individual task orders may reiterate employee training requirements.

Fragmentary Order 09-109 makes contracting officer representatives responsible for monitoring the contractor's performance and compliance with contractual requirements, including compliance with all applicable laws, regulations, orders, and directives. These representatives are co-located at the contractor's site to facilitate day-to-day contract oversight. According to DOD officials, contracting officer representatives periodically review individual private security contractor personnel training records to ensure that the training requirements have been met. Additionally, the Defense Contract Management Agency (DCMA) conducts reviews to ensure that contracting officer representatives are providing proper oversight. In February 2008, DCMA began using a series of checklists it developed to guide inspections of contracting officer representatives and to confirm that these representatives are maintaining the appropriate documentation and providing sufficient contractor oversight. According to DCMA officials, these checklists were developed by translating contract requirements and other DOD guidance into a tool that could be used for an objective evaluation. The checklists may vary by contract and have been tailored for specific areas of contract performance. For example, while aspects of training may be found on multiple checklists, DCMA has developed a specific training checklist. Among the items checked are whether the contractor determined that personnel had been trained on the required subjects, whether a training plan had been submitted for approval, and whether training remained current.

To confirm that these inspections covered training, we reviewed 215 completed checklists. While the checklists varied in length and scope, each contained 7 to 54 items, including several related to training. For example, one checklist asked if the contractor ensured that all guard force personnel were trained and authorized to be armed before beginning their duties. Another checklist we reviewed asked if the contractor's training records validated training, certifications, and recertification. The checklists we reviewed generally documented no concerns about training. However, 7 of the checklists contained observations that raised concerns about the training of personnel.
Four checklists contained observations indicating that personnel were qualified with a different weapon than the one they were assigned. Another checklist indicated that personnel deployed with little to no training, noting that personnel learned everything about their posts only after they were deployed. Two checklists observed that personnel were not trained in all of the required training subjects.

Additionally, according to DOD officials, the department conducts periodic site visits of private security contractors' Iraq-based training facilities. However, because the DOD personnel responsible for providing oversight of DOD's private security contracts in Iraq are based in Iraq and not elsewhere, such as the United States, these inspections do not regularly include facilities located outside of Iraq, such as contractors' U.S. training facilities. For example, an official at one private security firm we visited indicated that no one from DOD had ever inspected the firm's U.S.-based training facility. Unlike State, which maintains personnel in both Iraq and the United States to provide contract oversight, DOD administers its contracts through the Joint Contracting Command for Iraq and Afghanistan, whose personnel responsible for private security contract oversight are all located in Iraq.

According to State officials, the Worldwide Personal Protective Services contract requires a quarterly inventory of all U.S. government- and contractor-furnished property, including weapons. According to State officials, all operational weapons are government furnished and are issued to the private security contractors by the Regional Security Officer. The Regional Security Officer conducts an annual sight inventory, which is corroborated with the contractors' quarterly inventories and with records from the State Department branch that acquired the weapons. In addition, officials stated that officials in State's high-threat protection office track the quarterly inventories and verify them during periodic program management reviews.

In Iraq, DOD has established a process that includes granting arming authority to private security contractor personnel and conducting reviews of weapons inventories and inspections of private security contractor armories. DCMA also conducts reviews to ensure that private security contractor personnel are properly authorized to carry weapons. In addition, DOD antiterrorism/force protection officials conduct yearly vulnerability assessments of every MNF-I installation or forward operating base with more than 300 personnel. During these assessments, officials check physical security measures and verify that armed contractors, including private security contractor personnel, are carrying the required arming authorization letter and meeting the arming authority requirements. Officials stated that contracting officer representatives are ultimately responsible for ensuring that DOD's private security contractors adhere to the arming regulations. Officials felt that while there were many good contracting officer representatives, some would benefit from additional training on their responsibilities rather than learning them on the job.

Recent audits by State's Office of the Inspector General and the Special Inspector General for Iraq Reconstruction found that weapons were properly accounted for.
In April 2009, State’s Office of the Inspector General published results of a performance audit of security contractor Triple Canopy and concluded that the firm established sound inventory controls at the two facilities State inspected in Iraq. To reach this conclusion, the office conducted an inventory of weapons and reviewed inventory documents maintained by the contractor and by the Regional Security Officer. In June 2009, a joint audit of security firm Blackwater, by State’s Office of Inspector General and the Special Inspector General for Iraq Reconstruction, reached similar conclusions. The audit team was able to verify all weapons randomly selected from weapons assigned to Blackwater personnel. The report attributed their ability to verify the weapons to the level of State oversight through quarterly physical inventories and other periodic reconciliations by State personnel. Additionally, a January 2009 audit of security firm Aegis (a private security contractor) by the Special Inspector General for Iraq Reconstruction observed weapons inventory tests at four locations in Iraq and determined that all items were accounted for. State, DOD, and private security contractors have developed and implemented policies related to the use of alcohol by private security contractor personnel. State has established policies that govern when private security contractors can consume alcohol. For example, State’s Worldwide Personal Protective Services contract prohibits private security contractor personnel from consuming alcohol while on duty and within 6 hours prior to going on duty. Although State does not prohibit alcohol consumption by private security contractor personnel, private security contractors with State told us that they have established policies to govern employee alcohol consumption. Private security contractors with DOD contracts told us that their employees were subject to General Order #1 and thus were prohibited from possessing or consuming alcohol while in Iraq. General Order #1, which was established by the Commanding General of MNC-I, prohibits military personnel or contractors employed by or accompanying U.S. forces from the introduction, purchase, possession, sale, transfer, manufacture or consumption of any alcoholic beverage within MNC-I’s area of responsibility. However, General Order #1 does not apply to private security contractors who support the State Department. Private security contractors we spoke with told us that personnel who violate the established alcohol policies are subject to disciplinary actions and depending on the severity of the use may have their employment terminated. When asked how often individuals have been let go due to alcohol, the contractors indicated that it is not very often. For example, one firm stated that out of an average staffing level of more than 650 non- Iraqi personnel, it has only terminated 7 employees due to violations of the alcohol policy. Homeland Security Presidential Directive 12 and its implementing guidance intend to create a consistent, federalwide approach to ensure that federal employees and contractors with regular access to federal facilities and installations are sufficiently screened for security risk. As we reported in 2006, military commanders and other officials are aware of the risks that contractors pose to U.S. forces in part because of the difficulties in screening employees, particularly foreign and host country nationals. While State and DOD have developed policies and procedures to ensure that U.S. 
While State and DOD have developed policies and procedures to ensure that U.S. citizen personnel and contractors are screened, only the State Department has developed departmentwide procedures to screen foreign national personnel. Efforts within DOD have been stalled by disagreement over how to develop and implement policies and procedures that comply with HSPD-12 while meeting DOD's need to use private security contractor personnel to fulfill security requirements in Iraq. While we acknowledge the difficulties of conducting background screenings of foreign national personnel, the armed nature of private security contractor work creates a need for assurance that all reasonable steps have been taken to thoroughly vet these personnel and to minimize the risk they pose. Without a coordinated DOD-wide effort to develop and implement standardized policies and procedures to ensure that contractor personnel, particularly foreign national private security contractor personnel, have been screened, DOD cannot provide this assurance. Even with established policies and procedures in place, there are inherent risks involved in employing foreign national personnel, making it critical that military commanders and contracting officials understand the risks and limitations associated with background screenings of foreign national personnel. Additionally, until DOD expands and finalizes guidance related to private security contractors, including the development of timelines for combatant commanders, it will not have fully responded to the congressional concerns that led to the development of Section 862 of the National Defense Authorization Act for Fiscal Year 2008.

We recommend the following five actions to help ensure that DOD develops a departmentwide approach to properly screening private security contractor personnel, including non-U.S. citizens.

We recommend that the Secretary of Defense appoint a focal point, at a sufficiently senior level and with the necessary authority, to ensure that the appropriate offices in DOD coordinate, develop, and implement policies and procedures to conduct and adjudicate background screenings in a timely manner. More specifically, the focal point should direct the Office of the Under Secretary of Defense for Intelligence, in consultation with the Under Secretary of Defense for Personnel and Readiness and the Under Secretary of Defense for Acquisition, Technology, and Logistics, to develop departmentwide procedures for conducting and adjudicating background screenings of foreign national contractor personnel and establish a time frame for implementation; develop an effective means to communicate the new procedures to MNF-I so that MNF-I officials can adjust their existing background screening policies and procedures, if necessary, to comport with the new procedures; and develop a training program to ensure that military commanders and contracting officials, including contracting officers and contracting officers' representatives, understand the department's policies and procedures for background screening as well as their roles and responsibilities.
To ensure that DOD fully meets the requirements of Section 862 of the 2008 National Defense Authorization Act, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to: establish minimum processes and requirements for the selection, accountability, training, equipping, and conduct of personnel performing private security functions under a covered contract during a combat operation; direct the geographic combatant commanders, through the Chairman of the Joint Chiefs of Staff, to develop and publish the regulations, orders, directives, instructions, and procedures for private security contractors operating during a contingency operation within their areas of responsibility; provide a report to Congress with the timelines for completing the minimum processes discussed above; and revise the Federal Acquisition Regulation to require the insertion into each covered contract of a clause addressing the selection, training, equipping, and conduct of personnel performing private security functions under such contract.

In commenting on a draft of this report, DOD concurred with two of the five recommendations and partially concurred with three. DOD partially concurred with our recommendation that the Secretary of Defense appoint a focal point at a sufficiently senior level and with the necessary authority to ensure that the appropriate DOD offices coordinate, develop, and implement policies and procedures to conduct and adjudicate background screenings in a timely manner. In its response, the department noted that the Assistant Deputy Under Secretary of Defense for Program Support has been designated to be responsible for monitoring the registration, processing, and accounting of private security contractor personnel in an area of contingency operations. As we noted in this report, the Office of the Under Secretary of Defense for Intelligence (USD(I)) is responsible for developing DOD's background screening policy in conjunction with the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (AT&L) and the Office of the Under Secretary of Defense for Personnel and Readiness (P&R). While we do not dispute the role of the Assistant Deputy Under Secretary of Defense for Program Support in monitoring the registration, processing, and accounting of private security contractor personnel, we do not believe that this office is the correct office to resolve disagreements among the offices responsible for developing DOD's background screening policy. DOD also noted that it is in the process of institutionalizing the Operational Contract Support Functional Capabilities Integration Board. According to DOD, the board will provide senior-level oversight and offer cross-component alternatives and recommendations on current and future capability needs, policies, and investments. Because the board has not yet been established, we were unable to determine whether it would have sufficient authority to implement our recommendation or whether USD(I) would be included on the board. Unless the board is given the authority to resolve the policy differences between USD(I) and AT&L and to direct the development of background screening policies, the disagreements that have hampered the development of screening policies and procedures will continue. In addition, DOD stated that it does not conduct its own background investigations on foreign nationals and lacks the infrastructure to do so.
The department stated that it depends on the Office of Personnel Management (OPM) to conduct its background investigations. While this may be true for background investigations that lead to the granting of security clearances, our report focused on background screenings that do not lead to the granting of security clearances. As we noted in this report, contractors are responsible for conducting background screenings of their foreign national employees using standards, processes, and procedures developed by the contractors themselves or, as in Iraq, developed by the military. In addition, in Iraq, MNF-I has developed its own background screening process to supplement contractor-led screening of private security contractor personnel. However, as we noted, the process used in Iraq has several shortcomings. We believe that in order to meet the intent of this recommendation, the department needs to develop departmentwide standards and procedures for conducting and adjudicating background screenings to assure itself that screenings provide as much background information as possible and that the department has a common understanding of what information is or is not included in a contractor-conducted background screening. Without this information, military commanders may be unaware of the risks that foreign national private security contractor personnel may pose. Regarding the department's comment that it will ensure that the Defense Federal Acquisition Regulation is modified, it is unclear how the clause can be modified until there are standards to include in the clause.

DOD also partially concurred with our recommendation that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to establish minimum processes and requirements for the selection, accountability, training, equipping, and conduct of personnel performing private security functions under a covered contract during a combat operation. As we noted, Section 862 of the fiscal year 2008 NDAA directed the Secretary of Defense, in coordination with the Secretary of State, to develop regulations on the selection, training, equipping, and conduct of private security contractor personnel under a covered contract in an area of combat operations. DOD responded that the Interim Final Rule published in the July 17, 2009, issue of the Federal Register meets the requirements of Section 862 of the fiscal year 2008 NDAA and that the department was also soliciting input from the geographic combatant commanders on this subject. While the Interim Final Rule published in the Federal Register on July 17 contains some minor variations from the May 2009 draft we reviewed for the purposes of this report, our criticisms of the draft continue to apply to the published rule. As we noted in our report, the Interim Final Rule directs geographic combatant commanders to develop procedures consistent with principles established in existing DOD instructions and directives and makes reference to existing DOD regulations in these areas. However, neither the rule nor the referenced documents articulate a process or requirements that geographic combatant commanders can use to ensure that all private security contractor personnel meet screening and security requirements. The Interim Final Rule published in the Federal Register on July 17 contains these same shortcomings.
We continue to believe that DOD should establish minimum processes and requirements for the selection, accountability, training, equipping, and conduct of private security contractor personnel to meet the intent of our recommendation. These processes and requirements could be strengthened, if necessary, by the geographic combatant commanders. As we noted, without these minimum standards DOD will not have reasonable assurance that a minimum level of safety and protection has been met. DOD has taken a similar approach in the past. In December 2006, DOD updated its antiterrorism instruction. The instruction established minimum DOD antiterrorism measures while providing military commanders and civilians with the flexibility to develop more stringent measures if conditions warrant.

In addition, DOD partially concurred with our recommendation that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to direct the geographic combatant commanders to develop and publish the regulations, orders, directives, instructions, and procedures for private security contractors operating during a contingency operation within their areas of responsibility. DOD stated that this had already been accomplished in large part through the issuance of Multi-National Forces-Iraq Operations Order 09-01 in Iraq and OPORD 09-03 in Afghanistan. However, the orders cited by DOD are specific to Iraq and Afghanistan and are not applicable to other geographic commands. Therefore, we believe additional guidance should be developed for the other geographic commands.

DOD concurred with the remainder of our recommendations. However, DOD did not indicate what, if any, specific actions it would take to address the intent of our recommendations. Therefore, we believe DOD needs to more clearly identify what steps it will take to implement these recommendations. The full text of DOD's written comments is reprinted in appendix II. The Department of State did not provide formal written comments on a draft of this report.

We are sending copies of this report to other interested congressional committees, the Secretary of Defense, and the Secretary of State. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-8365. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Carole Coffey, Assistant Director; Johana Ayers, Assistant Director, Acquisitions and Sourcing Management; Vincent Balloon; Laura Czohara; Robert Grace; Jason Pogacnik; Karen Thornton; Cheryl Weissman; and Natasha Wilder.

Our scope was limited to contracts and contractors with a direct contractual relationship with either the Department of Defense (DOD) or the Department of State. For DOD, our analysis of contract materials was limited to contractors performing under DOD's largest security contract in terms of employment, the Theater Wide Internal Security Services contract. Similarly, for State, our contract analysis was limited to contractors performing under the agency's largest contract for security services in terms of employment, the Worldwide Personal Protective Services contract.
Because contractor personnel requiring security clearances are subject to a standard, government-led and adjudicated screening process, our scope in assessing background screening policies and procedures is limited, for reporting purposes, to those covering private security contractor personnel who do not require a security clearance.

To determine the extent to which DOD and State have developed and implemented policies and procedures to ensure that the backgrounds of private security contractor personnel have been screened, we obtained and reviewed governmentwide and DOD documents, including Homeland Security Presidential Directive 12 and DOD regulations related to vetting, background screening, and operational contract support, such as DOD Instruction 3020.41 on operational contract support and Army Regulation 190-56 on the vetting and screening of security guards, as well as DOD Iraq-theater-specific private security contractor guidance, including Fragmentary Order 07-428, Fragmentary Order 08-605, and Fragmentary Order 09-109. Additionally, we obtained and reviewed State documentation related to the processes used to conduct background screening, including the Foreign Affairs Handbook. We also interviewed officials from the various DOD and State offices responsible for developing and implementing policies and procedures related to the background screening of private security contractor personnel, including officials from the offices of the Under Secretary of Defense for Personnel and Readiness, the Under Secretary of Defense for Acquisition, Technology, and Logistics, and the Under Secretary of Defense for Intelligence, as well as State's Bureau of Diplomatic Security. In Iraq, we met with officials from the Defense Contract Management Agency, the DOD agency tasked with administering DOD security contracts; several contracting officers' representatives who provide day-to-day oversight of security contracts; officials from Multi-National Force-Iraq; and State Department officials responsible for oversight of the agency's Worldwide Personal Protective Services contract, including the Regional Security Officer. Additionally, we obtained and reviewed contracts for security services awarded by both DOD and State to determine what screening requirements were included in the contracts, and we obtained copies of contractor background screening plans to determine how contractors intended to screen foreign national employees.

To determine the extent to which DOD has developed regulations to address the elements of Section 862 of the National Defense Authorization Act for Fiscal Year 2008, we obtained and reviewed the act. We also obtained and reviewed DOD's draft regulation, DOD Instruction 3020.pp, being developed by officials in the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (AT&L), and we met with officials from AT&L to discuss the progress made in developing this regulation. Our assessment of the extent to which DOD's draft policy meets the requirements of Section 862 is based on our review of the draft regulation as written in May 2009. We did not evaluate the effectiveness of the draft regulation because it had not yet been finalized and is subject to change.
To determine the extent to which DOD and State have implemented processes to ensure that private security contractor personnel in Iraq are trained, we reviewed DOD and State contracts for security services in Iraq and interviewed the DOD and State officials responsible for ensuring that these requirements have been met. To corroborate statements made by DOD, State, and private security firm officials, we also obtained and reviewed compliance audit checklists from inspections of Theater Wide Internal Security Services contractors conducted by the Defense Contract Management Agency. We selected completed checklists from inspections of the two Theater Wide Internal Security Services contractors we interviewed in the course of our audit. From the full list of completed checklists for the two contractors, we then eliminated checklists from inspections that were not relevant to our audit, including checklists related to trafficking in persons and life support. In total, we reviewed 215 compliance audit checklists from inspections conducted from March 2008 through January 2009. Similarly, to ensure that State conducted training inspections of its Worldwide Personal Protective Services contractors, we obtained and reviewed the two most recent inspection reports for each of the three contractors performing under that contract. We did not evaluate the quality of the training provided under DOD and State contracts.

To examine the measures the two departments have taken to account for weapons used by private security contractors in Iraq, we obtained and reviewed DOD and State arming guidance and policies, including Fragmentary Order 07-428 and Fragmentary Order 09-109. We also interviewed DOD and State officials responsible for providing arming authority for private security contractor personnel, including officials in MNC-I's arming office and officials in the office of the U.S. Embassy Baghdad's Regional Security Officer. We also met with officials from 11 private security firms that currently provide or have recently provided private security services in Iraq. Finally, we reviewed recent reports related to weapons accountability from audit agencies such as the State Department's Office of the Inspector General and the Special Inspector General for Iraq Reconstruction.

To determine what policies DOD and State have developed to govern alcohol use among private security contractor personnel in Iraq, we reviewed DOD and State contracts for security services to determine whether the contracts included any statements on the use of alcohol, and we obtained copies of department policies, such as U.S. Central Command's General Order 1, which governs the conduct of contractors in Iraq. We also discussed alcohol policies with officials from 8 private security firms that currently provide or have recently provided private security services in Iraq.

To obtain the industry's perspective on background screening, training, and other issues, we interviewed officials from three private security industry associations and officials from 11 private security firms that currently provide or have recently provided private security services in Iraq. We selected firms that represented the variety of security services and approaches used in Iraq. For example, we selected firms that provided both high- and low-visibility security services, and we selected firms with large contracts and firms with small contracts.
In addition, to ensure that our discussions with DOD and State private security contractors about background screening were applicable to a wide range of nationalities, we selected firms that recruited employees of a variety of nationalities, including those from the United States, the United Kingdom, Peru, and Uganda. Of the 11 firms, we met with 9 that currently provide or had recently provided security services to DOD, including 3 of the 5 firms providing security services under the Theater Wide Internal Security Services contract. For State, we met with all 3 of the department's Worldwide Personal Protective Services contractors.

To achieve our objectives, we also examined reviews conducted by DOD and State, including those released by DOD's Office of the Inspector General and State's Office of the Inspector General, as well as recent reports issued by the Special Inspector General for Iraq Reconstruction. These reports dealt with issues related to contract management and oversight, weapons accountability, and training.

We conducted this performance audit from August 2008 through June 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

We visited or contacted the following organizations during our review:

Department of Defense
Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, Washington, D.C.
Office of the Under Secretary of Defense for Intelligence, Arlington, Va.
Office of the Under Secretary of Defense for Personnel and Readiness, Washington, D.C.
Office of General Counsel, Washington, D.C.
Office of the J4, Washington, D.C.
U.S. Central Command, Tampa, Fla.
Defense Contract Management Agency, Baghdad, Iraq
Multi-National Forces-Iraq, Baghdad, Iraq
Multi-National Corps-Iraq, Baghdad, Iraq
Multi-National Division-Baghdad, Baghdad, Iraq
Joint Contracting Command Iraq/Afghanistan, Baghdad, Iraq
Army Corps of Engineers Gulf Regional Division, Baghdad, Iraq
Army Corps of Engineers, Logistics Movement Coordination Center

Department of State
Bureau of Diplomatic Security, Washington, D.C.
Office of Acquisitions Management, Arlington, Va.
Office of the Legal Adviser, Arlington, Va.; Baghdad, Iraq
Secretary of State's Panel on Personal Protective Services in Iraq, Washington, D.C.
U.S. Embassy Iraq, Baghdad, Iraq

Department of Justice
Criminal Division, Washington, D.C.
Federal Bureau of Investigation, Washington, D.C.

Other organizations
U.S. Agency for International Development (USAID), Washington, D.C.
Foreign & Commonwealth Office, London, United Kingdom

Industry associations and background screening firms
Private Security Contractor Association of Iraq, Baghdad, Iraq
International Peace Operations Association, Washington, D.C.
British Association of Private Security Companies, London, United Kingdom
employeescreenIQ, Cleveland, Ohio

Private security contractors
Aegis, London, United Kingdom
Armor Group, London, United Kingdom
Blackwater (now known as Xe), Baghdad, Iraq; Moyock, N.C.
Blue Hackle, Baghdad, Iraq
Control Risks Group, London, United Kingdom
DynCorp International, West Falls Church, Va.
Erinys International, London, United Kingdom
Olive Group, Baghdad, Iraq
Raymond Associates, Clifton Park, N.Y.
SOC-SMG, Minden, Nev.
Triple Canopy Inc., Herndon, Va.
Rebuilding Iraq: DOD and State Department Have Improved Oversight and Coordination of Private Security Contractors in Iraq, but Further Actions Are Needed to Sustain Improvements. GAO-08-966. Washington, D.C.: July 31, 2008.
Military Operations: Implementation of Existing Guidance and Other Actions Needed to Improve DOD's Oversight and Management of Contractors in Future Operations. GAO-08-436T. Washington, D.C.: January 24, 2008.
Military Operations: High-Level DOD Action Needed to Address Long-standing Problems with Management and Oversight of Contractors Supporting Deployed Forces. GAO-07-145. Washington, D.C.: December 18, 2006.
Military Operations: Background Screenings of Contractor Employees Supporting Deployed Forces May Lack Critical Information, but U.S. Forces Take Steps to Mitigate the Risk Contractors May Pose. GAO-06-999R. Washington, D.C.: September 22, 2006.
Rebuilding Iraq: Actions Still Needed to Improve the Use of Private Security Providers. GAO-06-865T. Washington, D.C.: June 13, 2006.
Electronic Government: Agencies Face Challenges in Implementing New Federal Employee Identification Standard. GAO-06-178. Washington, D.C.: February 1, 2006.

Currently in Iraq, there are thousands of private security contractor (PSC) personnel supporting DOD and State, many of whom are foreign nationals. Congressional concerns about the selection, training, equipping, and conduct of personnel performing private security functions in Iraq are reflected in a provision of the fiscal year 2008 National Defense Authorization Act (NDAA) that directs DOD to develop guidance on PSCs. This report examines the extent to which (1) DOD and State have developed and implemented policies and procedures to ensure that the backgrounds of PSC employees have been screened, (2) DOD has developed guidance to implement the provisions of the NDAA, and (3) DOD and State have addressed measures on other issues related to PSC employees in Iraq. To address these objectives, GAO reviewed DOD and State guidance, policies, and contract oversight documentation and interviewed agency and private security industry officials.

State and DOD have developed policies and procedures to conduct background screenings of PSC personnel working in Iraq who are U.S. citizens, but only State has done so for foreign nationals. Homeland Security Presidential Directive 12 (HSPD-12) directs U.S. government agencies to establish minimum background screening requirements in order to issue access credentials, but DOD has not developed departmentwide procedures for conducting background screenings of its foreign national PSC personnel. Disagreements among the various DOD offices responsible for developing and implementing these policies and procedures hindered timely execution of the HSPD-12 requirements, and the completion of this development and implementation has been hampered by the lack of a focal point to resolve these disagreements. For example, officials at the Office of the Under Secretary of Defense for Intelligence interpret HSPD-12 as requiring a government screening process for foreign national contractor personnel that is equivalent to the National Agency Check with Written Inquiries (NACI) currently used for U.S. citizen contractor personnel.
But officials at the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics maintain that a NACI-equivalent screening for foreign nationals would not be feasible, given the inherent difficulty of screening foreign nationals and the inconsistent quality of criminal and employment records from one country to another, and that such an approach would severely limit the number of foreign national contractor personnel DOD could use. The offices also differ as to who should approve background screenings, a step known as adjudication. The Commander of Multi-National Forces-Iraq has established a screening process for PSCs, but GAO has identified several shortcomings that limit the effectiveness of this process. For example, the process directs contractors to obtain background screening information from entities that will not provide data to contractors. While DOD has acknowledged the inherent force protection risk it assumes when using contractor employees, without the timely development of standardized policies and procedures, DOD lacks full assurance that all its PSC personnel are properly screened.

While DOD is developing guidance to meet the requirements of the 2008 National Defense Authorization Act, the draft guidance does not meet all of the requirements of that act. For example, the draft guidance does not address the requirement for establishing minimum standards for background screening of PSCs. Instead, it directs the combatant commanders to establish standards for their respective areas of responsibility, though it does not establish time frames within which they should do so. Without addressing these concerns, DOD's draft guidance only partially meets the requirements of the 2008 National Defense Authorization Act.

DOD and State have taken actions on other issues related to PSCs in Iraq. For example, they have implemented similar processes to ensure that PSC personnel are trained and to account for PSC weapons. Both agencies have also developed policies related to alcohol use by PSCs.
SSI provides cash benefits to low-income aged, blind, or disabled people. Currently, the aged SSI population is roughly 1.4 million and the blind and disabled population more than 5.2 million. Those applying for benefits on the basis of age must be 65 or older and financially eligible for benefits; those applying for disability benefits must qualify on the basis of two criteria: financial eligibility and disability eligibility. To qualify for benefits financially, individuals may not have income greater than the current maximum monthly SSI benefit level of $484 ($727 for a couple) or have resources worth more than $2,000 ($3,000 for a couple). To qualify as disabled, applicants must be unable to engage in any substantial gainful activity because of an impairment expected to result in death or to last at least 12 months.

The process SSA uses to determine an applicant's financial eligibility for SSI benefits involves an initial determination when someone first applies and periodic reviews to determine whether the recipient remains eligible. SSI recipients are required to report significant events that may affect their financial eligibility for benefits, including changes in income, resources, marital status, or living arrangements, such as incarceration or residence in a nursing home. To verify that the information provided by a recipient is accurate, SSA generally relies on matching data from other federal and state agencies, including Internal Revenue Service form 1099 information, Department of Veterans Affairs benefits data, and state-maintained earnings and unemployment benefits data. When SSA staff find discrepancies between the income and assets claimed by a recipient and the data from other agencies, they send notices to SSA field offices to investigate further.

To determine a person's qualification for SSI as a disabled person, SSA must determine the individual's capacity to work as well as his or her financial eligibility. To determine whether an applicant's impairment qualifies him or her for SSI benefits, SSA uses state Disability Determination Services (DDS) to make the initial assessment. Once a recipient begins receiving benefits, SSA is required to periodically conduct Continuing Disability Reviews (CDR) to determine whether the recipient's condition remains disabling.

Regarding returning recipients to work, the Social Security Act states that, to the maximum extent possible, individuals applying for disability benefits should be rehabilitated into productive activity. To this end, SSA is required to refer SSI recipients to state vocational rehabilitation agencies for services intended to prepare them for returning to work. The act also provides various work incentives to safeguard cash and medical benefits while a recipient tries to return to work.

To correctly determine an individual's initial and continuing financial eligibility, SSA needs accurate and timely information because it is much easier to prevent overpayments than to recover them. SSA tries to get this information directly from applicants and recipients but also supplements these data through computer matches with other federal and state agencies. To do this, SSA compares federal and state data with information claimed by SSI applicants. In many instances, these matches allow SSA to detect information that SSI recipients fail to report; in other cases, they provide more accurate information.
However, our prior reviews have found that data from computer matches are often quite old and sometimes incomplete. For example, computer matches for earned income rely on data that are from 6 to 21 months old, allowing overpayments to accrue for this entire period before collection actions can begin. This puts SSI at risk because SSA collects only about 15 percent of outstanding overpayments. Another weakness in this process is that SSA does not conduct some matches that could help detect additional overpayments. For example, SSA has not matched data from Aid to Families With Dependent Children (AFDC) to detect SSI recipients who may be receiving benefits from this program.

We have also reported that ineligible prisoners continued to receive SSI benefits, mainly because SSA lacked timely and complete information on their incarceration. Recipients or their representative payees did not report the incarceration to SSA as required, and SSA had not arranged for localities to report such information. SSA told us that it has begun a program to identify SSI recipients in jails who should no longer be receiving benefits.

Our ongoing SSI work is identifying program problems and weaknesses similar to those noted in prior reports. For example, SSA staff have indicated that recipients' reporting of changes in living arrangements is frequently subject to abuse. One common scenario involves recipients who become eligible for SSI benefits and shortly thereafter report to SSA that they have separated from their spouse and are living in a separate residence. SSA field staff suspect that these reported changes in living arrangements take place because recipients become aware that separate living arrangements will substantially increase their monthly benefits. Another ongoing study of SSI recipients admitted to nursing homes has found that despite SSA procedures and recent legislation to encourage reporting such living arrangement changes, thousands of SSI recipients in nursing homes continue to receive full benefits, resulting in millions of dollars in overpayments each year. This happens because recipients and nursing homes do not report changes in living arrangements and because computer matches with participating states to detect nursing home admissions are not done in a timely manner and are often incomplete. Consequently, these admissions and the resulting overpayments are likely to go undetected for long periods.

In a final area related to financial eligibility, we recently reported that between 1990 and 1994, approximately 3,500 SSI recipients transferred ownership of resources, such as cash, houses, land, and other items valued at an estimated $74 million, to qualify for SSI benefits. This figure represents only transfers of resources that recipients actually told SSA about. Although these transfers are legal, using them to qualify for SSI benefits raises serious questions about SSA's ability to protect taxpayer dollars from waste and abuse and may undermine the public's confidence in the program. SSA has acknowledged and supports the need to work with the Congress to develop legislation to address this problem.

SSA has also tested giving its staff online access to state data, which can be obtained immediately by SSA staff as soon as requested and used for a variety of purposes, including verifying the amount of AFDC or other benefit income a client reports. After reviewing this SSA initiative, we concluded that nationwide use of online access to state computerized data could prevent or more quickly detect about $130 million in overpayments due to unreported or underreported income in one 12-month period.
Online access could save program dollars by controlling overpayments and reducing the administrative expense of trying to recover them. In responding to our review, SSA noted that it was exploring options for expanding online access and was examining the cost-effectiveness of doing so. Although some states can currently provide online access to their data inexpensively and easily, SSA has moved slowly in this area. Online access to other federal agencies’ data may also help SSA save program dollars, but SSA has moved slowly here as well. In addition to financial eligibility, for those who apply for disability benefits, SSA must also determine their disability eligibility, that is, their capacity to work. SSA’s lengthy and complicated disability decision-making process results in untimely and inconsistent decisions. Adjudicators at all levels of this process have to make decisions about recipients’ work capacity on the basis of complex and often judgmental disability criteria. Determining disability eligibility became increasingly difficult in the early 1990s as younger individuals with mental impairments began to apply for benefits in greater numbers. Generally, mental impairments are difficult to evaluate, and the rates of award are higher for these impairments than for physical impairments. SSA’s processes and procedures for determining disability have placed the SSI program at particular risk for fraud, waste, and abuse. For example, in 1995, we reported that SSA’s ability to ensure reasonable consistency in administering the program for children with behavioral and learning disorders had been limited by the subjectivity of certain disability criteria. To address these problems, recent welfare reform legislation included provisions to tighten the eligibility rules for childhood disability and remove from the rolls children who qualified for SSI on the basis of less restrictive criteria. It is too early, however, to tell what impact the new legislation will ultimately have on SSI benefit payments and SSA’s ability to apply consistent disability policies to this population. In addition, we reported in 1995 that middlemen were facilitating fraudulent SSI claims by providing translation services to non-English-speaking individuals who were applying for SSI. These middlemen were coaching SSI claimants on appearing mentally disabled, using dishonest health care providers to submit false medical evidence to those determining eligibility for benefits, and providing false information on claimants’ medical and family history. In one state alone, a middleman arrested for fraud had helped at least 240 people obtain $7 million in SSI benefits. SSI’s vulnerability to fraudulent applications involving middlemen was the result of the lack of a comprehensive strategy for keeping ineligible applicants off the SSI rolls, according to our review. SSA told us that half of all SSI’s recently hired field office staff are bilingual, a step that it believes will reduce the involvement of fraudulent middlemen. In light of the difficulty of determining disability and SSI’s demonstrated vulnerability to fraud and manipulation, periodic reviews are essential to ensure that recipients remain disabled. Our work has shown, however, that SSA has not placed adequate emphasis on CDRs of SSI cases. In 1996, we reported that many recipients received benefits for years without having any contact with SSA about their disability.
We also noted that SSA performed relatively few SSI CDRs until the Congress mandated in 1994 that it conduct such reviews. Furthermore, SSA’s processes for identifying and reviewing cases for continuing eligibility did not adequately target recipients with the greatest likelihood of medical improvement. Currently, SSA is implementing new review requirements in the welfare reform law. In addition, SSA had about 2-1/2 million required CDRs due or overdue in the Disability Insurance (DI) program and 118,000 SSI CDRs due or overdue as of 1996. Despite the importance of CDRs for ensuring SSI program integrity, competing workloads from implementing welfare reform legislation will challenge SSA in completing the required number of SSI CDRs. As mentioned previously, the Social Security Act states that as many people as possible who are applying for disability benefits should be rehabilitated into productive activity. We have found, however, that SSA places little priority on helping recipients move off the SSI rolls by obtaining employment. Yet, if only a small proportion of recipients were to leave the SSI rolls by returning to work, the savings in lifetime cash benefits would be significant. Technological and societal changes in the last decade have raised the possibility of more SSI recipients returning to work. For example, technological advances, such as standing wheelchairs and synthetic voice systems, have made it easier for people with disabilities to enter the workplace. Legislative changes, such as the Americans With Disabilities Act, and social changes, such as an increased awareness of the economic contributions of individuals with disabilities, have also enhanced the likelihood of these individuals finding jobs. During the past decade, the proportion of middle-aged SSI recipients has steadily increased. Specifically, the proportion of SSI recipients between the ages of 30 and 49 increased from 36 percent in 1986 to about 46 percent, or about 1.6 million people, in 1995. Thus, many SSI recipients have years of productive work life in which to contribute to the workforce. Despite these factors, SSA has missed opportunities to promote work among disabled SSI recipients. In 1972, the Congress created the plan for achieving self-support (PASS) to help low-income individuals with disabilities return to work. The program allows SSI recipients to receive higher monthly benefits by excluding from their SSI eligibility and benefit calculations any income or resources used to pursue a work goal. SSA pays about $30 million in additional cash benefits annually to PASS program participants. Despite these cash outlays, almost none of the participants leave the rolls by returning to work. SSA has poorly implemented and managed the PASS program. In particular, SSA has developed neither a standardized application containing essential information on the applicant’s disability, education, and skills nor ways to measure program effectiveness. We have recommended that SSA act on several fronts to control waste and abuse and evaluate the effect of PASS on recipients’ returning to work. In general, SSA has agreed with our recommendations and taken some steps to more consistently administer the PASS program. In the past several months, however, some efforts have begun to place a greater emphasis on returning disabled people to work.
The administration is seeking statutory authority to create a voucher system that recipients could voluntarily use to get rehabilitation and employment services from public or private providers and is also seeking legislation to extend medical coverage for recipients who return to work. The Congress has also put forth several proposals in these areas. The problems we have identified in the SSI program are long-standing and have contributed to billions of tax dollars being overpaid to recipients. They have also served to compromise the integrity of the program and reinforce public perceptions that the SSI program pays benefits to too many people for too long. Although many of the changes recently enacted by the Congress or implemented by SSA may result in improvements, the underlying problems still exist. Our work has shown that SSI’s vulnerability is due both to problems in program design and to inadequate SSA management attention to the program. Revising SSA’s approach to managing the program will require sustained attention and direction at the highest levels of the agency, as well as active efforts to seek the cooperation of the Congress in improving the program’s operations and eligibility rules. One challenge for the new SSA Commissioner will be to focus greater agency attention on management of SSI and the future viability and integrity of this program. This concludes my prepared statement. I will be happy to respond to any questions you or other members of the Subcommittee may have. For more information on this testimony, please call Jane Ross on (202) 512-7230 or Roland Miller, Assistant Director, on (202) 512-7246. Social Security Disability: Improvements Needed to Continuing Disability Review Process (GAO/HEHS-97-1, Oct. 16, 1996). Supplemental Security Income: SSA Efforts Fall Short in Correcting Erroneous Payments to Prisoners (GAO/HEHS-96-152, Aug. 30, 1996). Supplemental Security Income: Administrative and Program Savings Possible by Directly Accessing State Data (GAO/HEHS-96-163, Aug. 29, 1996). SSA Disability: Return-to-Work Strategies From Other Systems May Improve Federal Programs (GAO/HEHS-96-133, July 11, 1996). Social Security: Disability Programs Lag in Promoting Return to Work (GAO/T-HEHS-96-147, June 5, 1996). Supplemental Security Income: Some Recipients Transfer Valuable Resources to Qualify for Benefits (GAO/HEHS-96-79, Apr. 30, 1996). SSA Disability: Program Redesign Necessary to Encourage Return to Work (GAO/HEHS-96-62, Apr. 24, 1996). PASS Program: SSA Work Incentive for Disabled Beneficiaries Poorly Managed (GAO/HEHS-96-51, Feb. 28, 1996). Supplemental Security Income: Disability Program Vulnerable to Applicant Fraud When Middlemen Are Used (GAO/HEHS-95-116, Aug. 31, 1995). Social Security: New Functional Assessments for Children Raise Eligibility Questions (GAO/HEHS-95-66, Mar. 10, 1995).
| GAO discussed the Social Security Administration's (SSA) Supplemental Security Income (SSI) program and GAO's decision to designate the program one of its high-risk areas. GAO noted that: (1) the SSI program has had significant problems in determining initial and continuing financial eligibility because of the agency's reliance on individuals' own reports of their income and resources and failure to thoroughly check this information; (2) moreover, the judgmental nature of SSA's disability determination process and SSA's past failure to adequately review SSI recipients to determine whether they remain disabled have also exposed the program to fraud, waste, and abuse; (3) SSA is at risk of paying some SSI recipients benefits for too long because it has not adequately addressed their special vocational rehabilitation needs or developed an agencywide strategy for helping recipients who can enter the workforce; (4) the Congress has recently made several changes that address program eligibility issues and increase the frequency of SSA's continuing eligibility reviews; (5) SSA has also begun addressing its program vulnerabilities and has made the prevention of fraud and abuse a part of its plan for rebuilding public confidence in the agency; (6) however, GAO's concerns about underlying SSI program vulnerabilities and the level of management attention devoted to these vulnerabilities continue; and (7) as part of GAO's high-risk work, it is continuing to evaluate the underlying causes of long-standing SSI problems and the actions necessary to address them.
As part of a uniform set of benefits provided to all veterans who enroll in its health care system, VA, through PSAS, provides prosthetic items to veterans. PSAS has budget and management responsibilities for VA’s provision of prosthetic items, including allocating funding for prosthetic items to VISNs and VAMCs and ensuring veterans receive prescribed prosthetic items in a timely manner. According to PSAS officials, several factors—including expansions in the types of items VA defines as prosthetic items—contributed to an increased demand for prosthetic items between fiscal years 2005 and 2009. VA, through VHA, operates one of the nation’s largest health care systems. VA provides a range of services to veterans who are enrolled in its health care system, such as preventive and primary health care, a full range of outpatient and inpatient services, and prescription drugs. VA’s outpatient care includes providing prosthetic items to those veterans disabled as a result of amputation or permanent impairment of a body part or function. VA classifies a variety of medical devices and equipment as prosthetic items, including artificial arms and legs, eyeglasses, hearing aids, hearing aid batteries, home dialysis equipment and supplies, home respiratory aids, hospital beds, orthoses (orthotic braces, supports, and footwear), pacemakers, telehealth equipment, and wheelchairs. These items range widely in price, from a cane tip that costs about $2 to a microprocessor-controlled knee that can cost more than $100,000. In addition, while the vast majority of prosthetic items are purchased from outside vendors, VA fabricated nearly 4 percent of the artificial limbs and orthoses provided to veterans in fiscal year 2009. Table 1 shows the types of prosthetic items VA provides and specific examples of each type. In fiscal year 2009, the type of prosthetic items for which VA spent the largest amount was surgical implants, which accounted for 27 percent of the more than $1.6 billion VA spent for prosthetic items that year. (See fig. 1.) See appendix I for information on the total costs of and number of prosthetic items provided to veterans in fiscal years 2005 through 2009. The funding VA uses to procure prosthetic items for veterans is made available as part of the appropriations process for VA’s health care services. Each year, VA formulates its annual health care budget by developing estimates of its likely spending for all of its health care services, including prosthetic items. We have previously noted that the formulation of VA’s budget is challenging, as it is based on assumptions and imperfect information on the health services VA expects to provide. For example, VA is responsible for anticipating the service needs of very different veteran populations—including an aging veteran population and a growing number of veterans returning from military operations in Afghanistan and Iraq—and for calculating future costs associated with providing health care services to these populations. VA uses an actuarial model to develop its budget estimates for most of its health care services, including estimates for prosthetic items, and incorporates these estimates in the department’s annual congressional budget justification to the appropriations subcommittees. Rather than receiving an appropriation for each individual health care service it provides, VA receives an appropriation for all its health care services—the Medical Services appropriation.
As a result, VA has considerable discretion in how it allocates appropriated funding among its various health care services. VA allocates the Medical Services appropriation either as specific purpose funding or general purpose funding. Whereas specific purpose funding is restricted to the purposes of individual health care services, such as organ transplant services or readjustment counseling, general purpose funding may be used to cover costs related to any health care service, including services for which specific purpose funding may be insufficient. While most of the funding from the Medical Services appropriation is distributed among the VISNs and ultimately to the VAMCs, according to VA officials, VA also maintains a national reserve to provide additional funding, when needed, to VISNs and VAMCs, as well as for those health care services for which VA allocates specific purpose funds. In addition, during the course of a fiscal year, VA may reallocate funding—that is, adjust how the department allocates its funding—to match spending needs, including redesignating specific purpose funds as general purpose funds or vice versa. Citing significant decreases in the level of care and timely delivery of prosthetic items, VA designated funding for prosthetic items as specific purpose funding in 2001. In general, VA allocates specific purpose funds to PSAS, which in turn allocates them to VISNs; VISNs then allocate funds to VAMCs. These specific purpose funds are for the procurement of prosthetic items as well as the procurement of various components for VA-fabricated or VA-repaired prostheses and orthoses. According to VA officials, these funds do not cover administrative and clinical costs, such as the salaries and benefits of PSAS personnel or labor costs associated with VA fabrication of prosthetic items. Typically, these administrative and clinical costs are covered by a VISN’s or VAMC’s general purpose funds. In addition, VISNs and VAMCs may use their general purpose funds for prosthetic items if spending needs exceed the amount available in specific purpose funds. After physicians and other clinicians at VA medical facilities determine the prosthetic needs of veterans and prescribe specific prosthetic items to meet those needs, PSAS is responsible for processing the prescriptions and providing the prescribed prosthetic items to individual veterans. According to PSAS officials, purchasing agents, generally located at VAMCs, perform administrative actions to process prescriptions for prosthetic items. These administrative actions include activities such as requesting and obtaining additional information from a prescribing clinician, obtaining a price quote from a contractor, and creating a purchase order to authorize the procurement and shipment of an over-the-counter item or the fabrication of a custom-ordered item. PSAS officials stated that the processing of the prescription is considered complete when a prosthetic item has been issued to the veteran from PSAS’s inventory or a purchase order is created for the item. PSAS also has some clinical staff—prosthetists and orthotists—who provide clinical services related to the provision of artificial limbs and orthoses, including participating in the evaluations of prosthetic needs for amputees and, subsequently, designing, fabricating, fitting, and adjusting artificial limbs and custom orthoses.
PSAS officials reported that they provide varying levels of services related to the design, fabrication, fitting, and delivery of artificial limbs and orthoses at 77 locations. PSAS officials are also responsible for the overall administration of VA’s provision of prosthetic items. Specifically, PSAS officials in VA’s central office establish national policies and procedures on VA’s provision of prosthetic items; allocate VA specific purpose funding for prosthetic items among the 21 VISNs; monitor the spending of this specific purpose funding and, if appropriate, facilitate the reallocation of funding among the VISNs; and establish and monitor mechanisms, such as performance measures and goals, to evaluate VA’s performance in providing prosthetic items. VISN prosthetic representatives (VPR), located within each of VA’s 21 VISNs, further allocate specific purpose funding among their VAMCs and, with the assistance of local prosthetics chiefs, support central office efforts to monitor VA’s spending for prosthetic items and VA’s performance in providing prosthetic items. Between fiscal years 2005 and 2009, the annual number of veterans who received prosthetic items through PSAS increased about 50 percent and the total amount VA spent on those items grew by about 60 percent. According to VA officials, a number of factors have contributed to this growth and may contribute to expected increases in the future. These factors include the following: VA has expanded the medical devices and equipment it classifies as prosthetic items. For example, during fiscal year 2008, VA classified biological implants, such as bone and tissue grafts, as prosthetic items. In fiscal year 2009, VA spent about $21 million on biological implants. New technologies in prosthetic items available to veterans may increase costs. For example, in the fall of 2010, PSAS plans to begin providing the X2 microprocessor knee—the latest generation of components for prosthetic legs—to some veterans. According to PSAS officials, this component is expected to add about $40,000 to the cost of each prosthesis using this technology. VA guidance clarifying veteran eligibility for certain prosthetic items expanded the number of veterans receiving prosthetic items. For example, in October 2008, VA released a directive restating the department’s policy on veteran eligibility for eyeglasses. As a result, the number of eyeglasses VA provided to veterans increased by nearly 22 percent, from about 830,000 pairs in fiscal year 2008 to more than 1 million pairs in fiscal year 2009. In addition, VA expanded eligibility for enrollment in its health care system. In 2009, VA raised the income thresholds that define certain veterans’ eligibility for VA health care services, resulting in approximately 260,000 additional veterans gaining eligibility. This may also have increased the number of prosthetic items provided. In each of fiscal years 2005 through 2009, VA’s actual spending needs for prosthetic items differed from the estimates VA reported in its congressional budget justifications for those years, on which the initial allocation to PSAS for prosthetic items was based. As shown in figure 2, VA spent less for prosthetic items than it had estimated in its justifications for fiscal years 2006 and 2007. These differences—about $82 million in fiscal year 2006 and about $150 million in fiscal year 2007—represented 7 and 12 percent, respectively, of VA’s actual spending for prosthetic items during those fiscal years.
In fiscal years 2005, 2008, and 2009, VA spent about $91 million, $83 million, and $183 million more, respectively, than originally estimated (9, 6, and 11 percent, respectively, of VA’s spending for prosthetic items in those fiscal years). VA officials from the VHA Office of Finance and PSAS central office said that they did not perform analysis to determine the specific reasons for the differences between VA’s budget estimates and its actual spending for prosthetic items in a given fiscal year. PSAS officials reported that they do perform some analysis to identify new trends in VA’s spending for prosthetic items, which are taken into account when allocating specific purpose funding for prosthetic items. According to officials, to develop the budget estimates, VHA’s Office of Finance uses the most recently available spending and utilization data in its actuarial model. They noted, however, that these data are 3 years old at the time VA begins to develop budget estimates for a new fiscal year—for example, the actuarial model in VA’s 2010 budget estimate used spending and utilization data from fiscal year 2007. This, coupled with the increased demand for prosthetic items, makes it more difficult to accurately estimate year-to-year PSAS funding needs, according to VA officials. PSAS central office officials reported that they depend upon staff at the VISNs and VAMCs to identify local factors, such as a new surgical service, that could increase demand for prosthetic items, in order to develop more up-to-date estimates for the purpose of allocating specific purpose funding for prosthetic items to VISNs and VAMCs. PSAS officials at each of the 13 VAMCs in our sample identified numerous local factors that can affect spending for prosthetic items during a particular fiscal year. For example, at one VAMC, the prosthetics chief said that the hiring of a new surgeon was expected to increase local spending for certain surgical implants, such as pacemakers, by more than $300,000. This same prosthetics chief also noted that recent increases in the diagnosis and treatment of sleep apnea resulted in an increase of nearly $380,000 in local spending for prosthetic items. In 4 of the 5 fiscal years we reviewed, VA reallocated the funding available for prosthetic items—that is, adjusted the amount of the specific purpose funding for these items—in an effort to better match specific purpose funds for prosthetic items with actual spending needs. Specifically, in fiscal years 2006, 2007, and 2009, VA reduced the amount of specific purpose funding for prosthetic items. During fiscal year 2008, VA allocated an additional $56 million in specific purpose funds from the department’s national reserve in order to meet a request from PSAS for additional funding. (See table 2.) VA based these reallocations on projections of annual spending for prosthetic items developed throughout each fiscal year using year-to-date information on spending. Each year during the third quarter of the fiscal year, for example, VA uses the amount spent on prosthetic items through the first two quarters of the fiscal year to project spending for the rest of the fiscal year and reallocates funding to adjust the amount of specific purpose funding available for prosthetic items accordingly. 
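As a rough illustration of the third-quarter projection described above, the following sketch annualizes two quarters of actual spending and computes the resulting funding adjustment. The straight-line extrapolation is an assumption for illustration only; the report does not spell out VA's exact projection formula, and the figures are hypothetical.

```python
# Minimal sketch of the mid-year reallocation logic described above:
# project full-year spending from the first two quarters, then compute
# the adjustment to specific purpose funding. The straight-line
# extrapolation is an assumption for illustration; VA's exact
# projection method is not detailed in this report.

def project_full_year(q1_spending, q2_spending):
    """Annualize two quarters of actual spending."""
    return (q1_spending + q2_spending) * 2

def funding_adjustment(allocated, q1_spending, q2_spending):
    """Positive result: additional funds needed; negative result:
    excess specific purpose funding that could be reallocated."""
    return project_full_year(q1_spending, q2_spending) - allocated

# Example with illustrative figures (in millions of dollars):
print(funding_adjustment(allocated=1500, q1_spending=390, q2_spending=400))
# 80 -> projects $1,580M against a $1,500M allocation; $80M more needed
```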
In addition to the efforts VA made at the national level to reallocate funds to better match specific purpose funding for prosthetic items with actual spending needs, for 3 of the 5 fiscal years we reviewed, some VISNs and VAMCs used general purpose funds for prosthetic items. VA policy requires that VISNs provide additional funding to PSAS, when necessary, from general purpose funding to ensure the provision of prosthetic items is not delayed for lack of funding. In fiscal years 2005, 2007, and 2008, VISNs and VAMCs provided $91 million, $5 million, and $27 million, respectively, from their general purpose funds to address the difference between allocated specific purpose funding and actual spending needs for prosthetic items. (See fig. 3.) While VHA and PSAS officials acknowledge that the use of general purpose funds for prosthetic items reduced the funding available for other purposes, they emphasized that this use did not compromise any veteran’s medical care. PSAS has performance measures that monitor the timeliness of its processing of prosthetic prescriptions and a number of veteran feedback mechanisms to identify problems in how it provides prosthetic items to veterans. In fiscal year 2009, PSAS’s performance measures showed that nearly all of its prescriptions for prosthetic items met its performance goals. While in many cases, PSAS’s performance measures serve as a reasonable proxy for monitoring the timeliness of veterans’ receipt of their prosthetic items, they may miss some instances in which veterans experience long wait times. Recognizing this shortcoming, PSAS officials rely on a number of other mechanisms—such as feedback submitted through telephone calls from veterans and receipt of veteran evaluation cards—to obtain information on veteran satisfaction that may alert them to timeliness or other problems not reflected in its performance measures. During fiscal years 2005 through 2009, PSAS had in place and monitored two performance measures that assessed the timeliness of administrative actions related to processing prosthetic prescriptions. The first measure, called “delayed orders,” assessed the percentage of prosthetic prescriptions for which the first administrative action related to the prescription, such as researching the cost of the prosthetic item from different commercial vendors, occurred more than 5 business days after the clinical provider submitted it. PSAS’s performance goal related to this measure was to have no more than 2 percent of orders categorized as delayed orders. The second measure, called “consults pending,” assessed the percentage of prosthetic prescriptions that took more than 45 business days to complete the administrative process associated with ordering the prosthetic item; that is, from the time the first administrative action was taken to the time PSAS determined that the order was complete. PSAS’s related performance goal was to have no consults pending; that is, to administratively process all prescriptions within 45 business days. For fiscal year 2009, PSAS largely met its goals for the delayed orders and consults pending performance measures. PSAS calculated its performance relative to these performance measures for processing all prosthetic prescriptions submitted at each of its VAMCs and VISNs during the year. Based on its calculations, PSAS met its delayed order goal of no more than 2 percent delayed orders, and slightly missed its goal of having no consults pending by about 0.3 percent.
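A minimal sketch of how these two measures could be computed from prescription records follows. The goal checks mirror the definitions above (first action more than 5 business days after submission; more than 45 business days from first action to completion), but the record fields and their names are assumptions for illustration, not those of PSAS's actual system.

```python
# Sketch of the two timeliness measures as defined above; the per-
# prescription fields are assumed to hold business-day counts, and the
# field names are illustrative, not those of PSAS's actual system.

def delayed_orders_pct(prescriptions):
    """Share of prescriptions whose first administrative action occurred
    more than 5 business days after submission. Goal: at most 2 percent."""
    delayed = sum(1 for p in prescriptions if p["days_to_first_action"] > 5)
    return 100.0 * delayed / len(prescriptions)

def consults_pending_pct(prescriptions):
    """Share of prescriptions that took more than 45 business days from
    first administrative action to completion. Goal: zero."""
    pending = sum(1 for p in prescriptions
                  if p["days_first_action_to_complete"] > 45)
    return 100.0 * pending / len(prescriptions)

sample = [
    {"days_to_first_action": 2, "days_first_action_to_complete": 30},
    {"days_to_first_action": 7, "days_first_action_to_complete": 40},
    {"days_to_first_action": 3, "days_first_action_to_complete": 50},
]
print(round(delayed_orders_pct(sample), 1))    # 33.3 (misses the 2% goal)
print(round(consults_pending_pct(sample), 1))  # 33.3 (misses the 0% goal)
```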
According to VA, the delayed order and consults pending measures in many cases accurately reflected the timeliness of processing prescriptions for prosthetic items. However, because of a weakness in PSAS’s consults pending measure, some prescriptions that took longer than 45 business days to process were not detected by the measure. Specifically, PSAS officials found that prescriptions could be cancelled and reentered, effectively resetting the clock on their processing time. In one VAMC we visited, for example, the prosthetics chief noted that the VAMC was receiving a number of complaints from veterans on the timely receipt of their prosthetic items. Upon further investigation, she identified more than 3,000 unprocessed prosthetic prescriptions that were not reflected in the VAMC’s consults pending measure because the computer system used to process prescriptions allowed purchasing agents to cancel and reenter the prescriptions that were not meeting the 45-business-day goal. According to PSAS officials, there are a number of legitimate reasons why processing a prescription for prosthetic items can take longer than 45 business days to complete. However, this prosthetics chief told us that, due to high purchasing agent workloads, some of the delays she identified most likely represented orders for prosthetic items that fell through the cracks, and veterans may not have received their prosthetic items until 5 or 6 months after their prescriptions were submitted. Recognizing the limitation in its consults pending measure, PSAS started—at the beginning of fiscal year 2010—to use a new measure, called the “timeliness monitor,” which according to PSAS officials was designed to better assess the timeliness of the complete administrative process of providing a prosthetic item to a veteran and provide better assurance that a prosthetic item was provided in a timely manner. Specifically, PSAS officials said that the timeliness monitor assesses whether both the goals for the delayed order and consults pending measures were met, and whether the prescription was completed by either issuing a prosthetic item directly to the veteran from PSAS’s inventory, or generating a purchase order for the item. PSAS’s goal related to the new timeliness monitor is to have 95 percent of prosthetic prescriptions meet the timeliness monitor performance measure, according to PSAS officials. In the first quarter of fiscal year 2010, PSAS’s timeliness monitor showed that less than 83 percent of prescriptions for prosthetic items met the time frames in the timeliness monitor performance measure. According to PSAS officials at two VISNs, one factor that played a significant role in PSAS not meeting its goal for the timeliness monitor was that the new measure may recognize some prescriptions as incomplete when they have actually been completely processed by PSAS staff. For example, if a veteran does not return to the VAMC to pick up a custom-fit item, such as a pair of orthopedic shoes, the item would not be recorded on the veteran’s prosthetic record even though PSAS staff had completed the administrative process related to the item and it was available for pickup. PSAS officials told us that they are updating their system to allow purchasing agents to close prescriptions that were processed by PSAS but not recorded in a veteran’s record for legitimate reasons, effectively excluding these prescriptions from being considered in the timeliness monitor.
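Based on the officials' description, a per-prescription version of the new monitor might combine the three checks as in the sketch below. The status values and field names are assumptions for illustration; the actual system logic is not detailed in this report.

```python
# Hedged sketch of the fiscal year 2010 timeliness monitor as PSAS
# officials described it: a prescription passes only if both earlier
# timeliness goals were met and the prescription was completed by
# issuing the item from inventory or creating a purchase order.
# Field names and status values are illustrative assumptions.

COMPLETED_STATUSES = {"issued_from_inventory", "purchase_order_created"}

def meets_timeliness_monitor(p):
    """True if a single prescription satisfies all three conditions."""
    return (p["days_to_first_action"] <= 5
            and p["days_first_action_to_complete"] <= 45
            and p["status"] in COMPLETED_STATUSES)

def monitor_score(prescriptions):
    """Percentage of prescriptions meeting the monitor; the goal is 95."""
    met = sum(1 for p in prescriptions if meets_timeliness_monitor(p))
    return 100.0 * met / len(prescriptions)

# A record still awaiting pickup would fail only on its completion
# status, which matches the scenario officials described above.
example = {"days_to_first_action": 3,
           "days_first_action_to_complete": 20,
           "status": "issued_from_inventory"}
print(meets_timeliness_monitor(example))  # True
```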
According to PSAS officials, once the system is updated, PSAS’s timeliness monitor scores should improve considerably. An additional weakness of VA’s performance measures—both the new “timeliness monitor” and the former “consults pending” measure—is that these measures do not always identify cases in which a veteran waits a long time to receive an item. In many cases, the administrative actions related to prosthetic prescriptions do serve as a reasonable proxy for monitoring the timeliness of when veterans received their prosthetic items. For example, when a veteran receives an item out of inventory at a VAMC, the time the prescription is recorded as complete reflects the time the veteran received his or her prosthetic item. However, the completion of processing of a prescription does not always correspond with the time at which the veteran receives the item. In particular, delays that occur for items that must be fabricated for the veteran or are back-ordered by a vendor are not reflected in VA’s performance measures. For example, according to the prosthetics chief at one VAMC we visited, veterans routinely waited 10 to 12 weeks to receive eyeglasses because of manufacturing delays at the facility that produced the eyeglasses—even though PSAS processed the eyeglass prescriptions in a timely manner. That is, once the purchase order was sent to the facility to manufacture the eyeglasses, VA’s system considered the processing of the prescription to be complete. Officials reported that the VA optical laboratory could not meet the unexpected increase in demand that followed guidance that VA issued in October 2008. This guidance restated the department’s policy that veterans whose vision impairment interferes with their participation in their own medical treatment are eligible to receive eyeglasses. The prosthetics chief at the VAMC explained that staff became aware of these delays through feedback from veterans. The VA optical laboratory has since taken steps to improve wait times, including authorizing overtime and using commercial vendors. Further, officials told us that they are planning a renovation of the optical laboratory to improve its operations. PSAS officials stated that they recognize their performance measures have limitations and that they rely on a number of feedback mechanisms—described below—to alert them to timeliness or other problems not reflected in PSAS’s performance measures. At a national level, PSAS uses additional mechanisms to identify timeliness or other problems that are not captured by its performance measures. These include the following: Comments or complaints on VA’s Web site. When VA receives comments, complaints, or other inquiries related to prosthetics through its Web site, the department directs the information to PSAS’s central office, according to PSAS officials. PSAS’s central office either handles the inquiry directly, or routes it to the relevant location or service, such as a VISN or VAMC, to resolve. As of July 2010, PSAS central office officials reported that they have responded to and closed more than 2,085 inquiries received through VA’s Web site. Direct contact with PSAS staff. PSAS central office, VISNs, and VAMCs receive letters, in-person visits, and telephone calls from veterans about complaints or problems with prosthetic items, according to VA officials we spoke with.
If these complaints come in through contacts with PSAS staff at the central office, VISN, or VAMC leadership, the complaints or problems are generally passed on to PSAS staff at the VAMC level for direct action. PSAS officials told us that it is PSAS’s policy to handle complaints and problems in the most direct manner. For example, in one VAMC we visited, officials said that when they receive a complaint, they pass it on to the PSAS purchasing agent responsible for ordering the prosthetic item. The purchasing agent is then responsible for contacting the patient to resolve the complaint or problem. PSAS central office officials reported that while individual VISNs and VAMCs may, to varying degrees, track patient complaints made in person or by letter or telephone, PSAS central office does not systematically track these complaints. In addition to efforts initiated nationally, some VISNs and VAMCs we visited reported that they have developed local mechanisms to further monitor veteran satisfaction with VA’s processing and provision of prosthetic items: VISN-sponsored surveys. One VISN we visited conducts patient satisfaction surveys of veterans who receive prosthetic items from the VAMCs in the VISN. On a quarterly basis, PSAS personnel at the VISN send these surveys directly to a sample of veterans who have received prosthetic items, asking them to rate aspects of PSAS’s performance such as the quality of the prosthetic items they received, the instructions they received, the courtesy and knowledge of the prosthetic staff that they came in contact with, and the time it took to deliver the prosthetic items. The patients return the surveys to the VISN, where PSAS staff summarize the results of the surveys and provide quarterly reports to the prosthetics chiefs and leadership in each of the VAMCs. The VISN requires prosthetics chiefs at VAMCs with a patient satisfaction score below the VISN’s established goal to develop improvement plans. VISN PSAS staff told us that the surveys have enabled them to identify problems and make improvements in how PSAS staff at the VAMCs interact with veterans, how PSAS staff process prosthetic prescriptions, and in the timeliness and quality of the services provided by vendors, such as home oxygen suppliers. Vendor evaluation cards. PSAS officials at one VISN reported that their VISN required some of their vendors—such as vendors for home oxygen and durable medical equipment—to include a patient comment card with the delivery of the prosthetic items. Veterans return these comment cards to the VISN, where VISN PSAS officials review the comments and forward relevant information to VAMC prosthetics offices on at least a quarterly basis. VAMC comment cards. Several VAMCs in our sample provided their own comment cards. Typically, these cards are or will be made available at the PSAS counter and waiting area. PSAS officials at these VAMCs told us that they collect and review the comment cards they receive and address the comments veterans make concerning VA’s processing and provision of prosthetic items on a case-by-case basis. Letters informing veterans to expect delivery of prosthetic items. Officials at two of the five VISNs we visited told us that they use a feature in PSAS’s system for processing prescriptions to generate and send a letter to a veteran each time a prescription is processed.
These letters provide information such as the date the prosthetic item was ordered from a vendor, the date the veteran can expect to receive the prosthetic item, and contact information for the PSAS staff responsible for monitoring the order. PSAS officials expressed confidence that, together, the mechanisms they have in place would alert them to serious timeliness and veteran satisfaction issues. Officials at PSAS’s central office, the VISNs, and the VAMCs in our sample told us about a number of local, regional, and national efforts to enhance management effectiveness and efficiency and improve prosthetic services for veterans. PSAS staff at the 13 VAMCs in our sample reported that they had undertaken local efforts to improve performance. For example, PSAS personnel at one VAMC were working to obtain funding from VA’s Office of Rural Health to place orthotic fitters—technicians who fit orthoses—at community-based outpatient clinics. By placing fitters in these clinics, PSAS officials hope to improve access for veterans—for example, to eliminate the need for veterans to travel for several hours to a VAMC to be fitted for and obtain their orthotic shoes—as well as to relieve the workload of prosthetists and orthotists at the VAMCs in this VISN. In addition, 6 of the 13 VAMCs in our sample had recently completed renovations, were in the process of renovating, or were planning renovations of their laboratories and clinical space. PSAS officials explained that the purpose of these renovations was to provide greater patient privacy or increase their capacity to fabricate artificial limbs within the VAMCs. Some officials further explained that increasing the capacity of their prosthetic laboratories would allow more veterans to receive their prosthetic limbs directly from VA rather than from outside vendors, which could increase convenience for veterans and reduce costs for VA. At the regional level, according to PSAS central office officials, 7 of VA’s 21 VISNs have chosen to centralize the management of PSAS within the VISN. Under a centralized PSAS management structure, the VPR is in charge of managing all aspects of the provision of prosthetic items in the VAMCs within his or her VISN, including the hiring and firing of PSAS personnel such as prosthetics chiefs and purchasing agents, and resolving veterans’ complaints. PSAS’s central office has recommended that VISNs adopt this management structure for PSAS for more than a decade, but as part of VA’s overall decentralized management structure, each VISN’s leadership has the authority to determine how PSAS is managed in its region. Although PSAS’s central office has not collected performance data conclusively showing the benefits of centralized management, officials we spoke with identified several potential benefits. PSAS central office officials stated that a centralized management structure allows for resource sharing within the VISN—for example, PSAS purchasing agents at one VAMC performing duties for other VAMCs within the VISN—and helps ensure greater uniformity of supervision and services. PSAS and VISN officials at three VISNs we visited that had centralized management noted that because centralization shifts costs and decisions related to PSAS personnel from the VAMCs to the VISN, PSAS avoids competing with other health care services within VAMCs for staff resources.
Officials in two of these VISNs also stressed that centralization not only improved efficiency by facilitating the development and implementation of standardized procedures for processing prosthetic prescriptions across the VISN, but also enhanced veteran care by moving some of the day-to-day administrative tasks up to the VISN, thus freeing PSAS staff at the VAMCs to devote more time to meeting veterans’ needs. While in general, the officials we spoke with—both at PSAS’s central office and at VISNs that had adopted a centralized approach to managing PSAS—supported centralization, a few VAMC officials in some centralized VISNs expressed concerns. For example, officials at two VAMCs we visited said that although PSAS was currently meeting the needs of veterans at their facilities, they were concerned that under a centralized management structure local leadership might not have the authority to take appropriate action if the performance of local PSAS staff was not satisfactory. In addition, one of these officials noted that under a prior director, centralization had contributed to a lack of communication between PSAS personnel and VAMC leadership. Specifically, it was this official’s understanding that, since PSAS staff reported directly to the VISN rather than to the leadership at that VAMC, the previous VAMC leadership had at times not included PSAS staff in management meetings and decisions that affected PSAS. PSAS has a number of national efforts to improve the delivery of prosthetic items across VA. These efforts include developing national contracts, conducting site visits to poorly performing VAMCs, providing clinical practice recommendations for physicians who prescribe prosthetic items, obtaining accreditation and certification for prosthetic laboratories and staff, and training new management staff. National contracts. PSAS uses national contracts that, according to PSAS officials, provide prosthetic items to veterans across the country in a more consistent, timely, and cost-efficient manner. PSAS first used national contracts to purchase prosthetic items in fiscal year 2002, and in fiscal year 2009, PSAS had 49 national contracts for prosthetic items ranging from orthopedic shoes and diabetic socks to implantable joints and cardiac pacemakers. According to PSAS officials, national contracts can improve efficiency and timeliness because the specifications, price, and shipping requirements for prosthetic items are determined by the contract rather than by individual purchasing agents. These contracts also help ensure that the quality of the prosthetic items provided to veterans is consistent across the country. According to PSAS officials, PSAS’s use of national contracts has resulted in substantial cost savings since fiscal year 2002. Site visits. Officials from PSAS central office told us that they have begun to conduct site visits to review PSAS operations in a number of VAMCs. Specifically, PSAS officials told us that they are conducting site visits to identify staffing or other problems that lead to poor performance and to make recommendations that should lead to faster and more consistent prosthetic services for veterans at these facilities. For the initial visits, PSAS selected VAMCs and VISNs that performed poorly on PSAS’s performance measures, such as its consults pending measure. As of June 2010, PSAS staff had conducted 42 site visits, and PSAS officials said they plan to conduct reviews in all 153 VAMCs. Clinical practice recommendations.
PSAS has developed 40 clinical practice recommendations, which are guidance documents to help VA clinical staff make appropriate decisions about prosthetic prescriptions. These include prescribing recommendations for orthotic devices, home oxygen equipment, pacemakers, and hip and knee joint replacements. According to PSAS officials, these recommendations help ensure that prosthetic items are provided to veterans in a more consistent manner across the country. Accreditation and certification. PSAS has implemented an initiative to provide additional assurance that VA is providing high-quality prosthetic services and to develop the technical and management skills of PSAS staff. In fiscal year 2007, PSAS established a policy to obtain accreditation for its orthotic and prosthetic laboratories, and certification for all clinical personnel. According to PSAS officials, as of September 2010, PSAS had obtained accreditation, or the accreditation was pending, for nearly all of its 77 orthotic and prosthetic service locations, and certification for 165 of its 172 orthotists, prosthetists, and fitters. Management training. PSAS created a technical intern program to train prospective managers on the operations of PSAS at the VAMC level. According to VA officials, this program is important because a large number of the prosthetics chiefs are nearing retirement and in many cases there are few experienced staff who could replace them. We provided a draft of this report to VA for review. We received technical comments from VA, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. This appendix provides the results of our analysis of data from the National Prosthetic Patient Database (NPPD)—an internal database used by the Department of Veterans Affairs (VA) to administer its provision of prosthetic items that contains information on prosthetic items furnished to veterans. This appendix presents the total costs and number of prosthetic items provided to veterans in fiscal years 2005 through 2009. Table 3 shows the total costs and number of prosthetic items provided to veterans by type of prosthetic item. For fiscal year 2009, the total cost for various types of prosthetic items ranged from about $93 million for orthoses to about $439 million for surgical implants. Table 4 shows the total costs and number of prosthetic items provided to veterans by Veterans Integrated Service Network (VISN). For fiscal year 2009, the total costs for prosthetic items for VISNs ranged from about $34 million in VISN 5 (Capitol Health Care Network) to about $164 million in VISN 8 (VA Sunshine Healthcare Network). Table 5 shows the total costs and number of prosthetic items provided to veterans by VA station. For fiscal year 2009, the total costs of prosthetic items at individual stations that provided any prosthetic items ranged from less than $1 million at Pittsburgh HCS-Highland Dr., Chattanooga, and several other stations to about $39 million at the San Antonio VAMC. 
In addition to the contact named above, Kim Yamane, Assistant Director; Susannah Bloch; Matthew Byer; Aaron Holling; Lisa Motley; Daniel Ries; and Said Sariolghalam made key contributions to this report. | In fiscal year 2009, the Department of Veterans Affairs (VA) provided more than 59 million prosthetic items to more than 2 million veterans. After VA physicians and other clinicians prescribe prosthetic items, VA's Prosthetic and Sensory Aids Service (PSAS) is responsible for processing prescriptions and providing prosthetic items to veterans. PSAS is also responsible for managing VA's spending for prosthetic items--more than $1.6 billion in fiscal year 2009. In fiscal year 2008, this spending exceeded VA's budget estimates. Each year, VA makes an initial funding allocation for prosthetic items, and may reallocate by increasing or decreasing the funding available for prosthetic items during the fiscal year. GAO was asked to examine (1) how, for fiscal years 2005 through 2009, VA's spending for prosthetic items compared to budget estimates, and the extent to which VA reallocated funding for prosthetic items; (2) how PSAS monitors its performance in processing and providing prosthetic items to veterans; and (3) the efforts VA has undertaken to improve PSAS's performance. GAO reviewed VA's spending and funding allocation data for fiscal years 2005 through 2009. GAO also reviewed documents and interviewed VA officials at headquarters, 5 of VA's 21 regional health care networks, called VISNs, and 13 VA medical centers (VAMC). VA spending for prosthetic items for each of fiscal years 2005 through 2009 differed from budget estimates, varying in amounts--both under and over budget estimates--ranging from 6 to 12 percent of VA's overall spending for prosthetic items during the 5 fiscal years. In fiscal years 2005, 2008, and 2009, VA spent about $91 million, $83 million, and $183 million more, respectively, than VA originally estimated for its congressional budget justification. Conversely, in fiscal years 2006 and 2007, VA spent about $82 million and about $150 million less, respectively, for prosthetic items than estimated. VA officials reported that they did not perform analysis to determine the specific causes of these differences, but that new trends are taken into account when allocating funding to be used for prosthetic items. In an effort to more closely match funds available for prosthetic items to actual spending needs, VA reallocated the funding available to PSAS and relied on VISNs and VAMCs to address the need for additional funding for prosthetic items at specific VA locations. For example, in fiscal year 2008, when an additional $83 million in funding was required for prosthetic items, VA reallocated $56 million to PSAS and VISNs and VAMCs covered $27 million in spending for prosthetic items. PSAS has performance measures that monitor the timeliness of its processing of prosthetic prescriptions and a number of veteran feedback mechanisms to identify problems in how it provides prosthetic items to veterans. In fiscal year 2009, PSAS's performance measures showed that nearly all of its prescriptions for prosthetic items met its performance goals. While in many cases, PSAS's performance measures serve as a reasonable proxy for monitoring the timeliness of veterans' receipt of their prosthetic items, they may miss some instances in which veterans experience long wait times. 
Recognizing this shortcoming, PSAS officials rely on a number of other mechanisms--such as telephone calls from veterans and receipt of veteran evaluation cards--to obtain information on veteran satisfaction that may alert them to timeliness or other problems not reflected in their performance measures. VA is making a number of efforts at various levels to improve its performance in providing prosthetic items to veterans. For example, in 7 of VA's 21 VISNs, PSAS personnel at the VISN level centrally manage the provision of prosthetic items at all of the VAMCs in their region. According to VA officials in several VISNs that have adopted this centralized management structure, giving VISN-level PSAS personnel more authority has allowed local PSAS personnel at the VAMCs to devote more time to meeting veterans' needs, and in some cases, has enhanced management effectiveness and efficiency. At the national level, in fiscal year 2009, PSAS had 49 national contracts for prosthetic items, which, according to PSAS officials, help ensure that the quality of prosthetic items provided to veterans is consistent across the country. VA provided technical comments that GAO incorporated as appropriate. |
The United States provides military equipment and training to partner countries through a variety of programs. Foreign partners may pay the U.S. government to administer the acquisition of materiel and services on their behalf through the FMS program. The United States also provides grants to some foreign partners through the FMF program to fund the partner’s purchase of materiel and services through the process used for FMS. In this report, we refer to FMS, FMF, and other State Department programs implemented by DOD as “traditional” security assistance programs. In recent years, Congress has expanded the number of security cooperation programs to include several new programs, with funds appropriated to DOD as well as administered and implemented by DOD, that focus on building partner capacity (BPC). See table 1 for descriptions of the BPC programs included in our report. DSCA oversees program administration for both traditional programs and newer BPC programs. DSCA establishes security assistance procedures and systems, provides training, and guides the activities of implementing agencies. Implementing agencies of the military departments—the Army, Navy, and Air Force—are responsible for preparing, processing, and executing the vast majority of security assistance agreements. While these implementing agencies maintain their own unique systems and procedures, DSCA provides overall guidance through the Security Assistance Management Manual and associated policy memos. DSCA provides education and training to security cooperation officials through its Defense Institute of Security Assistance Management. Both the traditional and BPC programs that DSCA administers use the FMS process to provide security assistance, but, as shown in figure 1, some roles, responsibilities, and actors differ. In contrast to traditional programs, under the BPC programs, the United States consults with the partner country, but takes the lead in identifying partner requirements and funds, obtains, and delivers equipment on the partner’s behalf. The form of the FMS process used to implement BPC programs is referred to as the “pseudo-FMS” process. While the many steps of the FMS and pseudo-FMS processes can be grouped in different ways, they fall into five general phases: assistance request, agreement development, acquisition, delivery, and case closure. See figure 1 for a summary of selected entities and their roles in these phases of the FMS and pseudo-FMS processes. Assistance Request. During the assistance request phase, in traditional FMS, the partner country identifies its requirements (needed materiel or services) and documents them in a formal letter of request. Implementing agencies as well as SCOs and officials at DOD’s six geographic combatant commands may provide input to the assistance request. In the pseudo-FMS process, SCOs and combatant commands consult with partner countries and take the lead in identifying partner country requirements and drafting the request, sometimes with input from the partner country. Agreement Development. During the agreement development phase, the implementing agency enters the letter of request into the
DSCA reviews the draft agreement and coordinates with the State Department before sending any congressional notifications; such notifications may be required for the traditional FMS process based on the dollar value or sensitivity of the potential sale, but are required for all pseudo-FMS programs. When approvals are in place, DSCA conducts a final quality assurance review and State performs a final review. In traditional FMS, DSCA authorizes the implementing agency to send the agreement to the partner country for acceptance; in the pseudo-FMS process, the implementing agency accepts the agreement on behalf of the combatant command.

Acquisition. During the acquisition phase, implementing agencies requisition from existing supply or procure equipment and services using the same procedures they use to supply the U.S. military. The process is the same for both FMS and pseudo-FMS. Case managers at implementing agencies monitor acquisitions and enter status information into their data systems. Unlike the single information system used to develop agreements, the information systems used in the acquisition phase are not common across implementing agencies. However, DSCA has created a web-based overlay, the Security Cooperation Information Portal, which imports some of the information available in implementing agency data systems and is accessible over the Internet by security cooperation and partner country officials.

Delivery. In the traditional FMS process, the partner country takes custody of materiel in the United States and is responsible for arranging delivery. The partner country may pay to use the U.S. military transportation system, but often uses its own freight forwarder—an authorized agent responsible for managing shipment to the final destination. If shipments are incomplete or otherwise deficient, the partner country may file a supply discrepancy report to seek redress. All BPC program shipments use the U.S. military transportation system or other U.S. government-procured transportation, with the SCO responsible for providing the delivery address, ensuring foreign customs requirements can be met, jointly checking shipments for completeness with the partner country, and preparing any needed supply discrepancy reports. Implementing agencies are responsible for conducting BPC deliveries and confirming that SCOs are ready to receive a planned delivery. For both FMS and pseudo-FMS processes, DOD uses the Enhanced Freight Tracking System (EFTS), a secure web-based application accessible within the Security Cooperation Information Portal designed to provide visibility into the security assistance distribution system.

Case Closure. An FMS case is a candidate for closure when all materiel has been delivered, all ordered services have been performed, no new orders exist or are forthcoming, and the partner has not requested that the case be kept open. At case closure, any remaining case funds may be made available to the country for further use. Pseudo-FMS cases may be submitted for closure as soon as supply and services are complete.

DOD has undertaken internal improvement efforts designed to address challenges in implementing security cooperation and security assistance programs and to improve the timeliness of U.S. efforts. DSCA has also undertaken improvements recommended by its internal improvement program, begun in 2008, which has reviewed DSCA and implementing agency processes. In fiscal year 2010, the Secretary of Defense initiated a comprehensive review of DOD's internal processes.
The results of the task force review led to recommendations focusing on areas for improvement, including identification of partner requirements; acquisition and transportation; and training, education, and workforce development. DOD and DSCA have initiated a variety of efforts to implement the recommendations, and a follow-up task force report describes the status of action on the recommendations.

In focus groups we conducted in 2012 at all six combatant commands and interviews with officials at SCOs in 17 countries, security cooperation officials reported three types of challenges: (1) optimizing training and workforce structure, (2) defining partner country requirements, and (3) obtaining information on the acquisition and delivery status of assistance agreements. DSCA has undertaken reforms to address all three challenges. While reforms are addressing the first two challenges in the short term, reforms to address information system gaps are more long-term focused and are expected to take years to complete.

Security cooperation officials reported that the existing training and workforce structure presented a challenge to successfully implementing security assistance. Specifically, focus groups at four of the six combatant commands indicated that training or staffing of SCOs was insufficient, limiting SCO effectiveness as officers develop assistance requests, build relationships in-country, and track assistance agreements through to delivery. These focus group participants and officials at the SCOs and military departments reported that a number of changes were needed, such as more training on newer security cooperation authorities, additional refresher courses, and ensuring that security cooperation officers meet with their military department points of contact as part of their predeployment training for their SCO assignments. In addition, according to focus groups and interviews we conducted, SCOs were insufficiently staffed or rotations in the field were not long enough. For example, some SCOs reported having only one security cooperation officer, and rotations sometimes lasted only 1 year, which was often less than the cycle time to develop and execute a security assistance agreement. Focus group participants said a lack of institutional memory in these SCOs created challenges for new officers who must assume responsibility for ongoing security cooperation efforts.

DSCA has initiated a number of reforms designed to address training and workforce structure challenges previously identified by DOD and raised again during our focus groups and interviews. DOD recognized the need for improved training and workforce management as early as 2009, when the Deputy Secretary of Defense included efforts to improve security cooperation training in his top 10 Office of Management and Budget high-priority performance goals for 2010 and 2011. DSCA is developing several courses to address reported gaps in knowledge and to increase the percentage of the security cooperation workforce that receives training. For example, the Deputy Secretary of Defense declared in 2009 that DSCA must plan to educate 95 percent of the security cooperation workforce by the end of fiscal year 2011.
As of September 2012, DSCA has consistently reported that this goal has been met or exceeded since it was first achieved in June 2011. DSCA and the Defense Institute of Security Assistance Management are currently identifying key positions in the security cooperation community and developing improved procedures to help ensure the selection of well-qualified candidates for those positions.

In addition to monitoring the percentage of people trained, DOD has reforms underway to address concerns about the content of the training for, and the staffing of, security cooperation positions. In 2011, DOD's Defense Institute of Security Assistance Management began expanding a required course for DOD personnel responsible for security assistance and security cooperation management in overseas positions, such as at SCOs, combatant commands, and Defense Attaché Offices. The expanded course now includes information that security cooperation personnel identified as important, such as a section on BPC programs. As of September 2012, the Institute reported that students found the initial expansion of the required course better covered the planning and execution of the wide variety of security cooperation programs. The course changes are now complete, and the Institute plans to offer the final expanded course beginning in October 2012. As a result of changes to this course, security cooperation officers are now able to meet—in person and by video-teleconference—with the DOD points of contact they will work with to implement security cooperation programs once they are in the field. In addition, these new course offerings introduce the topic of security cooperation to U.S. government officials who interact with partner countries but do not necessarily work on security cooperation programs.

In addition to improvements to course offerings, DSCA has created additional resources for security cooperation officials. In April 2012, DSCA added a new chapter devoted specifically to building partner capacity to the Security Assistance Management Manual. DSCA and the Office of the Secretary of Defense for Policy have created a tool kit that provides points of contact and implementation guidance for each assistance program. Mandatory training for security cooperation officers includes a review of this tool kit.

Focus group participants in five of six combatant commands and officials at 9 of the 17 SCOs noted challenges in identifying and defining partner country assistance requirements. These officials noted that partner countries did not have enough experience or expertise to identify their requirements or develop an assistance request that DOD can act upon. Further, focus groups at four of the six combatant commands reported that SCOs lacked the experience or capacity necessary to identify equipment to match the partner country's requirements. For example, officials in two focus groups reported that some SCOs lacked staff with expertise to develop either traditional or BPC assistance requests.

Since 2009, DOD has initiated reforms to improve the process of developing assistance requests, intended to reduce implementation delays and improve the effectiveness of assistance to partner countries. These reforms include developing new training courses and providing in-country advisors to help country officials identify short-term and long-term requirements and strategies to meet those requirements.
DOD has also reformed its own processes for defining requirements to improve the long-term effectiveness of security cooperation programs and provide short-term solutions for meeting requirements using assistance requests. For example, beginning in 2011, DOD issued new policies and guidance to help combatant commands and implementing agencies plan for, and better develop, security assistance requests. Also in 2011, DSCA established a strategic planning support group to assist combatant commands with early identification and resolution of issues related to capability requirements and certain types of assistance requests. In addition, DSCA established Expeditionary Requirements Generation Teams whose purpose is to help the combatant commands, partner countries, and security cooperation officers identify and refine a partner country's requirements. These teams are available for both traditional and BPC programs upon request by combatant commands. DSCA noted that these teams would be particularly useful when a security cooperation officer lacks experience or familiarity with the type of equipment in question. DSCA provided pilot teams for Bulgaria, Iraq, and Uzbekistan and, after the pilot was determined to be successful, sent teams to assist Armenia, the Philippines, and, again, Iraq. The pilot teams produced 34 assistance letters of request, including some for FMF programs.

DOD officials participating in focus groups at all six combatant commands and officials at 16 of the 17 SCOs we interviewed reported difficulties obtaining information from DSCA and the implementing agencies of the military departments—the Army, Navy, and Air Force—on the status of assistance agreements throughout the security assistance process. These officials reported that obtaining information on acquisition and delivery status was particularly problematic. According to DSCA's Security Assistance Management Manual, in order to facilitate information sharing regarding assistance agreement status, the implementing agencies must communicate frequently with DSCA, the combatant commands, and the security cooperation officers, as well as with other entities involved in executing security assistance programs. However, focus group participants at the commands and the security cooperation officers we interviewed reported a number of problems obtaining the information they need in order to implement security assistance programs throughout the process. Specifically, they reported that DSCA and implementing agency information systems were difficult to access; implementing agency information systems often did not contain current information; these systems often did not contain the specific type of information the officials needed; implementing agencies generally did not proactively provide the information that was available; shipping documentation was often missing or inadequate; and deliveries arrived when the SCOs did not expect them.

Security cooperation officials we interviewed reported examples of this lack of information delaying assistance, increasing costs, or negatively affecting their ability to keep partner countries and senior officers at the combatant commands informed about the progress of the assistance agreements. For example, security cooperation officers at four SCOs reported that equipment was held by the partner country's customs agency because the delivery lacked proper documentation or proper address labels, and additional customs fees were incurred while the security cooperation officers found the missing information.
Security cooperation officers in two SCOs noted instances where shipments were warehoused in a customs office for 2 years because they had no addresses or were improperly addressed. Security cooperation officers in three SCOs reported discovering equipment at ports and airports that had arrived without advance notice.

In addition to receiving reports of challenges encountered by officials using the various DOD information systems, we analyzed the extent to which data were available in the delivery tracking information system. DOD has created an information system intended to provide a single, consolidated, authoritative source for tracking security assistance shipment information. However, we found that DOD is not ensuring that entities charged with carrying out deliveries are fully providing data for this system. The Security Assistance Management Manual recommends that SCOs use the EFTS to maintain awareness of incoming shipments to the partner country when the items are shipped using the U.S. Defense Transportation System. EFTS, accessible through the Security Cooperation Information Portal, collects, processes, and integrates transportation information generated by the military services, the Defense Logistics Agency, the U.S. Transportation Command, participating carriers, freight forwarders, and partner countries—all of which can play a role in the equipment delivery process and in populating the information systems.

However, EFTS is not currently populated with sufficient information to provide in-transit visibility. The system currently provides information regarding when cargo leaves the supply source for most security assistance deliveries, but we found that information availability decreases as deliveries transit through intermediate points and on to final destinations. EFTS provides limited information documenting, for example, the date a shipment departs the United States and arrives at a port in the recipient country. In addition, the system documents only about 1 percent of the dates that equipment arrived at the in-country final destination. Figure 2 provides the percentages of fields in EFTS for which participating entities provided data, based on a sample of FMF deliveries for fiscal years 2007-2011; the sketch below illustrates how such field-population rates can be computed.

The lack of data in EFTS is caused by inconsistent participation by the entities executing deliveries, which need to provide the data that would populate the system. Equipment deliveries for traditional security assistance programs are often executed by partner country freight forwarders. According to DOD officials, some freight forwarders have been reluctant to participate in EFTS and must be directed by the partner country to do so, possibly requiring a change to the freight forwarder's contract with the partner country. Although DSCA can issue guidance to freight forwarders, according to DSCA officials, it has no authority to require them to follow the guidance. The 2008 DSCA memo announcing the introduction of EFTS notes that the success of the program relies greatly on the participation of partner countries and their freight forwarders, and DSCA officials have since discussed ways to encourage freight forwarders to participate in the EFTS system and report final shipments. DSCA officials have acknowledged that there is still work to be done to address challenges in implementing EFTS. DOD has reforms underway for additional information systems to address the lack of information across the process.
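As context for figure 2, a field-population rate is simply the share of shipment records that contain a value for a given milestone. The following minimal Python sketch shows one way to compute such rates; the milestone field names and sample records are hypothetical stand-ins, not actual EFTS data elements.

```python
# Illustrative sketch: estimate how completely delivery-milestone fields
# are populated in a shipment-tracking extract. Field names and records
# are hypothetical, not actual EFTS data elements.

MILESTONES = [
    "left_supply_source",      # cargo departed the DOD supply source
    "departed_united_states",  # shipment left a U.S. port of embarkation
    "arrived_in_country",      # shipment reached a port in the recipient country
    "final_destination",       # equipment received at the in-country destination
]

shipments = [
    {"left_supply_source": "2010-03-01", "departed_united_states": "2010-03-15",
     "arrived_in_country": None, "final_destination": None},
    {"left_supply_source": "2011-07-09", "departed_united_states": None,
     "arrived_in_country": None, "final_destination": None},
    {"left_supply_source": "2009-11-20", "departed_united_states": "2009-12-02",
     "arrived_in_country": "2010-01-05", "final_destination": None},
]

def population_rates(records, fields):
    """Return the share of records with a non-empty value for each field."""
    total = len(records)
    return {f: sum(1 for r in records if r.get(f)) / total for f in fields}

for field, rate in population_rates(shipments, MILESTONES).items():
    print(f"{field:>24}: {rate:6.1%} populated")
```

In a real extract the same calculation would run over thousands of records, with population typically highest at the supply source and lowest at the final destination, the pattern figure 2 indicates.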
In an effort to develop more complete, comparable, and detailed data on security assistance agreement execution, DSCA is developing a new electronic system, the Security Cooperation Enterprise Solution, to aggregate data from the separate computer management systems used by DOD's implementing agencies and standardize the handling of security assistance agreements regardless of the assigned military service. The system is intended to improve visibility into the acquisition and later phases of the security assistance process. Agency leaders have noted that the Army's Security Assistance Enterprise Management Resource system has already contributed to significantly increased management visibility across the entire security assistance process for Army-implemented assistance agreements and will bolster efforts to make similar management tools available across implementing agencies, particularly once incorporated into the Security Cooperation Enterprise Solution. The Security Cooperation Enterprise Solution is intended to be a long-term solution to information management challenges. DSCA officials expect to provide the system to one of the implementing agencies in 2015 and plan to complete system implementation in 2020, when the remaining two implementing agencies will have access to the system.

DSCA has also initiated reforms intended to increase visibility for specific phases of the process. For example, in 2010, DSCA undertook an effort to improve the quality of the documentation included with each shipment. As a result, the Defense Institute of Security Assistance Management issued a training guide in 2012 to improve the accuracy of addresses on shipments. Furthermore, to address problems with the agreement development phase, DSCA is working with the Defense Contract Management Agency to develop a way to make contract information available to FMS customers via the Security Cooperation Information Portal. The stated goal is to allow customers to search for the information as well as to create reports containing contract information that can be sent to a range of FMS customers. The training and workforce structure reforms discussed earlier may also address some of the reported challenges regarding information accessibility.

DSCA has collected data that show improved timeliness in processing security assistance requests and developing security assistance agreements. However, assessing the timeliness of the entire security assistance process is difficult because DSCA lacks timeliness performance measures for the other phases and for the overall process. For example, the agency does not measure the timeliness of assistance acquisition, delivery, and case closure, which usually comprise the most time-consuming activities. According to Standards for Internal Control in the Federal Government, U.S. agencies should monitor and assess the quality of performance over time. Furthermore, the Government Performance and Results Act of 1993, as amended, requires agencies to develop performance measures, monitor progress on achieving goals, and report on their progress in their annual performance reports. Our previous work has noted that the lack of clear, measurable goals makes it difficult for program managers and staff to link their day-to-day efforts to achieving the agency's intended mission.

DSCA has access to many security assistance management systems that implementing agencies use and maintain to manage the security assistance process.
DSCA routinely extracts selected information from these systems to oversee the process and has established some performance measures to assess timeliness in various phases. DSCA data indicate improvements in the timeliness of assistance request processing. In the assistance request phase, DSCA measures the number of security assistance requests and the time spent processing them after they are received. DSCA measures processing time as the number of days from the time a request is formally received until it is "complete," or ready for the agreement development phase. According to DSCA data, the number of days necessary for processing assistance requests once they are formally received has improved from about 22 days in fiscal year 2008 to about 13 days in fiscal year 2011.

While DOD has improved its response to formal requests, the time a partner country or combatant command perceives as needed to develop a security assistance request may differ from the portion of that time under DSCA's oversight. A significant amount of the time devoted to the development of assistance requests takes place before the customer submits an assistance request to an implementing agency or DSCA. For example, U.S. officials such as combatant command and SCO staff, as well as experts on relevant defense equipment, may work intensively with partner country officials before the request is officially submitted.

DSCA data show that implementing agencies have reduced the time spent during the agreement development phase of the security assistance process. DSCA uses a single data system to collect detailed information from implementing agencies on the time required to develop security assistance agreements. This information allows common performance measurement that provides a basis for focused reforms to reduce process times in this phase. Aggregate DSCA data for all agreements indicate a reduction in the average, or mean, number of days for an assistance agreement to be fully developed and offered to partner countries from 124 days in fiscal year 2007 to 109 days in fiscal year 2011, with a fiscal year 2009 low of 103 days (see fig. 3). In addition, sample DSCA data we analyzed indicate that agreement development is faster for BPC programs than for the traditional FMF security assistance program for the 17 countries in our sample. During fiscal years 2007 through 2011, FMF security assistance agreement development for our 17 sample countries took an average of 89 days, whereas agreement development for BPC programs in sample countries other than Iraq and Afghanistan took an average of 76 days. Agreement development for BPC assistance projects in Afghanistan and Iraq was faster still—36 days on average. See table 2.

DOD officials we interviewed suggested several factors that may be contributing to the faster agreement development time for Afghanistan, Iraq, and other BPC programs in our sample. For example, funding for BPC programs may need to be obligated more quickly than traditional security assistance funding; intensive management offices for Afghanistan and Iraq help expedite agreement development for those partner countries; and DOD's combatant command for the region including Iraq and Afghanistan has created a task force to enhance communication of command priorities. Furthermore, our analysis of sample data did not indicate that the improved timeliness in developing BPC security assistance agreements decreased the timeliness of developing FMF agreements for our sample countries.
We found that the time spent developing FMF agreements in our selected countries decreased slightly from over 100 days in fiscal year 2007 to less than 90 days in fiscal year 2011.

Despite reducing the time spent in the agreement development phase, implementing agencies have not consistently met DSCA's established timeliness goal. In 2010, DSCA defined this goal as providing security assistance agreements to customers on or before the anticipated offer date for at least 85 percent of agreements. The anticipated offer date is the target date by which the implementing agency is to complete agreement development and offer the agreement for acceptance. As shown in table 3, DSCA data indicate that in fiscal year 2011, implementing agencies met DSCA's timeliness goal for BPC agreements, 88 percent of which were completed by the anticipated offer date. For traditional agreements, the implementing agencies fell short of this goal, regardless of the complexity of the agreement.

In the acquisition phase of the security assistance process, DSCA has not established performance measures to assess the timeliness of acquisitions, which are carried out by the implementing agencies. This phase, from when implementing agencies begin to make acquisitions needed for finalized security assistance agreements until such activities are completed and equipment is ready to ship, is often the longest phase of the process. DSCA data indicate that acquisitions that required DOD to award a contract in fiscal year 2011 took between 376 and 1,085 days, but there are limited common data sources across implementing agencies for assessing acquisition performance and documenting trends. DOD's implementing agencies manage acquisitions with several unique electronic systems, each of which allows for various status updates and reporting. However, we have previously reported that although the systems may provide performance information within each implementing agency, the information is not comparable across agencies, thus reducing its value to DSCA for overall oversight. DSCA plans for the Security Cooperation Enterprise Solution to include information from all implementing agencies and improve DSCA's ability to monitor acquisition activities across agencies. This new system is intended to be fully implemented in 2020.

DSCA does not measure the timeliness of all security assistance deliveries. Furthermore, DSCA does not consistently record either the original target delivery dates or the actual delivery dates required to determine delivery timeliness. DSCA monitors and reports one timeliness target that is common across implementing agencies. According to this target, estimated delivery dates for major assistance items, established by implementing agencies, should be met for 95 percent or more of cases. However, the usefulness of this measure for assessing or noting improvement in performance is limited. First, it does not cover all security assistance agreements; rather, it is used only for major equipment items and excludes all BPC deliveries. Second, estimated delivery dates may be extended in some circumstances. For example, DSCA officials have noted that implementing agencies frequently change these dates when it is determined that the original commitments cannot be met. Therefore, the measure monitors timeliness against the most recently updated estimated delivery date, not the original date.
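To make concrete why measuring against updated rather than original estimates limits this measure, consider a toy calculation. The sketch below is illustrative only and assumes hypothetical cases in which both the original and the most recently revised estimated delivery dates are retained.

```python
from datetime import date

# Hypothetical cases: each has the original estimated delivery date,
# the most recently revised estimate, and the actual delivery date.
cases = [
    {"original": date(2011, 3, 1),  "revised": date(2011, 9, 1),  "actual": date(2011, 8, 20)},
    {"original": date(2011, 5, 15), "revised": date(2011, 5, 15), "actual": date(2011, 5, 10)},
    {"original": date(2011, 2, 1),  "revised": date(2012, 1, 10), "actual": date(2012, 1, 5)},
    {"original": date(2011, 6, 1),  "revised": date(2011, 6, 1),  "actual": date(2011, 7, 4)},
]

def on_time_rate(cases, baseline):
    """Share of cases delivered on or before the chosen estimate."""
    met = sum(1 for c in cases if c["actual"] <= c[baseline])
    return met / len(cases)

# Against revised dates the measure looks strong; against the original
# commitments it does not, which is why retaining original dates matters.
print(f"on time vs revised estimate:  {on_time_rate(cases, 'revised'):.0%}")
print(f"on time vs original estimate: {on_time_rate(cases, 'original'):.0%}")
```

Because implementing agencies overwrite the estimate when commitments slip, only the first computation is possible with the data DSCA retains today.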
The Security Cooperation Information Portal includes a data field for an estimated date by which all security assistance materiel and services contained in an agreement are envisioned to be delivered, as well as a field for the actual date. Implementing agencies update the estimated date when schedules change, rather than maintaining the original date. Furthermore, DSCA cannot compel partner nations to provide actual receipt information for traditional security assistance deliveries, and while U.S. SCO staff are required to record the actual receipt date of BPC deliveries, they rarely do. As a result, DSCA does not always have information regarding the actual receipt dates of security assistance deliveries. Without original estimated delivery dates and actual delivery receipt dates, DSCA cannot fully assess the timeliness of deliveries. Furthermore, DSCA cannot assess historical delivery timeliness performance to identify challenges to be addressed or report improvements achieved through reform efforts.

Learning that there is a problem with equipment that has been delivered is often the only indication of delivery DSCA receives from partner countries that use freight forwarders. If a partner country identifies delivery errors, such as equipment that is missing or damaged upon receipt, it may file a supply discrepancy report to request restitution. It is then the implementing agencies' responsibility, along with the DOD or commercial source of the item in question, to address the complaints. DSCA does have a performance measure related to the adjudication of supply discrepancy reports: the number of reports that have not been addressed within 1 year. DSCA tracks the number of reports that have taken more than 1 year to address, as shown in figure 4.

Increasing global threats to U.S. interests abroad make timely U.S. assistance in building foreign partner capacity to address transnational threats vital to U.S. national security. Congress has created new programs to build partner capacity, and DOD has in turn created new procedures to implement those programs. DOD has recognized a number of challenges to managing its efforts to build foreign partner capacity and has ongoing reforms to address challenges associated with personnel training and workforce structure and with defining partner country needs. While DOD's reforms are addressing several challenges, existing information systems are not consistently populated with needed data. A lack of timely and accurate information for partners, combatant commands, and SCO staff on agreement and delivery status can delay assistance, increase the costs of fielding equipment and training, and adversely affect U.S. relationships with partner countries. Without performance measures to monitor timeliness across all phases of the security assistance process—particularly acquisition, delivery, and case closure, which comprise some of the most time-consuming activities—DSCA cannot assess the results of reforms or inform Congress of their progress.

To improve the ability of combatant command and SCO officials to obtain information on the acquisition and delivery status of assistance agreements, we recommend that the Secretary of Defense establish procedures to help ensure that DOD agencies are populating security assistance information systems with complete data.
To improve the ability to measure the timeliness and efficiency of the security assistance process, we recommend that the Secretary of Defense take the following actions: establish a performance measure to assess timeliness for the acquisition phase of the security assistance process; establish a performance measure to assess timeliness for the delivery phase of the security assistance process; and establish a performance measure to assess timeliness for the case closure phase of the security assistance process.

We provided a draft of this report to the Departments of State and Defense for comment. State elected not to provide comments on the draft report; DSCA concurred with the report's recommendations. DSCA stated that it would work with the military departments to ensure that information systems are populated with acquisition and delivery status data and continue to promote the use of the EFTS. In addition, DSCA stated that it will work with the military departments to assess timeliness during the acquisition phase; establish performance measures for the delivery phase and encourage adherence to reporting in-country deliveries; and establish performance measures to assess the timeliness of case closure. DOD also provided technical comments, which we have incorporated as appropriate.

We are sending copies of this report to appropriate congressional committees, the Secretary of Defense, and the Secretary of State. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7331 or johnsoncm@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.

In response to a Senate Armed Services Committee mandate to review the Defense Security Cooperation Agency's (DSCA) program implementation processes, this report assesses the extent to which (1) Department of Defense (DOD) reforms address challenges that security cooperation officials face in implementing assistance programs and (2) DSCA performance measures indicate improvement in the timeliness of security assistance.

To describe the phases and participants in the traditional Foreign Military Sales (FMS) process and the pseudo-FMS process used for newer programs, we reviewed and summarized the Security Assistance Management Manual description of these processes and DSCA flow charts that illustrate the FMS process at varying levels of detail. We also met with DSCA officials and reviewed system documentation describing the functions of DSCA information systems. Our summary of the FMS and pseudo-FMS processes does not encompass all steps and actors that may be involved, such as technology releasability reviews that may be required for sensitive equipment. To describe the newer building partner capacity (BPC) programs, we reviewed summaries of those programs in the Security Assistance Management Manual, previous GAO reports, and the appropriations and authorizing legislation creating the programs. Our review focuses on those security cooperation programs where DSCA plays a role; it does not assess other security assistance programs implemented by the State Department or most DOD counternarcotics programs.
To assess the extent to which ongoing DOD security cooperation reforms address challenges that security cooperation officials face in implementing assistance programs, we compared security assistance implementation challenges to DOD reforms that are currently planned or in progress. To identify ongoing reform efforts, we reviewed the Security Cooperation Reform Phase I Report and analyzed its recommendations to identify those that required action by DSCA. To verify our analysis of DSCA's role in addressing the recommendations, we met with the director and deputy director of the Security Cooperation Reform Task Force and DSCA officials who participated in the Task Force or are involved in addressing the recommendations. We also met with DSCA's Acting Chief Performance Officer and the Manager of DSCA's Continuous Process Improvement Program to identify and describe DSCA-directed internal process reviews, and we requested and received documentation of these efforts.

To identify challenges to the implementation of security assistance, we conducted focus groups or interviews with security cooperation officials in the six geographic combatant commands and interviewed security cooperation officers in Security Cooperation Organizations (SCOs) in 17 countries. These officers manage DOD security cooperation programs under the guidance of the combatant commands. To select the 17 countries, we obtained data from DSCA regarding the total value of transactions per fiscal year from 2006 to 2010 for each country benefiting from seven U.S. government-funded programs administered by DSCA: Foreign Military Financing; Section 1206; Peacekeeping Operations and the Global Peace Operations Initiative; Iraq Security Forces Fund; Afghanistan Security Forces Fund; Pakistan Counterinsurgency Fund; and Pakistan Counterinsurgency Capability Fund. We did not include the Global Security Contingency Fund as part of our data analysis because it was newly authorized in fiscal year 2012. We excluded International Military Education and Training in order to focus on equipment and equipment-associated training and because we had recently issued a report specifically assessing International Military Education and Training. We then selected for inclusion in our review the three countries from each combatant command that received both the highest volume of assistance and the widest diversity of programs. For Northern Command, however, we selected the only two countries within the combatant command's area of responsibility, Mexico and the Bahamas, that benefited from one of these programs. For the remaining five geographic combatant commands, countries included in our review were: Africa Command: Djibouti, Ethiopia, and Tunisia; Central Command: Afghanistan, Iraq, and Pakistan; European Command: Albania, Romania, and Ukraine; Pacific Command: Bangladesh, Indonesia, and the Philippines; and Southern Command: Belize, the Dominican Republic, and Honduras.

Using questions tailored slightly for individual countries where appropriate, we interviewed the staff of SCOs in these 17 countries. We also interviewed officials at the military departments and in the Office of the Secretary of Defense to further clarify these challenges and their effects. In addition, at the recommendation of security cooperation officials we interviewed at the combatant commands and DSCA, we also interviewed the staff of SCOs in two additional countries, Georgia and Yemen.
These two SCOs each had experience with a specific program: the Coalition Readiness Support Program in Georgia and the new Global Security Contingency Fund program in Yemen.

Using a single facilitator and a common set of questions, we conducted eight focus groups with more than 50 security cooperation officials at all geographic combatant commands except Northern Command. We conducted at least one focus group in each command and two each in Pacific Command, Central Command, and Africa Command. For Northern Command, we conducted the discussion as a phone interview due to the small number of officials involved but used the same questions that we used with the focus groups. The focus group questions asked security cooperation officials to describe challenges they experienced in each phase of the security assistance process. For these sessions, we divided the process into: creating a case (beginning with the development of a letter or memorandum of request and ending with the finalization of a letter of offer and acceptance); approvals (including disclosure notification, technology transfer, congressional notifications, and State Department concurrence); executing a case (including procurement or provision from DOD stock); delivery and case closure; and postdelivery sustainment in the form of training and spares. We requested that combatant commands identify focus group participants who would be able to speak about their experience implementing security cooperation at the combatant command as well as having responsibility for one or more of the 17 countries selected for SCO interviews.

During the focus groups, the GAO facilitator wrote comments as they were made so that all focus group participants could see them, and other GAO staff took notes documenting the discussion. No audio recordings were made. GAO staff then consolidated the notes from each session, and two GAO staff members independently summarized the challenges and common themes identified by each focus group and the Northern Command interview. The two staff members then met to resolve any discrepancies and agreed to a common set of 65 distinct challenges to the implementation of U.S. government-funded programs raised in the focus group discussions and the Northern Command interview. For additional analyses of the challenges, we counted which challenges were raised by more than one geographic combatant command, as sketched below. We then conducted a second round of coding. Two staff members independently analyzed the challenges and identified those that were within DSCA's purview, as well as themes under which these challenges could be grouped. The coders met to resolve any discrepancies and identified 20 challenges within DSCA's purview that were raised by more than one geographic combatant command. The coders grouped these challenges into three categories (training and workforce structure, defining partner country requirements, and information on assistance agreement status), along with 2 other challenges that did not fit these categories. Additional challenges fall under the authority of government agencies other than DSCA, and others fall beyond the U.S. government's control. The focus group and interview results are not generalizable to all recipient countries but represent the experiences of security cooperation officials in all combatant commands for the countries with the highest transaction values. We also reviewed interviews with the SCOs to further document the challenges identified by focus group participants.
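The cross-command tally referenced above is a simple counting exercise. The following sketch illustrates it in Python; the challenge labels and their mapping to commands are hypothetical examples, not GAO's actual coded data.

```python
from collections import defaultdict

# Hypothetical mapping of focus-group sessions to the distinct challenges
# they raised; labels are illustrative, not GAO's actual challenge set.
sessions = {
    "AFRICOM":  {"SCO staffing", "requirements definition", "delivery visibility"},
    "CENTCOM":  {"SCO staffing", "delivery visibility"},
    "EUCOM":    {"requirements definition", "shipping documentation"},
    "PACOM":    {"delivery visibility"},
    "SOUTHCOM": {"shipping documentation"},
    "NORTHCOM": {"SCO staffing"},
}

# Count, for each challenge, how many commands raised it, then keep the
# challenges raised by more than one geographic combatant command.
counts = defaultdict(int)
for challenges in sessions.values():
    for challenge in challenges:
        counts[challenge] += 1

multi_command = sorted(c for c, n in counts.items() if n > 1)
print(multi_command)  # challenges raised by two or more commands
```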
To determine the availability of data on the status of deliveries in process, we requested from DOD's Enhanced Freight Tracking System data on all deliveries from fiscal years 2007 to 2011 for the 17 countries and BPC programs in our sample. We then analyzed the extent to which data in the system were populated for key milestones in the delivery process from origin to final destination.

To identify the extent to which DSCA performance measures indicate improvement in the timeliness of security assistance, we reviewed DSCA performance measures reported at DSCA's Security Cooperation Business Forum and the discussion of these measures reflected in the minutes of these quarterly meetings. We met with DSCA officials and implementing agency officials to further understand these measures and the systems that implementing agencies have in place to track and report data to DSCA. We also inquired of DSCA's Acting Chief Performance Officer whether there were any other performance measures routinely compiled for senior management review; we reviewed the additional measures provided and determined that they did not assess timeliness. We reviewed the first three quarterly forum reports for fiscal year 2012 to identify the current performance measures for each of the five phases of the FMS process we identified and to determine whether DSCA has, for each phase, data on the time required to complete it and performance measures to assess its timeliness.

To determine the timeliness of the phases of the security assistance process, we summarized existing DSCA data reporting and performed additional analyses of DSCA source data. We also performed an independent analysis of the number of days spent by DSCA and implementing agencies developing security assistance agreements based on security assistance requests received from the 17 partner countries in our sample and for BPC programs for fiscal years 2007 through 2011. For this analysis we used data from DSCA's Defense Security Assistance Management System. The system contains information regarding key milestone dates that can be used to assess the timeliness of some aspects of the security assistance process. We determined the DSCA performance metrics and data were sufficiently reliable for our purposes by undertaking data reliability steps, including reviewing system usage and documentation guidance; interviewing knowledgeable agency officials; conducting electronic and manual data testing to identify missing data, outliers, and obvious errors; and reviewing internal controls.

To determine the time to develop an agreement, we calculated the number of days between the dates listed for "Customer Request Complete" and "Document Sent," in accordance with DSCA's method of measuring processing time from the time when a letter of request is complete until the release of the security assistance agreement to partner countries for signature. Using these data, we analyzed the time frames to develop agreements for BPC programs and traditional programs for the 17 sample countries. The results of our analysis may differ from overall DSCA timeliness metrics due to factors such as the type of equipment and training requested by sample partner countries and the quality of the assistance requests submitted.
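This development-time calculation is straightforward to express in code. In the minimal sketch below, the two milestones mirror the quoted "Customer Request Complete" and "Document Sent" fields, but the record layout and dates are hypothetical rather than actual Defense Security Assistance Management System output.

```python
from datetime import date
from statistics import mean

# Hypothetical agreement records; the two milestones mirror the quoted
# "Customer Request Complete" and "Document Sent" fields, per DSCA's
# method of measuring agreement development time.
agreements = [
    {"program": "FMF", "request_complete": date(2009, 1, 5),  "document_sent": date(2009, 4, 10)},
    {"program": "FMF", "request_complete": date(2010, 6, 1),  "document_sent": date(2010, 8, 25)},
    {"program": "BPC", "request_complete": date(2010, 2, 10), "document_sent": date(2010, 4, 20)},
    {"program": "BPC", "request_complete": date(2011, 9, 1),  "document_sent": date(2011, 10, 7)},
]

def development_days(rec):
    """Days from a complete letter of request to release of the agreement."""
    return (rec["document_sent"] - rec["request_complete"]).days

for program in ("FMF", "BPC"):
    days = [development_days(r) for r in agreements if r["program"] == program]
    print(f"{program}: mean development time {mean(days):.0f} days")
```

Grouping the same calculation by program type is how averages such as those reported in table 2 would be produced.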
The results of our work for the BPC programs and 17 countries in our sample are not generalizable to all countries receiving assistance. We conducted this performance audit from November 2011 through November 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Between April and June 2012, GAO conducted eight focus groups with 50 security cooperation officials at all geographic combatant commands except Northern Command. For Northern Command, GAO conducted the discussion as an interview in June due to the small number of officials involved but used the same questions as for the focus groups. GAO analyzed the results and identified 65 distinct challenges to implementing security assistance programs, 20 of which were raised in two or more of the six commands and are within DOD's purview. GAO grouped these 20 challenges into four categories: U.S. training and workforce structure; U.S. ability to define partner country requirements; information on security assistance agreement status; and other challenges.

In addition to the contact named above, James B. Michels, Assistant Director; Kathryn Bolduc; Martin DeAlteriis; Karen Deans; Katherine Forsyth; Mary Moutsos; Michael Silver; and Michael Simon made key contributions to this report. C. Etana Finkler provided additional technical assistance.

State Partnership Program: Improved Oversight, Guidance, and Training Needed for National Guard's Efforts with Foreign Partners. GAO-12-548. Washington, D.C.: May 15, 2012.
Security Force Assistance: Additional Actions Needed to Guide Geographic Combatant Command and Service Efforts. GAO-12-556. Washington, D.C.: May 10, 2012.
Humanitarian and Development Assistance: Project Evaluations and Better Information Sharing Needed to Manage the Military's Efforts. GAO-12-359. Washington, D.C.: February 8, 2012.
Persian Gulf: Implementation Gaps Limit the Effectiveness of End-Use Monitoring and Human Rights Vetting for U.S. Military Equipment. GAO-12-89. Washington, D.C.: November 17, 2011.
International Military Education and Training: Agencies Should Emphasize Human Rights Training and Improve Evaluations. GAO-12-123. Washington, D.C.: October 27, 2011.
Foreign Police Assistance: Defined Roles and Improved Information Sharing Could Enhance Interagency Collaboration. GAO-11-860SU. Washington, D.C.: May 9, 2012.
International Affairs: Accountability for U.S. Equipment Provided to Pakistani Security Forces in the Western Frontier Needs to Be Improved. GAO-11-156R. Washington, D.C.: February 15, 2011.
Defense Exports: Reporting on Exported Articles and Services Needs to Be Improved. GAO-10-952. Washington, D.C.: September 21, 2010.
Persian Gulf: U.S. Agencies Need to Improve Licensing Data and to Document Reviews of Arms Transfers for U.S. Foreign Policy and National Security Goals. GAO-10-918. Washington, D.C.: September 20, 2010.
International Security: DOD and State Need to Improve Sustainment Planning and Monitoring and Evaluation for Section 1206 and 1207 Assistance Programs. GAO-10-431. Washington, D.C.: April 15, 2010.
Defense Exports: Foreign Military Sales Program Needs Better Controls for Exported Items and Information for Oversight. GAO-09-454. Washington, D.C.: May 20, 2009.
Afghanistan Security: Lack of Systematic Tracking Raises Significant Accountability Concerns about Weapons Provided to Afghan National Security Forces. GAO-09-267. Washington, D.C.: January 30, 2009.
Combating Terrorism: Increased Oversight and Accountability Needed over Pakistan Reimbursement Claims for Coalition Support Funds. GAO-08-806. Washington, D.C.: June 24, 2008.
Preliminary Observations on the Use and Oversight of U.S. Coalition Support Funds Provided to Pakistan. GAO-08-735R. Washington, D.C.: May 6, 2008.
Stabilizing Iraq: DOD Cannot Ensure That U.S.-Funded Equipment Has Reached Iraqi Security Forces. GAO-07-711. Washington, D.C.: July 31, 2007.
Section 1206 Security Assistance Program—Findings on Criteria, Coordination, and Implementation. GAO-07-416R. Washington, D.C.: February 28, 2007.
Security Assistance: State and DOD Need to Assess How the Foreign Military Financing Program for Egypt Achieves U.S. Foreign Policy and Security Goals. GAO-06-437. Washington, D.C.: April 11, 2006.

Congress appropriated approximately $18.8 billion in fiscal year 2012 for various security cooperation and assistance programs that supply military equipment and training to more than 100 partner countries. Amid concerns that traditional security assistance programs were too slow, Congress established several new programs in recent years. DSCA oversees the security assistance process, with key functions in agreement development, acquisition, and equipment delivery performed by the U.S. military departments. DOD has undertaken a variety of management reforms since 2010 to improve the security assistance process. GAO assessed the extent to which (1) DOD reforms address implementation challenges faced by security cooperation officials and (2) DSCA performance measures indicate improvement in the timeliness of security assistance. GAO analyzed DOD data and performance measures, conducted focus groups and interviews with security cooperation officials at all six geographic combatant commands, and interviewed SCO staff for 17 countries.

Security cooperation officials report three major types of challenges—training and workforce structure, defining partner country requirements, and obtaining acquisition and delivery status information—in conducting assistance programs. Ongoing Department of Defense (DOD) reforms address challenges that DOD security cooperation officials reported in meeting staff training needs and achieving the optimum workforce structure. The Defense Security Cooperation Agency (DSCA) has also initiated efforts to respond to challenges in developing assistance requests resulting from the limited expertise of partner countries and U.S. Security Cooperation Organization (SCO) staff in identifying country assistance requirements and the equipment that can meet them. However, according to DOD security cooperation officials, information gaps in the acquisition and delivery phases of the security assistance process continue to hinder the effectiveness of U.S. assistance. Nearly all of GAO's focus groups and interviews reported persistent difficulties obtaining information on the status of security assistance acquisitions and deliveries because information systems are difficult to access and contain limited information. DOD's existing delivery tracking system provides only limited data on the status of equipment deliveries because partner country agents and DOD agencies are not entering the needed data into the system.
Without advance notice of deliveries, SCO staff have been unable to ensure that addresses were correct and that partner countries were ready to receive and process deliveries, resulting in delays or increased costs. DOD is developing a new information system to address information gaps, but it is not expected to be fully implemented until 2020.

DSCA data indicate that DOD has improved timeliness in the initial phases of the security assistance process, but these data provide limited information on other phases. The average number of days spent developing a security assistance agreement improved from 124 days in fiscal year 2007 to 109 days in fiscal year 2011. However, assessing the timeliness of the whole security assistance process is difficult because DSCA has limited timeliness measures for later phases, which often comprise the most time-consuming activities. For example, DSCA has not established a performance measure to assess the timeliness of acquisition, which can take years. In addition, DSCA does not consistently measure delivery performance against estimated delivery dates. Without such performance measures, DSCA cannot assess historical trends or the extent to which reforms affect the timeliness of the security assistance process.

GAO recommends that the Secretary of Defense (1) establish procedures to ensure that DOD agencies enter needed acquisition and delivery status data into security assistance information systems and (2) establish performance measures to assess timeliness for additional phases of the security assistance process. DOD concurred with GAO's recommendations.
From May 2003 through June 2004, the Coalition Provisional Authority (CPA), led by the United States and the United Kingdom, was the UN-recognized coalition authority responsible for the temporary governance of Iraq and for overseeing, directing, and coordinating the reconstruction effort. In May 2003, the CPA dissolved the military organizations of the former regime and began the process of creating or reestablishing new Iraqi security forces, including the police and a new Iraqi army. Over time, multinational force commanders assumed responsibility for recruiting and training some Iraqi defense and police forces in their areas of responsibility.

In May 2004, the President issued a National Security Presidential Directive, which stated that, after the transition of power to the Iraqi government, the Department of State (State), through its ambassador to Iraq, would be responsible for all U.S. activities in Iraq except for security and military operations, which would be the responsibility of the Department of Defense (DOD). The Presidential Directive also established two temporary offices: (1) the Iraq Reconstruction and Management Office, to facilitate the transition of reconstruction efforts to Iraq, and (2) the Project and Contracting Office (PCO), to provide acquisition and project management support for some U.S.-funded reconstruction projects. Other U.S. government agencies also play significant roles in the reconstruction effort. USAID is responsible for projects to restore Iraq's infrastructure, support healthcare and education initiatives, expand economic opportunities for Iraqis, and foster improved governance. The U.S. Army Corps of Engineers provides engineering and technical services to the PCO, USAID, and military forces in Iraq.

On June 28, 2004, the CPA transferred power to an interim sovereign Iraqi government, the CPA was officially dissolved, and Iraq's transitional period began. Under Iraq's transitional law, the transitional period covers the interim government phase (from June 28, 2004, to January 30, 2005) and the transitional government phase, which is currently scheduled to end by December 31, 2005. Under UN Resolution 1546, the Multi-National Force - Iraq (MNF-I) has the authority to take all necessary measures to contribute to security and stability in Iraq during this process, working in partnership with the Iraqi government to reach agreement on security and policy issues. The Presidential Directive required the U.S. Central Command (CENTCOM) to direct all U.S. government efforts to organize, equip, and train Iraqi security forces. The Multi-National Security Transition Command-Iraq, which operates under MNF-I, now leads coalition efforts to train, equip, and organize Iraqi security forces.

The United States is the primary contributor to rebuilding and stabilization efforts in Iraq. U.S. appropriations have been used largely for activities that include the repair of infrastructure, procurement of equipment, and training of Iraqi security forces. International donors have provided a lesser amount of funding for reconstruction and development activities; however, most of the pledged amount is in the form of loans that largely have not been accessed by the Iraqi government. Iraqi funding, under CPA or Iraqi control, has generally supported operating expenses of the Iraqi government. Finally, Iraqi needs may be greater than the funding currently made available. U.S.
appropriated funding has largely focused on infrastructure repair and training of Iraqi security forces, and this funding has been reallocated as priorities changed. As of August 2005, approximately $30 billion in U.S. appropriations had been made available for rebuilding and stabilization needs in Iraq, about $21 billion had been obligated, and about $13 billion had been disbursed. These funds were used for activities that included infrastructure repair of the electricity, oil, and water and sanitation sectors; infrastructure repair, training, and equipping of the security and law enforcement sector; and CPA and U.S. administrative expenses. Many current U.S. reconstruction efforts reflect initial plans that the CPA developed before June 2004. As priorities changed, particularly since the transition of power to the Iraqi interim government, the U.S. administration reallocated about $5 billion of the $18.4 billion fiscal year 2004 emergency supplemental among the various sectors (see fig. 1). According to State Department documents, these reallocations were made to meet immediate needs: in October 2004, for projects in security and law enforcement, economic and private sector development, and governance; in January 2005, for quick-impact projects in key cities; in April 2005, for job creation and essential services activities; and in July 2005, for security force training and election support. As figure 1 shows, security and justice funds increased while resources for the water and electricity sectors decreased. International donors have provided about $2.7 billion in multilateral and bilateral grants, of the pledged $13.6 billion, for reconstruction activities; however, most of the pledged amount is in the form of loans that largely have not been accessed by the Iraqis. International reconstruction assistance provided in the form of multilateral grants has been used largely for activities such as electoral process support, education and health projects, and capacity building of the ministries. As of August 2005, donors had deposited about $1.2 billion into the two trust funds of the International Reconstruction Fund Facility for Iraq (IRFFI). Of that amount, about $800 million had been obligated and nearly $300 million disbursed to individual projects. Donors have also provided bilateral assistance for Iraq reconstruction activities; however, complete information on this assistance is not readily available. As of August 2005, State had identified $1.5 billion—of the $13.6 billion pledged—in funding that donors have provided as bilateral grants for reconstruction projects outside the IRFFI. About $10 billion, or 70 percent, of the $13.6 billion pledged in support of Iraq reconstruction is in the form of loans, primarily from the World Bank, the International Monetary Fund (IMF), and Japan. According to a State Department official, Iraq is in discussions with the government of Japan and the World Bank for initial projects under lending programs that total about $6.5 billion. As of October 12, 2005, Iraq had accessed a loan of $436 million from the IMF and an initial loan of $500 million from the World Bank, according to a State Department official. 
Iraqi funds—under the CPA or Iraqi control—primarily have supported the Iraqi operating budget with some focus on relief and reconstruction projects. Of the Iraqi funds under CPA control from May 2003 to June 2004, about $21 billion came from the Development Fund for Iraq (DFI) and $2.65 billion from vested and seized assets from the previous Iraqi regime. The CPA disbursed these Iraqi funds primarily to support the 2003 and 2004 Iraqi budgets for government operating expenses, such as salary payments and ministry operations, the public food distribution system, and regional government outlays. In addition, CPA used Iraqi funds to support efforts such as the import of refined fuels and electricity restoration projects. On June 28, 2004, stewardship of the DFI was turned over to the Iraqi interim government. Proceeds from Iraqi crude oil exports continue to be deposited into the DFI and represent more than 90 percent of the $23 billion in domestic revenue support for the Iraqi 2005 budget. According to Iraq’s National Development Strategy, the 2005 Iraqi budget planned for nearly $28 billion in expenditures. These expenditures exceed estimated domestic revenues by $4.8 billion. However, higher than anticipated domestic revenues may offset this deficit. Planned expenditures of this budget include about 37 percent for direct subsidies; about 21 percent for capital investment, especially in the oil and gas sector; about 20 percent for employee wages and pensions; nearly 18 percent for goods and services; and about 4 percent for war reparations. Direct subsidies included the import of gasoline and other refined fuel products (projected to cost $2.4 billion) and Iraq’s public distribution system’s basic food basket (projected to cost $4 billion). The Iraqi government continues to develop plans to reform fuel price subsidies, partly due to an agreement with the IMF to reduce subsidies by $1 billion per year, according to IMF and agency documents. In addition to subsidy expenditures, Iraq has planned for capital investment levels of about 21 percent of budget expenditures from 2005 to 2007. In 2005, the majority of these funds were planned for the oil and gas sector—about $3 billion of about $5 billion in total for various ministries. Initial assessments of Iraq’s needs through 2007 by the UN/World Bank and the CPA estimated that the reconstruction of Iraq would require about $56 billion. However, Iraq may need more funding than currently available to meet the needs and demands of the country. Some Iraqi infrastructure was more severely degraded than U.S. officials originally anticipated or initial assessments indicated. The condition of the infrastructure was further exacerbated by looting and sabotage after the 2003 conflict. For example, some electrical facilities and transmission lines were damaged, and equipment and materials needed to operate treatment and sewerage facilities were destroyed by the looting that followed the 2003 conflict. In the oil sector, a June 2003 U.S. government assessment found that over $900 million would be needed to replace looted equipment at Iraqi oil facilities. In addition, initial assessments assumed reconstruction would take place in a peacetime environment and did not include additional security costs. Further, these initial assessments assumed that Iraqi government revenues and private sector financing would increasingly cover long-term reconstruction requirements. However, private sector financing and government revenues may not yet meet these needs. 
In the oil sector alone, Iraq will likely need an estimated $30 billion over the next several years to reach and sustain an oil production capacity of 5 million barrels per day, according to industry experts and U.S. officials. The United States faces three key challenges in stabilizing and rebuilding Iraq. First, the unstable security environment and the continuing strength of the insurgency have made it difficult for the United States to transfer security responsibilities to Iraqi forces and engage in rebuilding efforts. Second, inadequate performance data and measures make it difficult to determine the overall progress and impact of U.S. reconstruction efforts. Third, the U.S. reconstruction program has encountered difficulties with Iraq’s inability to sustain new and rehabilitated infrastructure projects and to address maintenance needs in the water, sanitation, and electricity sectors. Over the past 2 years, significant increases in attacks against the coalition and coalition partners have made it difficult to transfer security responsibilities to Iraqi forces and engage in rebuilding efforts in Iraq. The insurgency in Iraq intensified in early 2005 and has remained strong since then. Poor security conditions have delayed the transfer of security responsibilities to Iraqi forces and the drawdown of U.S. forces in Iraq. The unstable security environment has also affected the cost and schedule of rebuilding efforts and has led, in part, to project delays and increased costs for security services. The insurgency intensified through early 2005 and has remained strong since then. As we reported in March 2005, the insurgency in Iraq—particularly the Sunni insurgency—grew in complexity, intensity, and lethality from June 2003 through early 2005. Enemy-initiated attacks against the coalition, its Iraqi partners, and infrastructure had increased in number over time, with the highest peaks occurring in August and November 2004 and in January 2005. The November 2004 and January 2005 attacks primarily occurred in Sunni-majority areas, whereas the August 2004 attacks took place countrywide. MNF-I is the primary target of the attacks, but the number of attacks against Iraqi civilians and security forces increased significantly during January 2005, prior to Iraq’s national election for a transitional government that was held January 30, 2005. According to the Director of the Defense Intelligence Agency (DIA), attacks on Iraq’s Election Day reached about 300, double the previous 1-day high of about 150 attacks during Ramadan in 2004. Although the number of attacks decreased immediately after the January elections, the insurgency in Iraq has remained strong and generally unchanged since early 2005, according to senior U.S. military officers. As shown in figure 2, although enemy-initiated attacks had decreased in February and March 2005, they generally increased through the end of August 2005. According to a senior U.S. military officer, attack levels ebb and flow as the various insurgent groups—which are an intrinsic part of Iraq’s population—rearm and attack again. As DOD reported in July 2005, insurgents share a goal of expelling the Coalition from Iraq and destabilizing the Iraqi government to pursue their individual and, at times, conflicting goals. Iraqi Sunnis make up the largest proportion of the insurgency and present the most significant threat to stability in Iraq. 
Radical Shia groups, violent extremists, criminals, and, to a lesser degree, foreign fighters make up the rest. Senior U.S. military officers believe that the insurgents remain adaptive and capable of choosing the time and place of their attacks. These officers have also predicted spikes in violence around Iraq’s upcoming constitutional referendum scheduled for October 15, 2005, and the national elections scheduled for December 15, 2005. The continuing strength of the insurgency has made it difficult for the multinational force to develop effective and loyal Iraqi security forces, transfer security responsibilities to them, and progressively draw down U.S. forces in Iraq. In February 2004, the multinational force attempted to quickly shift responsibilities to Iraqi security forces but did not succeed in this effort. Police and military units performed poorly during an escalation of insurgent attacks in April 2004, with many Iraqi security forces around the country collapsing or assisting the insurgency during the uprising. About that time, the Deputy Secretary of Defense said that the multinational force was engaged in combat in Iraq, rather than in peacekeeping as had been expected. The United States decided to maintain a force level of about 138,000 troops until at least the end of 2005, rather than drawing down to 105,000 troops by May 2004 as DOD had announced in November 2003. The United States has maintained roughly the same force level of 138,000 troops in Iraq since April 2004, as it has sought to neutralize the insurgency and develop Iraqi security forces. In late September and early October 2005, the Secretary of Defense and senior U.S. military officers reported on their strategy to draw down and eventually withdraw U.S. forces as Iraq meets certain conditions. These conditions include the level of insurgent activity, the readiness and capability of Iraqi security forces and government institutions, and the ability of coalition forces to reinforce Iraqi security forces if necessary. The ability to meet these conditions will be affected by progress in political, economic, and other areas. According to the commanding general of the multinational force, as conditions are met, multinational forces will progressively draw down in phases around the country. By the time the multinational force’s end state is achieved, U.S. forces will be withdrawn or drawn down to levels associated with a normal bilateral security relationship. The defined end state is an Iraq at peace with its neighbors, with a representative government that respects the human rights of all Iraqis, and with a security force that can maintain domestic order and deny Iraq as a safe haven for terrorists. DOD and the multinational force face a number of challenges in transferring security responsibilities to the Iraqi government and security forces. As we reported in March 2005, the multinational force faced four key challenges in increasing the capability of Iraqi forces: (1) training, equipping, and sustaining a changing force structure; (2) developing a system for measuring the readiness and capability of Iraqi forces; (3) building loyalty and leadership throughout the Iraqi chain of command; and (4) developing a police force that upholds the rule of law in a hostile environment. Further, in a July 2005 report to Congress, DOD noted continuing problems with absenteeism in the Iraqi Army, Police Service, and Border Police, particularly among units conducting operations and units relocating elsewhere in Iraq. 
The report also noted that there was insufficient information on the extent to which insurgents have infiltrated Iraqi security forces. However, in an October 2005 report to Congress, DOD noted that insurgent infiltration is a more significant problem in Ministry of Interior forces than in Ministry of Defense forces. Moreover, in early October 2005, senior U.S. military officers noted challenges in developing effective security ministries, as well as in developing the logistics capabilities of Iraqi forces. Since March 2005, the multinational force has taken some steps to begin addressing these challenges. For example, the multinational force has embedded transition teams at the battalion, brigade, and division levels of Ministry of Defense forces, as well as in the Ministry of Interior’s Special Police Commando battalions, the Civil Intervention Force, and the Emergency Response Unit. Multinational force transition teams conduct new transition readiness assessments that identify the progress and shortcomings of Iraqi forces. According to DOD’s report, these assessments take into account a variety of criteria that are similar but not identical to those the U.S. Army uses to evaluate its units’ operational readiness, including personnel, command and control, training, sustainment/logistics, equipment, and leadership. The assessments place Iraqi units into one of the following four categories: Level 1 units are fully capable of planning, executing, and sustaining independent counterinsurgency operations. Level 2 units are capable of planning, executing, and sustaining counterinsurgency operations with coalition support. Level 3 units are partially capable of conducting counterinsurgency operations in conjunction with coalition units. Level 4 units are forming or otherwise incapable of conducting counterinsurgency operations. The multinational force is also preparing similar readiness assessments on the Iraqi police through partnerships at the provincial levels. These assessments look at factors that are tailored to the tasks of a police force, including patrol/traffic operations, detainee operations, and case management. According to DOD’s October 2005 report and DOD officials, Iraqi combat forces have made progress in developing the skills necessary to assume control of counterinsurgency operations. However, they also recognize that Iraqi forces will not be able to operate independently for some time because they need logistical capabilities, ministry capacity, and command and control and intelligence structures. According to DOD’s October 2005 report, Iraq has 116 police and army combat battalions actively conducting counterinsurgency operations. This number corresponds to the number of battalions in levels 1, 2, and 3 described above. Of these battalions, 1 battalion was assessed as level 1, that is, fully capable of planning, executing, and sustaining independent counterinsurgency operations. Thirty-seven were level 2, or capable of planning, executing, and sustaining counterinsurgency operations with coalition support; and 78 were level 3—partially capable of conducting counterinsurgency operations in conjunction with coalition units. The assessment of Iraqi units’ capabilities also considers the threat level they face. According to a senior U.S. military officer, Iraqi forces have more quickly progressed from level 3 to level 2 in areas that have experienced fewer insurgent attacks, such as southern Iraq. 
GAO’s forthcoming classified report on Iraq’s security situation will provide further information and analysis on the challenges to developing Iraqi security forces and the conditions for the phased drawdown of U.S. and other coalition forces. The security situation in Iraq has affected the cost and schedule of reconstruction efforts. Security conditions have, in part, led to project delays and increased costs for security services. Although it is difficult to quantify the costs in time and money resulting from poor security conditions, both agency and contractor officials acknowledged that security costs have diverted a considerable amount of reconstruction resources and have led to canceling or reducing the scope of some reconstruction projects. For example, in March 2005, USAID canceled two electrical power generation-related task orders totaling nearly $15 million to help pay for increased security costs incurred at another power generation project in southern Baghdad. In another example, work was suspended at a sewer repair project in central Iraq for 4 months in 2004 due to security concerns. In a September 2005 testimony, the Special Inspector General for Iraq Reconstruction and a USAID official also observed that the cost of security had taken money away from reconstruction and slowed down reconstruction efforts. However, the actual cost that security has added to reconstruction projects is uncertain. We reported in July 2005 that, for 8 of 15 reconstruction contracts we reviewed, the cost to obtain private security providers and security-related equipment accounted for more than 15 percent of contract costs, as of December 31, 2004. Our analysis and discussions with agency and contractor officials identified several factors that influenced security costs, including (1) the nature and location of the work, (2) the type of security required and the security approach taken, and (3) the degree to which the military provided the contractor security services. For example, projects that took place in fixed locations were generally less expensive to secure than projects, such as electrical transmission lines, that extended over a large geographic area. In addition, some contractors made more extensive use of local Iraqi labor and employed less costly Iraqi security guards, while others were able to make use of security provided by the U.S. military or coalition forces. Our analysis did not include increased transportation or administrative expenses caused by security-related work stoppages or delays, or the cost associated with repairing the damage caused by the insurgency to work previously completed. We also excluded the cost associated with the training and equipping of Iraqi security forces and the costs borne by DOD in maintaining, equipping, and supporting U.S. troops in Iraq. In July 2005, to improve agencies’ ability to assess the impact of and manage security costs in future reconstruction efforts, we recommended that the Secretary of State, the Secretary of Defense, and the Administrator, USAID, establish a means to track and account for security costs to develop more accurate budget estimates. State did not indicate whether it agreed with our recommendation, Defense agreed, and USAID did not comment on the recommendation. In addition, the security environment in Iraq has led to severe restrictions on the movement of civilian staff around the country and reductions of a U.S. presence at reconstruction sites, according to U.S. agency officials and contractors. 
For example, work at a wastewater plant in central Iraq was halted for approximately 2 months in early 2005 because insurgent threats drove subcontractors away and made the work too hazardous to perform. In the assistance provided to support the electoral process, U.S.-funded grantees and contractors also faced security restrictions that hampered their movements and limited the scope of their work. For example, IFES was not able to send its advisors to most of the governorate-level elections administration offices, which hampered training and operations at those facilities leading up to Iraq’s Election Day on January 30, 2005. While poor security conditions have slowed reconstruction and increased costs, a variety of management challenges have also adversely affected the implementation of the U.S. reconstruction program. In September 2005, we reported that management challenges such as low initial cost estimates and delays in funding and awarding task orders had also led to the reduced scope of the water and sanitation program and delays in starting projects. In addition, U.S. agency and contractor officials have cited difficulties in initially defining project scope, schedule, and cost, as well as concerns with project execution, as further impeding progress and increasing program costs. These difficulties include lack of agreement among U.S. agencies, contractors, and Iraqi authorities; high staff turnover; an inflationary environment that makes it difficult to submit accurate pricing; unanticipated project site conditions; and uncertain ownership of project sites. State has set broad goals for providing essential services, and the U.S. program has undertaken many rebuilding activities in Iraq. The U.S. program has made some progress in accomplishing rebuilding activities, such as rehabilitating some oil facilities to restart Iraq’s oil production, increasing electrical generation capacity, restoring some water treatment plants, and reestablishing Iraqi health services. However, limited performance data and measures make it difficult to determine and report on the progress and impact of U.S. reconstruction. For example, in the water and sanitation, health, and electricity sectors, performance data are limited and reporting measures are output-focused, making it difficult to accurately measure program results and assess the effectiveness of U.S. reconstruction efforts. Although information is difficult to obtain in an unstable security environment, opinion surveys and additional outcome measures have the potential to help determine progress and gauge the impact of the U.S. reconstruction efforts on the lives of the Iraqi people. In the water and sanitation sector, the Department of State has primarily reported on the numbers of projects completed and the expected capacity of reconstructed treatment plants. However, we found that the data are incomplete and do not provide information on the scope and cost of individual projects, nor do they indicate how much clean water is reaching intended users as a result of these projects. For example, although State reported that 143 projects were complete as of early July 2005, it could not document the location, scope, and cost of these projects. Moreover, reporting only the number of projects completed or under way provides little information on how U.S. efforts are improving the amount and quality of water reaching Iraqi households or their access to sanitation services. 
Information on access to water and its quality is difficult to obtain without adequate security or water metering facilities. However, opinion surveys assessing Iraqis’ access to and satisfaction with water and sanitation services have found dissatisfaction with these services. The most recent USAID quality of life survey, in February 2005, found that just over half of respondents rated their water supply as poor to fair and over 80 percent rated their sewerage and wastewater disposal as poor to fair. These surveys demonstrate the potential for gathering data to help gauge the impact of U.S. reconstruction efforts. Limitations in health sector measurements also make it difficult to relate the progress of U.S. activities to the overall effort to improve the quality of and access to health care in Iraq. Department of State measurements of progress in the health sector primarily track the number of completed facilities, an indicator of increased access to health care. For example, State reported that the construction of 145 of 300 health clinics had been completed as of August 31, 2005. However, the data available do not indicate the adequacy of equipment levels, staffing levels, or quality of care provided to the Iraqi population. Monitoring the staffing, training, and equipment levels at health facilities may help gauge the effectiveness of the U.S. reconstruction program and its impact on the Iraqi people. In addition, opinion surveys assessing Iraqis’ access to and satisfaction with health services also have the potential for gathering data to help gauge the impact of U.S. reconstruction efforts. For example, the most recent USAID quality of life survey, in February 2005, found that the majority of Iraqis approved of the primary health care services they received, although fewer than half of the respondents approved of the level of health care in the Ta’mim, Al Basrah, and Maysan governorates. In the electricity sector, U.S. agencies have primarily reported on generation measures such as levels of added or restored generation capacity and daily power generation of electricity; numbers of projects completed; and average daily hours of power. For example, as of May 2005, U.S.-funded projects reportedly had added or restored about 1,900 megawatts of generation capacity to Iraq’s power grid. However, these data do not show (1) whether the power generated is uninterrupted for the period specified (e.g., average number of hours per day), (2) whether there are regional or geographic differences in the quantity of power generated, or (3) how much power is reaching intended users. Information on the distribution of and access to electricity is difficult to obtain without adequate security or accurate metering capabilities. However, opinion surveys assessing Iraqis’ access to and satisfaction with electricity services have found dissatisfaction with these services. The February 2005 USAID survey found that 74 percent of the respondents rated the overall quality of electricity supply as poor or very poor. The surveys also found that the delivery of electricity directly influenced the perceived legitimacy of local government for many respondents. These surveys demonstrate the potential for gathering data to help gauge the impact of U.S. reconstruction efforts. In September 2005, we recommended that the Secretary of State address this issue of measuring progress and impact in the water and sanitation sector. State agreed with our recommendation and stated that it is taking steps to address the problem. The U.S. 
reconstruction program has encountered difficulties with the Iraqis’ ability to sustain the new and rehabilitated infrastructure and address maintenance needs. In the water, sanitation, and electricity sectors, in particular, some projects have been completed but have sustained damage or become inoperable due to the Iraqis’ problems maintaining or properly operating them. In the water and sanitation sector, U.S. agencies have identified limitations in the Iraqis’ capacity to maintain and operate reconstructed facilities, including problems with staffing, unreliable power to run treatment plants, insufficient spare parts, and poor operations and maintenance procedures. As of June 2005, approximately $52 million of the $200 million in completed large-scale water and sanitation projects either were not operating or were operating at lower capacity due to looting of key equipment and shortages of reliable power, trained Iraqi staff, and required chemicals and supplies. For example, one repaired wastewater plant was partially shut down due to the looting of key electrical equipment, and repaired water plants in one southern governorate lacked adequate electricity and necessary water treatment chemicals. In addition, two projects lacked a reliable power supply, one lacked sufficient staff to operate properly, and one lacked both adequate staff and power supplies. In response, U.S. agencies have taken initial steps to improve Iraqi capacity to operate and maintain water and sanitation facilities. For example, in August 2005, USAID awarded a contract to provide additional maintenance and training support for six completed water and sanitation facilities. The U.S. embassy in Iraq stated that it was moving from the previous model of building and turning over projects to Iraqi management toward a “build-train-turnover” system to protect the U.S. investment. However, these efforts are just beginning, and the U.S. assistance does not address the long-term ability of the Iraqi government to support, staff, and equip these facilities. It is unclear whether the Iraqis will be able to maintain and operate completed projects and the more than $1 billion in additional large-scale water and sanitation projects expected to be completed through 2008. Without assurance that the Iraqis have adequate resources to maintain and operate completed projects, the U.S. water and sanitation reconstruction program risks expending funds on projects with limited long-term impact. In September 2005, we recommended that the Secretary of State address the issue of sustainability in the water and sanitation sector. State agreed with our recommendation and stated that it is taking steps to address the problem. In the electricity sector, the Iraqis’ capacity to operate and maintain the power plant infrastructure and equipment provided by the United States remains a challenge at both the plant and ministry levels. As a result, the infrastructure and equipment remain at risk of damage following their transfer to the Iraqis. In our interviews with Iraqi power plant officials from 13 locations throughout Iraq, the officials stated that their training did not adequately prepare them to operate and maintain the new U.S.-provided gas turbine engines. Due to limited access to natural gas, some Iraqi power plants are using low-grade oil to fuel their natural gas combustion engines. 
The use of oil-based fuels, without adequate equipment modification and fuel treatment, decreases the power output of the turbines by up to 50 percent, requires three times more maintenance, and could result in equipment failure and damage that significantly reduces the life of the equipment, according to U.S. and Iraqi power plant officials. U.S. officials have acknowledged that more needs to be done to train plant operators and ensure that advisory services are provided after the turnover date. To address this issue, in February 2005, USAID implemented a project to train selected electricity plant officials (plant managers, supervisors, and equipment operators) in plant operations and maintenance. According to DOD, PCO also has awarded one contract and is developing another to address operations and maintenance concerns. Although agencies had incorporated some training programs and the development of operations and maintenance capacity into individual projects, recent problems with the turnover of completed projects, such as those in the water and sanitation and electricity sectors, have led to a greater interagency focus on improving project sustainability. In May 2005, an interagency working group that included State, USAID, PCO, and the Corps of Engineers was formed to identify ways of addressing Iraq’s capacity development needs. The working group reported that a number of critical infrastructure facilities constructed or rehabilitated under U.S. funding have failed, will fail, or will operate in suboptimal conditions following handover to the Iraqis. They found that a number of USAID and PCO projects encountered significant problems in facility management and operations and maintenance when turned over to the Iraqis or shortly thereafter. To mitigate the potential for project failures, the working group recommended increasing the period of operational support for constructed facilities from 90 days to up to 1 year. According to a State Department official, as of September 22, 2005, the recommendations were under active consideration and discussion by Embassy Baghdad and Washington. For the past two and a half years, the United States has served as the chief protector and builder in Iraq. The long-term goal is to achieve a peaceful Iraq that has a representative government respectful of human rights and the means to maintain domestic order and quell terrorism. To achieve this goal, the United States has provided $30 billion to develop capable Iraqi security forces, rebuild a looted and worn infrastructure, and support democratic elections. However, the United States has confronted a capable and lethal insurgency that has taken many lives and made rebuilding Iraq a costly and challenging endeavor. It is unclear when Iraqi security forces will be capable of operating independently, thereby enabling the United States to reduce its military presence. Similarly, it is unclear how U.S. efforts are helping the Iraqi people obtain clean water, reliable electricity, or competent health care. Measuring the outcomes of U.S. efforts is necessary to determine whether they are having a positive impact on the daily lives of the Iraqi people. Finally, the United States must ensure that the billions of dollars it has already invested in Iraq’s infrastructure are not wasted. The Iraqis need additional training and preparation to operate and maintain the power plants, water and sewage treatment facilities, and health care centers the United States has rebuilt or restored. 
This would help ensure that the rebuilding efforts improve Iraq’s economy and social conditions and establish a secure, peaceful, and democratic Iraq. We will continue to examine the challenges the United States faces in rebuilding and stabilizing Iraq. Specifically, we will examine the efforts to stabilize Iraq and develop its security forces, including the challenge of ensuring that Iraq can independently fund, sustain, and support its new security forces; examine the management of the U.S. rebuilding effort, including program execution; and assess the progress made in developing Iraq’s energy sectors, including the sectors’ needs, existing resources and contributions, achievements, and future challenges. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or the other Subcommittee members may have. For further information, please contact Joseph A. Christoff at (202) 512-8979. Individuals who made key contributions to this testimony were Monica Brym, Lynn Cothern, Tim DiNapoli, Muriel Forster, Charles D. Groves, B. Patrick Hickey, Sarah Lynch, Judy McCloskey, Kendall Schaefer, Michael Simon, and Audrey Solis. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The United States, along with coalition partners and various international organizations, has undertaken a challenging and costly effort to stabilize and rebuild Iraq following multiple wars and decades of neglect by the former regime. This enormous effort is taking place in an unstable security environment, concurrent with Iraqi efforts to complete a constitutional framework for establishing a permanent government. The United States' goal is to help the Iraqi government develop a democratic, stable, and prosperous country, at peace with itself and its neighbors, a partner in the war against terrorism, enjoying the benefits of a free society and a market economy. In this testimony, GAO discusses (1) the funding used to rebuild and stabilize Iraq and (2) the challenges that the United States faces in its rebuilding and stabilization efforts. This statement is based on several reports GAO has issued to the Congress over the past three months. In July, we issued two reports on (1) the status of funding and reconstruction efforts in Iraq and (2) the use of private security providers in Iraq. We issued two additional reports in September on (1) U.S. reconstruction efforts in the water and sanitation sector and (2) U.S. assistance for the January 2005 Iraqi elections. Finally, we expect shortly to issue a classified report on U.S. efforts to stabilize the security situation in Iraq. This statement includes unclassified information only. The United States is the primary contributor to efforts to stabilize and rebuild Iraq. Since 2003, the United States has made available about $30 billion for activities that include the construction and repair of infrastructure, procurement of equipment, and training and equipping of Iraqi security forces. International donors have pledged $13.6 billion in reconstruction funds (from 2004 through 2007), of which about $2.7 billion was provided in multilateral and bilateral grants through August 2005. 
However, most of the pledged amount—about $10 billion—is in the form of loans on which the Iraqi government largely has not yet drawn. Iraqi funds have primarily supported the country's operating budget, with some focus on capital improvement projects. For 2005, Iraq planned for about $28 billion in expenditures—largely supported by oil proceeds—to fund salaries, pensions, ministry operations, and subsidies. Iraq is likely to need more funds than are currently available due to its severely degraded infrastructure, post-conflict looting and sabotage, and additional security costs. The United States faces three key challenges in stabilizing and rebuilding Iraq. First, the security environment and the continuing strength of the insurgency have made it difficult for the United States to transfer security responsibilities to Iraqi forces and to engage in rebuilding efforts. The security situation in Iraq has deteriorated since June 2003, with significant increases in attacks against the coalition and the coalition's partners. Second, inadequate performance data and measures make it difficult to determine the overall progress and impact of U.S. reconstruction efforts. The United States has set broad goals for providing essential services in Iraq, but limited performance measures present challenges in determining the overall progress and impact of U.S. projects. Third, the U.S. reconstruction program has encountered difficulties with Iraq's ability to maintain new and rehabilitated infrastructure projects and to address maintenance needs in the water, sanitation, and electricity sectors. For example, as of June 2005, U.S.-funded water and sanitation projects representing about $52 million of approximately $200 million spent on completed projects were inoperable or were operating at lower than normal capacity. The United States has made a significant investment in the rebuilding and stabilization of Iraq. To preserve that investment, the United States must address these critical challenges. |
The Federal Employees’ Retirement System Act of 1986 (FERSA) created the Thrift Savings Plan (TSP) as one of the basic elements of a new retirement system for federal workers to, among other purposes, provide options for retirement planning and encourage personal retirement savings among the federal workforce. Most federal workers are allowed to participate in TSP, which is available to federal and postal employees, members of Congress and congressional employees, members of the uniformed services, and members of the judicial branch. Eligibility depends upon coverage under the Federal Employees’ Retirement System or the Civil Service Retirement System and civilian or military status. Eligible federal employees are able to contribute up to a fixed percentage of their annual base pay or a flat amount subject to IRS limits. Additionally, certain participants are eligible for automatic 1 percent contributions and limited matching contributions from the employing federal agency. TSP provides federal (and in certain cases, state) income tax deferral on employee contributions and their related earnings, similar to the deferral offered by many private sector 401(k)-type pension plans. As with a 401(k) plan, participants are able to contribute a portion of their basic salary into an individual tax-deferred account. Participants have the ability to manage their accounts and conduct a variety of transactions similar to those available to 401(k) participants, such as reallocating contributions, borrowing from the account, making withdrawals, or purchasing annuities. Administration of TSP falls under the purview of the Federal Retirement Thrift Investment Board, an independent agency in the executive branch established by Congress under FERSA. This five-member, presidentially appointed Board’s primary responsibilities include establishing policies for the investment and management of TSP. In addition, the Board must manage the Thrift Savings Fund solely in the interest of the participants and beneficiaries of TSP, as well as create administrative policies for TSP. In addition to assigning these broad duties, FERSA charges the Board with appointing an Executive Director and an Employee Thrift Advisory Council (ETAC). The Executive Director and staff are responsible for implementing the Board’s policies and managing the day-to-day operations of TSP, prescribing regulations to administer FERSA, and other duties. ETAC advises the Board and Executive Director on matters relating to the investment and administration of TSP. While the Executive Director has responsibility for establishing and maintaining TSP participant accounts, these account record-keeping services are currently performed primarily by the U.S. Department of Agriculture’s National Finance Center (NFC) under a contractual agreement with the Board. Through this agreement, the TSP service office of NFC is responsible for updating participants’ accounts based on data provided by agency payroll offices, processing participant account transactions, and providing customer service support related to these functions. Federal agency payroll and personnel offices also play a role in the administration and customer service activities of TSP. These offices are responsible for helping new employees enroll in TSP when they are hired and for helping existing employees make changes during designated semiannual seasons called open enrollment periods. 
They also retain control of administrative functions affecting employee payroll deductions, such as the election and alteration of contribution percentages. Additionally, FERSA requires TSP to provide information to employees participating in TSP and requires the Office of Personnel Management (OPM) to establish a training program for all retirement counselors of federal agencies. Though they emphasize different approaches, TSP and private sector plan managers enable customers to select their preferred means of service from a similar range of service options—including telephone, Internet, and on-site assistance (fig. 1). Both TSP and private plan managers give customers the option of obtaining assistance through automated telephone systems as well as live representatives located at call centers, although they emphasize different standards when evaluating the assistance provided. In addition, both TSP and private managers provide Web sites that deliver plan information and allow participants to conduct personal transactions, but private plan managers emphasize the use of their Web sites as the primary vehicles for delivering retirement education and information to participants. Finally, while both TSP and private managers use on-site coordinators to provide plan information, this customer service function is more heavily used by TSP. Whereas TSP managers said that agency representatives serve as the initial contact points for actively employed TSP participants to learn about TSP, private managers use on-site representatives less and do so to supplement call center representatives and Web-based resources. TSP managers provide an automated, toll-free telephone system and call center staff that help answer participants’ questions, and TSP managers measure the efficiency of the call centers based on quantifiable standards, such as the time it takes to respond to incoming calls. TSP’s automated telephone system, known as the ThriftLine and operated by the National Finance Center, allows participants to access general plan information, including share prices, rates of return, current loan interest rates, current annuity interest rates, and plan news. In addition to obtaining general plan information, participants can gain access to the ThriftLine system to get personal account information, such as their account balance and how they are allocating their contributions, or to manage their account, such as by making interfund transfers or withdrawals. Participants can also use the ThriftLine system to reach a TSP call center representative. TSP managers also maintain a call center where participants can reach service representatives; these representatives answer participants’ questions and can also help them make changes to their accounts. About 155 call center representatives field incoming telephone calls and answer participants’ questions about loans, allocation changes, interfund transfers, and withdrawals. In addition to answering participants’ questions, these call center staff also register requests for assistance and complaints in a record-keeping system, which allows supervisors or lead analysts to review such comments and respond to participants to help them resolve problems. Call center staff said participants most commonly call to ask questions about loans or withdrawals. The staff also indicated that in the past participants with loans were more likely than others to call back seeking more help. Recently, the Board has taken a number of steps to improve telephone service to TSP participants. 
These included establishing an additional call center to reduce service interruptions and providing toll-free telephone service to facilitate participant access to service representatives. The new call center, located in Cumberland, Maryland, opened in July 2004 to complement the center located in New Orleans, Louisiana, during normal operations and provide backup during weather-related or other local events that could otherwise interrupt service. Also in July 2004, the Board began providing toll-free telephone service through which TSP participants can obtain account or transaction information via the automated telephone service 24 hours a day, 7 days a week. TSP managers emphasized the efficiency of call centers based on quantifiable standards—such as the percentage of calls that are answered within a specified time frame—and pointed to their performance on these measures as evidence of call center productivity. For example, the TSP call center tries to answer at least 90 percent of calls within 20 seconds of a participant calling. TSP was meeting this standard in 2003 prior to the implementation of the new record-keeping system and has since begun meeting it again. However, TSP failed to meet this standard for about a 10-month period (May 2003 through February 2004), when it was experiencing an increase in call volume prompted by the implementation of the new record-keeping system. Finally, TSP also closely monitors the percentage of callers who hang up before receiving service (known as the call abandonment rate) and the average call length as a measure of service, and each call center representative is required to help an average of at least 50 participants per day. TSP managers said that resolving the participant’s concern on the first call is also a goal, and they have enlisted the help of a private sector contractor to make changes to improve the quality of their call center processes and are developing new standards for call center representatives. Private sector plan managers also provide caller assistance through automated toll-free telephone systems and live service representatives. Some of these managers have more than one defined contribution plan under their management and provide unique toll-free numbers for each plan. Participants complete a log-in process that routes them to an automated menu unique to their particular plan—a system similar to that provided by the ThriftLine. From this menu, participants can then check their account balances, get price quotes, receive plan information, and process account transactions. These automated customer service lines also give participants the option of speaking to a live call center representative who can provide assistance with transactions or answer their questions. In addition to providing basic assistance, service representatives use the call as an opportunity to direct participants to educational and plan materials available on their Web site. While private managers used similar means as TSP to provide caller assistance, they emphasized different types of standards than TSP when evaluating the success of their call centers. Private managers told us that they downplayed the importance of quantitative measures and instead focused on ensuring that service representatives fully satisfied each customer’s needs efficiently and politely in one call. 
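The quantitative call-center standards discussed above (service level, call abandonment rate, and average call length) can be made concrete with a small computation. The sketch below is illustrative only and is not drawn from TSP's or any plan manager's actual systems; the call records are invented, and the 20-second threshold simply mirrors the example standard cited in the text of answering at least 90 percent of calls within 20 seconds.

```python
# Illustrative only: computes the quantitative call-center measures discussed
# above (service level, abandonment rate, average call length) from a
# hypothetical list of call records. The data and thresholds are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CallRecord:
    wait_seconds: float              # time the caller spent in queue
    handle_seconds: Optional[float]  # talk time; None if the caller hung up first

def service_level(calls, threshold_seconds=20.0):
    """Percentage of answered calls picked up within the threshold."""
    answered = [c for c in calls if c.handle_seconds is not None]
    if not answered:
        return 0.0
    within = sum(1 for c in answered if c.wait_seconds <= threshold_seconds)
    return 100.0 * within / len(answered)

def abandonment_rate(calls):
    """Percentage of callers who hung up before receiving service."""
    if not calls:
        return 0.0
    abandoned = sum(1 for c in calls if c.handle_seconds is None)
    return 100.0 * abandoned / len(calls)

def average_call_length(calls):
    """Average talk time, in seconds, across answered calls."""
    answered = [c.handle_seconds for c in calls if c.handle_seconds is not None]
    return sum(answered) / len(answered) if answered else 0.0

# A hypothetical handful of calls for one representative's shift.
day = [CallRecord(12, 240), CallRecord(25, 300), CallRecord(40, None),
       CallRecord(8, 180), CallRecord(19, 420)]
print(f"Service level (20s): {service_level(day):.0f}%")    # 75%
print(f"Abandonment rate:    {abandonment_rate(day):.0f}%")  # 20%
print(f"Avg call length:     {average_call_length(day):.0f}s")
```

Note that the two measurement philosophies described in this section differ only in which of these numbers is treated as primary, not in how the numbers themselves are computed.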
Specifically, private sector managers emphasized this policy of first-call resolution as an overriding goal and said they viewed quantitative measures, such as those emphasized by TSP, as secondary or, in some cases, an inaccurate measure of call center performance. For example, while TSP keeps close watch on the call abandonment rate, private managers we spoke with placed little emphasis on this measure. In fact, one plan manager told us he believed the call abandonment rate to be an inaccurate measure of productivity, as some participants terminate calls after opting to visit the Web site in response to the telephone recording they hear while on hold. Furthermore, unlike TSP managers, who evaluate service representative utilization by representatives’ ability to answer a minimum number of calls each shift, private managers we spoke with did not have required call minimums, instead stressing that service representatives should fully resolve each participant’s needs, regardless of the time required. Finally, although private managers had requirements—such as answering 90 percent of calls within 20 seconds—specified in the contracts with their clients, they did not use such measures as the primary standard for evaluating service representatives or call center effectiveness. However, we did not attempt to assess the possible impact of these differences on the quality of customer service. TSP managers also provide customer service through their Web-based transaction system. TSP’s Web site is maintained and operated by TSP staff and, according to TSP managers, has received an average of about 15.7 million hits per month and processed an average of 216,696 transactions per month during the first 8 months of 2004. Participants can use the Web site for functions such as accessing plan information, making loan requests, downloading forms, transferring funds, changing allocations between funds, and accessing account balances, among other things. Participants can conduct many of these transactions over the phone as well, but by using the Web site, they can submit loan request forms online for quicker processing. According to TSP managers, the Web site has to a great extent replaced paper forms for processing transactions and providing information. For example, 93 percent of interfund transfers were conducted through the Web site during the first 8 months of 2004. TSP managers also said that they have eliminated paper versions of forms, statements, and bulletins that have traditionally been provided through the mail or sent to agency coordinators because most of these documents can be obtained through the Web site. TSP managers stated that their goal in doing so is to improve their level of service while decreasing costs. Private sector plan managers that we spoke with said that in addition to providing a means for participants to process transactions, they also intend their Web sites to be the primary vehicles for delivering retirement education and information to their participants. Like TSP managers, private managers said they allow participants to conduct different transactions, check account balances, and receive plan information electronically. One plan manager that we spoke with said that his Web site processed approximately 9 million transactions per month in 2003. 
In addition, we found that private sector Web sites provided a range of resources for participants, from Web-broadcast seminars to downloadable brochures and general investment information, and most private plan managers underscored the importance of the Web site as a tool that allows participants to access information on their own. One manager told us that his goal is not only to educate participants but also to influence participant behavior by strategically placing educational information on the Web site so as to link such educational materials to the topic or transaction a participant is pursuing. For example, the plan manager said that research has found that most participants visited the site for information on their account balance or personal rate of return, not to seek out educational materials. In response to findings about participant Web usage, the plan manager redesigned the site so that educational materials are embedded in relevant pages of the site, rather than isolated on a separate page. In contrast, TSP separates all plan and retirement information from account and transaction access. Designated by their federal agencies as liaisons with TSP managers, agency coordinators provide customer service to participants within their particular agencies. Agency coordinators, who are typically located in their agencies’ personnel or benefits offices, are the primary contacts for actively employed TSP participants. Generally, agency coordinators are expected to inform eligible employees of TSP options and benefits; maintain supplies of TSP forms and informational materials; collect, process, and submit TSP election forms to agency payroll offices; and respond to inquiries from active employees. For some agencies, the coordinator’s responsibilities are combined with those of the agency retirement counselor. FERSA authorizes agencies to designate retirement counselors who are responsible for providing employees with benefits information and mandates that OPM establish a training program for these retirement counselors. Agencies, however, have primary responsibility for designing and implementing their programs according to agency-specific needs. TSP managers said that the coordinators are the front line of TSP’s customer service operations. However, they also said that they lack the authority to monitor how frequently, at what point, and for what purpose coordinators contact participants. TSP managers view their role in providing customer service through agency coordinators as limited primarily to assisting these coordinators by distributing information and answering questions as they perform their TSP-related duties. The agency coordinators we spoke with said they were responsible for providing information on TSP to new employees, providing employees information on open season enrollment, conducting retirement seminars, helping with payroll issues, and answering any questions that employees may have about TSP. We found that the information and assistance provided to employees by coordinators in different agencies varied. For example, when we spoke with nine agency coordinators from large and small agencies, three coordinators told us that they had requested that TSP staff conduct seminars at their agencies to provide information on TSP and retirement planning. Of the six other coordinators that we spoke with, all but one said they conducted their own seminars or used an outside contractor. 
The agency coordinators said they also distribute information electronically or through posted bulletins, informational meetings, orientation sessions, and fliers in office mail. Several coordinators indicated that they refer participants to the TSP Web site or the ThriftLine to obtain plan information or to conduct transactions. The coordinators themselves receive guidance and new information on TSP from TSP managers by attending optional quarterly meetings and monthly coordinator training sessions and by reading TSP bulletins—monthly publications that provide recent information on the TSP and are available to agency coordinators through the TSP Web site and the mail. The agency coordinators we spoke with indicated that they had other responsibilities in addition to their TSP-related duties. Private sector plan managers rely less on on-site representatives and tend to deliver educational and transaction resources directly to participants. As a result, plan managers view call center representatives, automated telephone services, and Web sites as their front line of customer service operations. This emphasis on self-service results in these managers providing all available plan information and retirement education through one or more of these service delivery methods and using on-site representatives only as a backup measure or not at all. Private sector companies have been moving away from face-to-face service delivery approaches and toward live telephone assistance to meet consumers' expectations for fast and convenient service and to provide this service more cost-effectively. Additionally, by allowing participants to conduct all transactions—including making changes to contribution percentages—through the Web site or telephone services, plan managers also rely less on employers to provide services. One plan manager we spoke with described the use of on-site representatives as flawed because information provided by different on-site representatives may vary. By providing all information through a Web site or centralized contact center, the manager told us, the plan could ensure consistency of service. Despite this, some plan managers we spoke with did use on-site representatives in a limited capacity. One plan manager provided limited on-site support for its state government and school district clients, but the arrangement was specific to the plan sponsor and the plan manager did not have a companywide practice for utilizing such representatives. Specifically, the number, location, and function of representatives provided to a client varied from plan sponsor to plan sponsor. For its state government and school district clients, the plan manager provided on-site service only when the plan sponsor specifically requested a representative or during specified times, such as open enrollment periods. Private sector plan managers we contacted have adopted various other practices that are not featured within TSP, such as regularly assessing customer satisfaction and using regularly updated technology to improve customer service. Private managers frequently assess their performance by gathering participant feedback on the services provided and use this information to improve their customer service delivery. Although TSP managers have surveyed participants, they have not done so since the early 1990s and currently have no systematic approach to assess whether the plan's customer service meets participants' needs.
In addition, private managers that we spoke with appear to utilize more up-to-date technologies to provide customer service than do TSP managers. Private sector plan managers regularly gather participant feedback and use the feedback received to improve their customer service. Since most private plan managers deliver customer service through multiple methods, they use a number of mechanisms to gather participant feedback. These mechanisms are customized to gather information specific to the service method the participant used. For example, some plan managers we spoke with provide a link on their Web site to a survey that allows participants to share their impressions of and experiences with the service provided through the site. Some managers also use short on-the-spot surveys to gather feedback about a participant's experience on the plan manager's Web site. These plan managers also survey participants after they have interacted with the plan's contact center and voice response telephone system, using short, automated surveys at the end of a call. Managing officials told us that these surveys include questions regarding the service method's ease of use and whether the participant was able to complete the intended transaction, and they sometimes ask whether the participant has suggestions as to how the service method or transaction process could be improved. Traditional survey methods are also used, such as an annual mail or telephone survey to randomly sample participants and assess their overall level of satisfaction with the services provided. However, plan managers said that these surveys take longer and are more costly to administer than other methods. All of the plan managers we spoke with emphasized the importance of incorporating participant feedback into their customer service delivery models in order to better meet the needs of their participants. They told us that it is important to determine what is working and what is not by regularly gathering participant feedback. For example, one plan manager told us that his plan had been mailing participant statements to the employer for distribution for several years. He thought that this method of distribution would be more convenient and cost-effective for the plan and that since no participants were complaining, it must be working efficiently. However, when participants were surveyed, they made it known that the statements were usually not distributed by their employer in a timely manner, if at all, and as a result, statements are now mailed directly to participants. Another plan manager said that his plan had redesigned its entire Web site and added features based primarily on feedback from its participants. TSP managers said that they have not surveyed participants since the early 1990s because they were waiting for a new record-keeping system to be developed, and they said it would not have made sense to survey participants until after the transition issues were resolved and participants had some experience under the new system. Rather than survey participants shortly after the new system was implemented in 2003, TSP managers have decided to wait until a new investment option is added to the existing selection of five funds. The Board anticipates implementing the new option in mid-2005. Although the Board has a system for addressing participant complaints, TSP has no systematic mechanism in place for soliciting participant views about the quality of the service they are receiving.
Instead, TSP managers rely largely on indirect feedback from NFC and TSP staff and others, such as agency coordinators, who respond to complaints or requests for assistance from participants. Agency coordinators and the call center representatives at NFC provide some feedback to TSP managers regarding areas of potential improvement. TSP managers are also responsible for gathering participant feedback through the Employee Thrift Advisory Council, as required by FERSA. However, while some ETAC representatives provide TSP managers with feedback on draft TSP publications, legislative initiatives, and other issues, ETAC representatives do not systematically solicit feedback from their constituents. Some ETAC representatives may receive sporadic feedback from participants, but ETAC does not conduct surveys of plan participants. As a result, TSP managers are dependent on call center representatives or agency coordinators to forward any feedback they receive from participants. Also, the executive director of ETAC said that TSP participants might be more likely to raise customer service issues with their local representatives, such as union representatives, rather than elevate issues to the national level. Therefore, the extent to which participants within the represented agencies and employee organizations provide feedback to their ETAC representative is unclear. Because TSP relies on customer complaints as an indicator of participant satisfaction, its managers do not have the information necessary to determine the degree to which participants are satisfied with the services. A TSP official said that because participant complaints have decreased significantly and leveled off since the record-keeping system conversion in June 2003, participants are probably satisfied with the services that TSP is providing. However, TSP managers' reliance on complaints does not take into account participants who are dissatisfied and have not complained or do not know where to complain about the services they received. In other instances, participants have had to send letters to TSP managers or the TSP call center or to contact their member of Congress with problems or concerns about the services they received. Private sector plan managers that we spoke with also utilize more up-to-date technologies to improve customer service than TSP does. We found that the privately managed plans' Web sites provided participants more flexibility and options for managing and learning about their retirement accounts than did the TSP Web site. Private plan managers also told us that one part of providing the best possible customer service is allowing participants to serve themselves in an easy and convenient manner, and they have found that the more up-to-date technology that they have incorporated into their customer service delivery models helps facilitate participant self-service. For example, one plan manager's Web site allowed participants to instantly create and print account statements for any period of time. This allows participants to easily determine how much they earned, lost, or contributed for any given period. Plan managers said that such information helps participants make decisions about how to allocate their funds and about retirement planning. Conversely, the TSP Web site makes statements available only for calendar year quarters, and the participant must wait approximately 2 weeks after the close of the quarter to obtain the statement.
Privately managed plans also use their Web sites to help participants understand their retirement plan and the options available to them within the plan. For example, one plan manager we spoke with provided access to prerecorded seminars on its Web site that cover current account-related topics and features. Participants may view these seminars at any time as an alternative to the printed literature that is also available through the Web site. Conversely, the TSP Web site offers only its general plan information brochure, available in hard copy or for viewing online, and that brochure has not been updated since mid-2001. The private plan managers that we spoke with also utilized more sophisticated calculators on their Web sites than does TSP. For example, one plan manager's Web site provided a calculator that would project a participant's account balance given certain hypothetical parameters, such as number of years before retirement, that the participant can select. The calculator is securely linked to the participant's account, and certain items, such as the participant's account balance and historical rate of return on assets, are automatically entered into the calculator, simplifying the process for the participant. TSP also offers an account projection calculator, but it is not on a secure server, all parameters must be specified manually, and the participant may not know what assumptions, such as the estimated rate of return, are reasonable to specify. The plan managers that we spoke with told us that participants are more likely to use and benefit from tools such as the retirement income projector if they are convenient and easy to use. All of the private plan managers that we spoke with emphasized the importance of keeping abreast of the latest technology and industry trends in order to provide participants with the highest possible level of customer service. For example, two plan managers told us that they regularly and systematically review not only their competitors' Web sites for new ideas and innovations but also the Web sites of other companies, such as Yahoo and Amazon, to keep abreast of the latest technological developments. Because TSP managers are not taking full advantage of such technology, TSP participants may not receive the benefits it offers. As plan managers have continually updated their Web sites to incorporate more sophisticated features, they have simultaneously experienced increased Web usage and decreased calls to service representatives by participants. Another manager stated that certain Web features could reduce mailing costs by allowing plan managers to provide some information electronically. Although TSP managers have a simple and functional Web site available for participant use, the site does not offer the flexibility and convenience of the Web sites provided by the private plan managers that we reviewed. Although TSP managers told us that they have recently taken steps to learn about industry innovations—including talking to and occasionally visiting private sector plan managers, viewing private sector Web sites, and reviewing the literature from researchers in the field—they have not yet institutionalized the routine collection of information to regularly and systematically assess trends and innovations in the defined contribution industry.
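The account projection calculators discussed above all reduce to the same compound-growth computation over a participant's remaining working years; the private sector sites simply prefill the inputs from the participant's account. The following is a minimal sketch of such a projection, not TSP's or any plan manager's actual calculator; all parameter names and example values are hypothetical.

    # Minimal sketch of an account projection calculator of the kind
    # described above; parameters and example values are hypothetical.
    def project_balance(balance: float, annual_contribution: float,
                        annual_return: float, years_to_retirement: int) -> float:
        """Compound the current balance and yearly contributions forward."""
        for _ in range(years_to_retirement):
            balance = (balance + annual_contribution) * (1 + annual_return)
        return balance

    # Example: $50,000 balance, $5,000 yearly contribution, an assumed
    # 6 percent return, and 20 years to retirement.
    print(f"${project_balance(50_000, 5_000, 0.06, 20):,.0f}")

A secure, prefilled version would draw the starting balance and historical rate of return from the participant's own account rather than requiring manual entry, which is the convenience the private sector calculators provide.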
TSP has become one of the largest retirement savings plans in the United States, and it must provide customer support and record keeping for millions of participants across many federal agencies throughout the country and, in some instances, the world. TSP must exchange information with multiple federal agency payroll, personnel, and data-processing representatives and handle millions of participant transactions every month. In addition, TSP relies on other federal agencies to provide participants with benefit information on TSP, including educational materials. Because these characteristics make TSP unique, its approach to customer service is naturally somewhat different from that of other pension plans. Nonetheless, some aspects of private sector practices seem appropriate for TSP managers to consider. For example, as TSP managers continue to modernize their customer service operations, they should look for opportunities to use participant feedback to identify broader innovations to improve their services, in addition to focusing on individual participants' specific concerns. The practices of private sector plan managers suggest that direct, ongoing participant feedback is invaluable in responding to the changing needs of plan participants. Without obtaining more frequent and representative feedback from participants, TSP managers cannot determine what improvements would best satisfy participants' needs. While recognizing the recent improvements made to TSP's customer service—such as the new record-keeping system, the additional call center, and toll-free telephone service—we conclude that TSP could benefit from additional measures to enhance customer service further. For example, the innovative, technological, and operational practices that have become commonplace in the private sector customer service industry might provide means for continual customer service improvement at TSP. Also, TSP could benefit from regular and sustained examination of private sector customer service practices. As TSP participants become more familiar with online financial transactions and the services provided by private plan managers, participants may come to expect services not currently provided by TSP. However, the costs of making changes to TSP's services would have to be balanced against the potential benefits such changes could provide to participants and the plans themselves. To help ensure that federal workers have the options needed to effectively plan and to encourage them to save for retirement, TSP managers should continually seek new ways to improve their customer service operations. Both the potential costs and benefits should be weighed in making decisions about changes to service. Therefore, we recommend that the Federal Retirement Thrift Investment Board direct the Executive Director to take the following actions: Develop and implement an evaluation effort to systematically assess the level of customer satisfaction and to identify, as needed, areas of potential improvement for the ThriftLine, caller assistance center, Web-based transaction system, and TSP coordinator program. It should consider a variety of approaches, including traditional methods such as surveying participants annually through the mail or telephone to assess their overall level of satisfaction with the services provided, supplemented by other approaches, such as exploring the use of Web-based and automated telephone call evaluation tools to randomly survey TSP participants.
Institutionalize the routine collection of information from the largest private sector plan managers to keep up with current industry trends and assess whether the new and existing practices used by private managers would prove advantageous and cost-effective to TSP. We obtained written comments on a draft of this report from the Federal Retirement Thrift Investment Board and made changes to the report as appropriate. The full text of these comments is reproduced in appendix II. In response to the Board's comments, we added information regarding its plans for surveying participants. We also added information regarding the Board's recent announcements that it will review the service provided by private plan managers. The Department of Labor was also provided a draft of this report for review and advised us that it had no comments. In its written comments, the Board disagreed with our recommendation regarding the implementation of an evaluation effort to assess the level of customer satisfaction. The Board stated that our recommendation was moot because it had previously announced its intention to survey TSP participants on a variety of topics, including client satisfaction. However, at its February 2004 monthly meeting, the Board said that such a survey would not be conducted for at least another 2 years. As we state in our report, the private sector plan managers that we spoke with believe that direct, ongoing participant feedback is needed to respond to the changing needs of plan participants. Without obtaining more frequent feedback from participants, TSP managers cannot determine what improvements would best satisfy participants' needs. The Board also suggested that certain aspects of plan performance indicate increased participant satisfaction. In particular, the Board stated that TSP's high participation rate, increased number of transactions, and low account withdrawal rate among inactive participants, such as retirees or former employees, imply a high level of participant satisfaction. Although such performance measures provide an indication of participant satisfaction, more sophisticated efforts, such as participant surveys, are needed to provide direct information on the changing needs of plan participants and what improvements would best satisfy those needs. We recognize that these efforts may require substantial resources, which is why other means, such as exploring the use of Web-based and automated telephone call evaluation tools to randomly survey TSP participants, are needed as supplementary efforts. Without a systematic effort to obtain feedback from participants, the Board is not in the best position to determine participant satisfaction. The Board also disagreed with our recommendation that it should routinely solicit information from the largest private plan managers to assess whether their practices would be advantageous and cost-effective if adopted by TSP. The Board said that it has conducted and will continue to conduct on-site reviews of the largest private plan managers to determine the services they provide and the technological capabilities they plan for the future. While the Board's recent visits to such plan managers are a positive step, we believe that a routine and systematic effort to survey practices used in the private sector customer service industry should be institutionalized as a regular aspect of TSP's operations.
Further, in considering which new practices to adopt, the Board would need to weigh the costs of making changes to TSP's services against the potential benefits to participants and the plans themselves. Conducting such cost-benefit analyses is a management function and would be inappropriate for GAO to initiate. The Board suggested that the section of our report on customer service options questions the federal government's reliance on agency retirement counselors for the delivery of TSP information and service. This section of the report was descriptive, illustrating the different ways in which private sector entities and TSP managers provide customer service. While we noted that some federal agencies use on-site coordinators, we observed that, in general, the private sector tended to rely less on coordinators and more on call center representatives, automated telephone services, and Web sites to deliver informational and transactional resources to participants. The Board stated that it rejects the suggestion that the focus of retirement education be shifted away from retirement counselors. We are not suggesting that the agencies move away from retirement counselors, but we emphasize that providing retirement information through the use of retirement counselors does not preclude TSP managers from also providing such information through call centers and Web sites. The Board questioned the feasibility and value of implementing several customer service features used as examples in the draft report. For example, we included a discussion about a private sector Web site that could display a graph of the projected growth rate of a participant's account at the current contribution rate as well as at a slightly higher hypothetical rate to demonstrate the effect of higher contributions. This example was used only to illustrate the type of educational features used by the private sector and was not necessarily intended as a specific recommendation to be implemented. However, we maintain that the private sector offers a range of Web features, particularly those concerning education and retirement information, that are not available to TSP participants. Similarly, other examples, including on-the-spot participant surveys and more flexible options for statement preparation, were intended to illustrate the types of advanced tools private sector plan managers provide to their participants, not as specific recommendations to be implemented. The Board also stated that the draft ignored TSP's superior performance results as measured by key standards that demonstrate that TSP significantly outperforms the private sector. In response, we noted that TSP had generally met its standard for call center performance measurement, such as the percentage of all calls answered within 20 seconds, except for a period of about 10 months of increased caller volume as the Board implemented its new record-keeping system. However, as presented in the draft, private managers told us that they downplayed the importance of quantitative measures and instead focused on ensuring that service representatives fully satisfied each customer's needs efficiently and politely in one call. Specifically, private sector managers emphasized this policy of first-call resolution as an overriding goal and viewed the quantitative measures, such as those emphasized by TSP, as secondary or, in some cases, an inaccurate measure of call center performance.
Finally, the Board stated that the draft failed to address three of the five areas of interest to the committee. At the conclusion of the design phase of our study, GAO and the committee staff agreed to the objectives stated at the beginning of this report. We eliminated the other areas of interest largely because they overlapped with the scope of an ongoing audit by the Department of Labor of TSP's customer service. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Labor, the Federal Retirement Thrift Investment Board, appropriate congressional committees, and other interested parties. In addition, we will make copies of this report available at no charge on GAO's Web site at http://www.gao.gov/. If you have any questions concerning this report, please contact me at (202) 512-7215 or Tamara Cross at (202) 512-4890. Other major contributors include Daniel Alspaugh, Amy Buck, Richard Burkard, Erin Daugherty, Malcolm Drewery, Michael Morris, Corinna Nicolaou, and Roger Thomas. To describe how Thrift Savings Plan (TSP) managers provide customer service, we interviewed TSP managers at the Federal Retirement Thrift Investment Board headquarters in Washington, D.C., and officials at the Department of Agriculture's National Finance Center (NFC), which has been the major contractor for account maintenance and participant support since TSP began operations in 1987. In addition, we reviewed documentation governing TSP's customer service operations, attended monthly Board meetings, and interviewed officials at the Department of Labor and the Employee Thrift Advisory Council (ETAC). Specifically, to describe the customer service operations of the ThriftLine and call center, we also observed center operations at the NFC in New Orleans, Louisiana, including customer service representatives taking telephone calls from TSP participants; we also collected and reviewed information on the number of calls made to the ThriftLine, the percentage of calls answered within specified time frames, and the number and duration of calls referred to a customer service representative. To describe the customer service operations of the TSP Web site, we also observed and documented the information and features available on the Web site through our personal computers at GAO and obtained information from TSP managers on the number of visits made and transactions conducted over the TSP Web site. We also interviewed TSP managers and NFC officials regarding the reliability of their call center or Web site data, such as the number of calls received or the number of interfund transfers transacted through the Web site. However, they were unable to provide information about any reliability tests that might have been performed. Call center and Web site data are presented here for illustrative purposes. Because these data are not central to the findings, conclusions, or recommendations of this report, we did not assess their reliability. To describe the role of the agency coordinator, we also interviewed nine TSP coordinators, selected on the basis of a nonrandom sample of federal agencies with 5,000 or more employees (U.S.
Postal Service, Department of Defense, Social Security Administration, and General Services Administration) and federal agencies with fewer than 5,000 employees (Government Accountability Office, Federal Trade Commission, Office of Personnel Management, Small Business Administration, and the Railroad Retirement Board). We also reviewed the Federal Employees' Retirement System Act and Department of Labor guidance on responsibilities of the federal agencies participating in TSP. To describe how private sector plan managers provide customer service through call center representatives, automated telephone systems, Web sites, and on-site representatives, we visited managers of other large defined contribution plans, conducted interviews, observed their customer service operations, and in some cases obtained documentation on their operations. Specifically regarding private call center representatives and automated telephone systems, we obtained information regarding their customer service philosophies, use of call center benchmarks, call center staff training and evaluation policies, call center technology, and call volume management. In some cases, we obtained call center representative training manuals and call center statistics. We also listened to live calls with call center representatives and observed the software tools used by these representatives to assist callers at some of the plan managers that we visited. We also observed their Web site pages and features, such as their retirement calculators and education materials, and discussed with management and information technology specialists how often they update their Web sites and how they determine what to include on their sites. To identify customer service practices used by private sector plan managers that TSP managers could consider to improve their services, we also obtained copies of both online and printed customer service-related survey instruments and, in some cases, survey results, and discussed with plan managers how they used this information to determine what changes would be made to their customer service delivery. To select managers of similar defined contribution plans for comparison, we chose 4 of the top 10 private sector plan managers and 1 of the top 3 state and local government agency plans based on total plan assets under management as of December 31, 2002, as reported by Pensions & Investments and Plan Sponsor Magazine. The five plan managers that we spoke with had an average of $169.8 billion under management and serviced an average of 4.8 million participants. They ranged in size from $9.8 billion to $282.0 billion under management and from 150,000 to 10.3 million participants. We also spoke with private and public pension industry groups such as the Profit Sharing Council of America and the National Association of Government Defined Contribution Administrators about our selection methods.

Intended to resemble private sector 401(k) pension plans, the federal government's Thrift Savings Plan (TSP) held more than $128 billion in retirement assets for over 3 million participants at the end of 2003. Customer service-related difficulties during the Federal Retirement Thrift Investment Board's (TSP's governing body) record-keeping system conversion in 2003 led the Chairman of a Senate Committee to ask GAO to examine the customer service provided to TSP participants.
This review describes (1) customer service provisions within TSP and those offered by private sector managers and (2) customer service practices used by private sector plan managers that could be considered for use in TSP. TSP managers and private managers (servicing multiple pension plans) enable participants to select their preferred means of customer service from a similar range of options—such as telephone, Web sites, and on-site representatives—but each emphasizes different approaches. Both TSP and private plan managers provide customer service through automated telephone assistance as well as live representatives located at call centers. Both TSP and private managers also use standards to measure the efficiency and effectiveness of their call centers. However, TSP managers emphasize the efficiency of call centers based on quantifiable standards, such as the time it takes to respond to incoming calls, while private plan managers place a greater emphasis on the policy of satisfying each customer's needs in one call. Both TSP and private sector plan managers also use Web sites to deliver plan information and allow participants to conduct personal transactions, and private plan managers emphasize the use of their Web sites as the primary vehicles for delivering retirement education and information to participants. Finally, while TSP managers said that agency representatives serve as the initial contact points for TSP employees to learn about TSP and receive counseling, private plan managers make less use of on-site representatives, using them only to supplement services provided by call center representatives and Web-based resources. Private sector plan managers we contacted have adopted various other practices that are not featured within TSP, such as regularly assessing customer satisfaction and using regularly updated technology to improve customer service. These managers gather participant feedback on their voice response system via short, automated surveys at the end of participants' calls and use short, on-the-spot surveys to gather information on participants' experience with their Web site. These plan managers emphasized the importance of incorporating participant feedback into their customer service delivery model in order to better meet the needs of their participants. Although TSP managers have surveyed participants in the past, they do not have a systematic approach to assess whether their customer service meets participants' needs. TSP managers rely largely on indirect feedback from customer service staff, agency coordinators, and others who respond to complaints or requests for assistance from participants. The privately managed plans we studied also appear to utilize more up-to-date technologies to provide customer service, such as allowing participants to create account statements for any period of time or offering seminars over the Web on different plan topics that participants can access anytime. The TSP Web site provides fewer options and relies more on basic features.
DOD's current policy calls for each military service to determine its requirements and acquire sufficient war reserve materiel for the execution of current wartime scenarios and to be able to sustain these operations until being resupplied. Thus, in developing their plans, the services must consider the availability of spare parts from their peacetime operating stocks, their war reserve spare parts inventories, and the industrial base, and then estimate what additional materiel they need to buy. The Army's industrial base and stationing strategies and DOD's regulations reflect the importance of the industrial base in supporting wartime operations and require the services to rely on the industrial base to the maximum extent possible. In addition, the Army is required to maintain a viable capability to monitor and assess the health of the industrial base and identify potential risks. The U.S. Army Materiel Command is responsible for determining the Army's requirements for war reserve spare parts, as well as the Army's estimate of what private industry can be expected to provide during wartime, in order to derive the war reserve spare parts shortfall. It receives technical expertise from the Army Materiel Systems Analysis Agency in determining its war reserve requirements and an estimate of what can be expected from private industry. The Command's major subordinate commands are responsible for purchasing specific types of materiel, such as aviation, tank, automotive, and communications parts, and they have a limited number of industrial base specialists who can be assigned to provide data for assessments. Figure 1 illustrates the steps that the Army follows to determine its war reserve shortfall. To determine how much war reserve materiel it needs to buy and put into its war reserve inventory, the Army develops estimates of when spare parts will be available from the industrial base during wartime. In preparing its estimates, the Army first calculates the total amount of war materiel that it needs to support current wartime scenarios. Specifically, it calculates its requirements by using a computer model that considers several factors, such as spare parts usage and breakage rates. Next, it determines the amount of peacetime and war reserve inventories that are available to meet that requirement. The Army then subtracts the amount it estimates the industrial base can be expected to provide during wartime. The remaining amount is considered the total spare parts shortfall. The total shortfall can then be divided into the amount for which Congress has authorized funding, any amounts budgeted for future years, and an additional amount the Army has not yet requested from Congress. As table 2 shows, in preparation for its fiscal year 2003 budget submission to Congress (part of the fiscal year 2003-2007 out-of-cycle Program Objective Memorandum), the Army calculated that it required $3.30 billion for its wartime spare parts. Of this amount, it estimated that $1.93 billion worth of spare parts would be available from peacetime and war reserve inventories. Another $0.13 billion expected to be available from private industry was then subtracted. The resulting total spare parts shortfall was $1.24 billion. Of this amount, the Army has received funding of $0.11 billion for fiscal years 2000-2002 and expects to request $0.47 billion in fiscal years 2003-2007. After accounting for these amounts, the Army reports a remaining spare parts shortfall of approximately $0.66 billion.
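The shortfall figures above follow from simple subtraction. The sketch below is illustrative only; it restates the fiscal year 2003 computation using the rounded figures (in billions of dollars) reported from table 2.

    # Illustrative restatement of the Army's fiscal year 2003 war reserve
    # shortfall computation, using rounded figures in billions of dollars.
    wartime_requirement = 3.30     # total wartime spare parts requirement
    available_inventories = 1.93   # peacetime and war reserve inventories
    industrial_base_offset = 0.13  # expected wartime supply from industry

    total_shortfall = wartime_requirement - available_inventories - industrial_base_offset
    print(f"Total spare parts shortfall: ${total_shortfall:.2f} billion")   # $1.24 billion

    funded_fy2000_2002 = 0.11      # amount already funded by Congress
    planned_fy2003_2007 = 0.47     # expected future-year budget requests
    remaining = total_shortfall - funded_fy2000_2002 - planned_fy2003_2007
    print(f"Remaining unrequested shortfall: ${remaining:.2f} billion")     # $0.66 billion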
The Army's approach for assessing wartime spare parts industrial base capability still does not use current data from industry. Rather, the Army's assessments of industry's capability to produce spare parts in wartime depend on historical data and lead-time factors that the Army develops itself. Without current data on industry's capability, assessments could be unreliable, resulting in reduced readiness due to critical spare parts shortfalls in wartime or inflated and costly war reserve spare parts inventories in peacetime. Moreover, the Army's budget requests to Congress for war reserve spare parts risk being inaccurate. In the past, the Army collected data directly from private industry through paper questionnaires, up to 22 pages long, sent to industry representatives. It stopped this practice primarily because of the poor response rates. According to Army Materiel Command officials, industry representatives said they saw no apparent direct benefit from filling in the lengthy questionnaires and, moreover, felt they should be compensated for their time and effort. We were told that command officials themselves do not believe that collecting current data from industry is cost-effective. Now, rather than collecting current data from private industry, the Army uses data that it acquired several years ago to create lead-time factors for estimating its wartime industrial base capability. These factors are based on out-of-date industry data. Furthermore, they were developed from a limited range of spare part items but were applied to all parts needed for war. For example, in developing its fiscal year 2003 budget submission to Congress, the Army used a formula with wartime lead-time factors that were derived from estimated accelerated peacetime administrative lead times and production lead times. These accelerated lead-time factors of 85 and 61 percent, respectively, were based on data obtained prior to 1998 for specific items, such as howitzers, that were managed by the Army Tank and Automotive Command's Rock Island facility. According to an Army document, this method of calculating lead times fails to account for variations that exist from item to item and can lead to unrealistic industrial base capability estimates. For example, a 1998 Army study found that 44 of 86 parts assumed to be supported by industry could not be supported, while 176 of 218 parts assumed not to be supported by the industrial base actually could be. Partly in response to the recommendation in our prior report, the Army has several initiatives underway to improve its industrial base capability assessments, but these initiatives continue to focus on historical, rather than current, industry data. In one initiative, the Army is developing a new approach to calculate its wartime spare parts requirements, in part, from data collected from private industry during 1998. In another, the Army Materiel Command has designed a tool—called the Industrial Base Hub—that brings together in one Web-based automated system a broad range of existing industrial base data. The data consist of war reserve requirements, producer capabilities, contract awards and actions, contractor businesses, and commercial businesses and finances. The Industrial Base Hub relies on historical data rather than on current data from industry.
In a third initiative, the Army Materiel Systems Analysis Agency has proposed periodically collecting data on production lead times for the 100 costliest spare parts, which account for 70 percent of the total dollar value of the entire wartime spare parts requirement. The agency believes that collecting current data periodically from the private manufacturers of these 100 costliest spare parts could be a reasonable way to get a cost-effective, reliable industrial base offset estimate. The Army could improve the reliability of its industrial base assessments by considering several key attributes present in DLA's industrial base assessment program. These include the collection of up-to-date industry data, the timely analysis of data to develop current and reliable industrial base assessments, and the use of analytical data to create management strategies aimed at reducing spare parts costs and the risk of shortfalls. To improve its management of spare parts for the services, and thus reduce costs and inventory, DLA re-engineered its industrial base capability assessment program. DLA's assessment program, called the Worldwide Web Industrial Capabilities Assessment Program, was started in the fall of 1999. It consists of a data collection tool and an analytical tool, which is used to create management strategies. (See appendix I for a more detailed description.) The data collection tool provides the capability to gather new and updated information directly from private companies via the Internet. Company representatives voluntarily respond to a series of on-line survey questions that, depending on how they are answered, are self-tailored to that company to simplify and speed up the survey process. Private companies provide information on what spare part items they can provide (or are willing to provide); what quantities they can produce; how long it will take to produce them under different scenarios (e.g., normal or crisis conditions); and what potential bottlenecks (e.g., availability of certain materials or equipment constraints) exist that could limit the production of certain spare parts. DLA validates this information as part of its assessment process before acting on it. The program's analytical tool provides analysts with immediate access to the automated data collected from industry. This provides the capability to develop timely and reliable assessments of industry's ability to provide various spare parts in peacetime as well as wartime. In addition, it provides the capability to use the analytical data to identify actual or potential parts availability problems (e.g., items with unusually long lead times or items that are involved in bottlenecks) and, based on this information, to create a management strategy for resolving these problems, for example, by changing its acquisition procedures or targeting investments in material and technology resources to reduce production lead times. Although DLA's industrial base assessment program is relatively new, it provides a number of examples that illustrate the effectiveness of collecting current data directly from the industrial base. Table 3 shows the impact on production lead time when it is based on up-to-date industry data. For example, clamp couplings for tanks, aircraft, and aircraft engines have a production lead time of 35 days during a crisis (surge) situation rather than a lead time of 156 days (lead time of record) previously estimated by DLA for normal, or peacetime, situations.
This more reliable information could result in greater economy in purchasing decisions. For example, private industry says it can provide a resilient mount within 70 days during a crisis rather than in the 163 days that DLA previously estimated. The war reserve requirement for this item occurs during the first 3 months of a war. The reduction in production lead time from 163 to 70 days means that the third month's requirement could be covered by industry, saving $4,810 by not buying the items in advance. Likewise, the war reserve requirement for the centrifugal fan spreads over the first 6 months of a war, with the bulk occurring during the last 3 months. The lead-time reduction from 109 days to 56 days means that months 2-6 could be covered by industry, saving $62,560 by not buying the items in advance. Additional benefits from the assessment program stem from evaluating currently collected and analyzed information to identify potential problems with production and create various management strategies to resolve them. For example, by identifying an unusually long lead time for a cesium lamp and examining the reasons for this, DLA was able to ultimately reduce the lamp's lead time from 360 days to only 30 days. The lamp is used on several types of Navy, Marine Corps, and Air Force aircraft in electronic countermeasure systems to defeat infrared missiles. The lamp cartridge, which is a critical element used in these systems, is made of exotic materials and operates at extreme temperatures and power levels. An industrial capabilities assessment concluded that the lead time of record for this item was 360 days. Negotiations with the vendor, however, reduced this to 300 days. The lead time of 300 days is due to the use of highly technical processes and several long-lead-time materials in its production. Because of the unique nature of the cesium lamp, additional measures were needed to reduce the lead time further. As part of a targeted investment, DLA awarded a contract to preposition and rotate long-lead materials and partially finished components, resulting in a further 270-day reduction in lead time to 30 days. As a result, DLA is spending $530,000 for this investment, compared with the $1.1 million it would cost to purchase and store an equivalent amount of finished product to meet war reserve requirements, saving approximately $600,000. The Army's approach for assessing wartime spare parts industrial base capability can be improved. A comparative analysis of DLA's program and the Army's approach shows opportunities for improvement, specifically in the areas of data collection, data analysis, and management strategies. Table 4 compares the DLA and Army industrial base assessment approaches for the three key attributes. By focusing on these attributes, DLA has made its industrial base capability assessment program a simplified, time-saving process through which companies provide current production capability data. For example, the process uses a streamlined Internet-based data collection tool that industry representatives say is an improvement over the old paper process. Also, DLA uses follow-up letters and phone calls to encourage use of the online data collection tool. Companies can then participate with DLA in creating management strategies to reduce lead times, which can reduce required war reserve inventories. Industrial base capability assessments built on current data, such as DLA's, create opportunities for sound decision making in the planning and purchase of Army war reserve spare parts.
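The resilient mount and centrifugal fan examples above share one logic: any month of the war reserve requirement that falls after the shortened surge lead time can be met by new production rather than by inventory bought in advance. The following is a minimal sketch of that logic, not DLA's actual method; the 30-day month convention is an assumption made for illustration, and the dollar savings come from the report's examples rather than from the sketch itself.

    # Sketch: which months of a war reserve requirement industry can cover
    # once its surge lead time is known; covered months need not be bought
    # and stored in advance.
    DAYS_PER_MONTH = 30  # assumed month length, for illustration only

    def months_covered_by_industry(lead_time_days: int, requirement_months: int) -> list[int]:
        """Return the requirement months (1-based) reachable by new production."""
        first_reachable_month = lead_time_days // DAYS_PER_MONTH + 1
        return list(range(first_reachable_month, requirement_months + 1))

    # Resilient mount: requirement spans the first 3 months of a war.
    print(months_covered_by_industry(163, 3))  # [] -- the old lead time covers nothing
    print(months_covered_by_industry(70, 3))   # [3] -- month 3 covered, saving $4,810

    # Centrifugal fan: requirement spans the first 6 months of a war.
    print(months_covered_by_industry(56, 6))   # [2, 3, 4, 5, 6] -- saving $62,560

Under these assumptions the sketch reproduces the coverage reported above: a 70-day lead time reaches the mount's third month, and a 56-day lead time reaches months 2-6 of the fan's requirement.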
The Army's approach to industrial base capability assessments lacks key attributes that include the collection of current industry data, the analysis of those data, and the creation of management strategies for improving wartime spare parts availability. Out-of-date data could result in reduced readiness and inflated or understated war reserve spare parts funding requests within budget submissions to Congress. Without a process that provides such analysis, the Army cannot identify long lead times and create management strategies to reduce lead times and thus the amount of inventory needed. In order to improve the Army's readiness for wartime operations, achieve greater economy in purchasing decisions, and provide Congress with accurate budget submissions for war reserve spare parts, we recommend that the Secretary of Defense direct the Secretary of the Army to have the Commander of the Army Materiel Command take the following actions to expand or change its current process consistent with the attributes in this report: establish an overarching industrial base capability assessment process that considers the attributes in this report; develop a method to efficiently collect current industrial base capability data directly from industry itself; create analytical tools that identify potential production capability problems, such as those due to a surge in wartime spare parts demand; and create management strategies for resolving spare parts availability problems, for example, by changing acquisition procedures or by targeting investments in material and technology resources to reduce production lead times. DOD partially concurred with the overall findings and recommendations. However, it nonconcurred with specific points in several of our recommendations relating to the need to improve the capability of the Army's approach to assessing industrial base capabilities. Our evaluation of the Department's specific comments on each recommendation follows. DOD agreed with the overall point of our first recommendation that it establish an overarching industrial base assessment process relying on the most accurate information available. However, it did not concur that the Army should change its current process to be consistent with attributes of the DLA program. It stated that the Army's current system already applies many of these attributes and must have the flexibility to do so in its own manner consistent with its specific requirements and resources. As we reported, our analysis shows the Army's program does not have all the key attributes, such as collecting current industrial base capability data from industry. Furthermore, we considered the Army's need for flexibility in managing and executing its program when developing our recommendation by stating that the Army's process should be consistent with, not necessarily mirror, the attributes of DLA's program. Therefore, we continue to believe our recommendation is appropriate. DOD agreed with the underlying premise of our second recommendation that the most accurate data lead to the most accurate estimates. However, it stated that we provided no evidence that more current data would result in a more accurate forecast of industry's capability to provide parts for war. As pointed out in our report, DLA provided examples of how it could save money by using current data it collected from industry, such as over $62,000 on the centrifugal fan.
Furthermore, we noted that a study done by the Army in 1998 showed that data collected at that time about actual industrial base capability differed significantly from the Army's estimates of industrial base capability. The department also did not agree to a comprehensive data collection effort, stating that keeping more current data does not warrant additional resources, and said that it will direct the Army to examine the feasibility of proactively collecting production data for a limited number of items. We recognized the potential for such an initiative in our report and stated that the Army Materiel Systems Analysis Agency believes that periodically collecting current data on the top 100 costliest spare parts could be a reasonable approach. Although this is a good first step, a comprehensive effort to collect current industrial base capability data directly from industry is basic to the recommendation's underlying premise and is a best practice. Therefore, we continue to believe that our recommendation has merit. DOD concurred with the point of our third recommendation that there is a need to identify potential production capability problems such as those resulting from a wartime surge in demand for spare parts. However, it did not agree that the Army does not have such a process. While the Army's approach may have many analytical features, it does not provide specific analyses of production capability. Such analyses contribute to identifying possible production capability problems and could enhance the Army's management decisions. Therefore, we continue to recommend that the Army create such analytical tools. Furthermore, in response to DOD's comment about the need to validate survey data on production capability before taking action, we added information to our report stating that DLA does validate its industry surveys as part of its process. With regard to our fourth recommendation, DOD concurred with the concept that management strategies are needed to resolve spare parts availability problems. But it disagreed with the implication that the Army has no such strategies. While the Army does have some processes at the individual command level that identify and address spare parts availability problems, we did not find an overarching process to create management strategies designed to reduce lead times and inventories. Therefore, we continue to believe that our recommendation is appropriate. To determine whether the Army is using current industrial base data for assessing wartime spare parts industrial base capability, we interviewed Army officials responsible for war reserve spare parts planning, requirements development, and estimation of industrial base capability in the Office of the Army Deputy Chief of Staff for Logistics in Washington, D.C.; the Army Materiel Command in Alexandria, Virginia; the Army Aviation and Missile Command at Redstone Arsenal, Alabama; and the Army Materiel Systems Analysis Agency at Aberdeen Proving Ground, Maryland.
To determine whether opportunities exist to improve the reliability of the Army's industrial base capability assessments, we compared the Army's approach to key attributes of DLA's program by interviewing DLA officials in the Supplier Assessment and Capability Division at Fort Belvoir, Virginia, and the Defense Supply Centers in Richmond, Virginia, and Columbus, Ohio, which are responsible for an industrial base data collection and analysis activity using information from private industry to improve spare parts management. We also reviewed the processes used by the Army and DLA to assess industrial base capability. We performed our review between October 2001 and May 2002 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense and the Secretary of the Army. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. The Defense Logistics Agency's (DLA) industrial base assessment program operates within the Supplier Assessment and Capability Division in the Acquisition Management and Logistics Policy Directorate. Among the division's objectives are (1) to provide information tools to assess the capabilities of suppliers and (2) to identify potential readiness shortfalls and mitigate them through various business practices, such as investing in long-lead materials and taking advantage of manufacturing commonalities. To achieve these objectives, the division has developed a variety of tools to assess the supplier base in each of its major product categories—weapon systems and hardware, construction, medical supplies, subsistence items, and clothing and textiles. Using these tools, DLA is able to evaluate suppliers' capabilities to provide items in both peacetime and wartime, to take actions to mitigate quantifiable risks, and to examine broad industrial base issues and trends, using statistically valid information. The tools allow assessments to be made by individual item or grouped by items, product family, sector or subsector, weapon system or platform, or supplier. One of these tools, the Worldwide Web Industrial Capabilities Assessment Program, was designed for assessing supplier resources available to the Defense Department. It is an automated, interactive, Web-based program that allows the gathering of information from industrial suppliers and the use of these data to assess the industrial sector's capabilities for supplying various items. It also enables information to be analyzed in a wide variety of formats in order to identify strategies directed toward reducing costs and providing wartime readiness. Developed in 1997, the Worldwide Web Industrial Capabilities Assessment Program replaced the old data collection process, which relied on mass mailings of lengthy (up to 22 pages), cumbersome questionnaires that suppliers had to fill out by hand. The response rate from industry was typically too low to allow any statistically relevant analysis. In addition, the narrative answers to key questions were incompatible with computer analysis, and thus the information that industry provided could not be acted upon. The development of the program changed both the way information is collected from industrial suppliers and the way that information is used to conduct industrial base assessments and analyses.
The data collection tool was built specifically for industrial base assessments. It resides on a Web site that can be easily accessed by industry representatives. It uses an interactive survey format to collect information directly from a company about its ability to supply certain items. A company's representative signs in and fills out, or updates, a survey questionnaire for each item or group of items that is supplied. Depending on how a user answers a question, the questionnaire automatically adjusts itself to remain as short as possible but still collect the essential information that is needed for analysis. The survey information is saved in a permanent database, which eliminates the need for a company to reenter information when it is updated. The program identifies each item by the supplier's own part number grouped by an industry standard classification code. This simplifies input of information for multiple items that might use the same production line or equipment. It requests a wide range of information about the industry's ability to supply an item, including high and low estimates of production time, capacity, potential constraints and bottlenecks, and inventory on hand. See table 5 for a list of the data fields. While the data collection tool interfaces with industry via the Web to gather data, the analytical tool, also Web-based, is a centralized tool that is available to all approved personnel regardless of location. The analytical tool allows analysts to assess what is needed in the way of industrial items and what the industrial base is capable of providing. It does this by combining the current information supplied by industry with existing DLA legacy data (e.g., item purchase histories and previous item shortfalls). Analysts can use this integrated database to examine information at various levels (e.g., individual item, family groups, sector and subsector, weapon system and platform, or supplier), to graphically depict this information in a range of formats, and to export the data to external files for further complex analysis. They can create statistically valid samples of discrete data to analyze. With this information, they are able to identify acquisition strategies that take advantage of similar manufacturing processes and effect changes in peacetime buying practices as a low-cost way of providing wartime readiness. Key contributors to this report were Richard Payne, Paul Gvoth, Leslie Gregor, Douglas Mills, and Nancy Benco.
The Clean Water, Safe Drinking Water, and Clean Air Acts emphasize the importance of state involvement in protecting the environment and public health and allow EPA to authorize states to implement their own programs in lieu of the federal program—referred to as program authorization. From 1986 to 1990, Congress amended these three acts to authorize EPA to treat Indian tribes in the same manner as states for purposes of program authorization. Under EPA's implementation of the Clean Water Act, a tribe may submit a request to EPA for TAS status and then submit a request for approval of its adopted water quality standards, or submit both the TAS request and the water quality standards approval request at the same time. Section 518 of the Clean Water Act lists the eligibility criteria EPA will use to approve TAS status and to authorize Indian tribes to administer Clean Water Act programs. In applying for TAS under the Clean Water Act, a tribe, among other things, submits a descriptive statement that includes a map or legal description of the area over which the tribe intends to assert jurisdiction. For purposes of this discussion, Indian lands can be separated into three general categories: (1) lands within the exterior boundaries of a formal reservation, (2) tribal trust lands lying outside formal reservation boundaries, and (3) individual allotments lying outside reservation boundaries. EPA considers lands within the boundaries of a formal reservation and tribal trust lands lying outside of formal boundaries to be reservations for purposes of section 518 of the Clean Water Act. For the third category—individual allotments—EPA has not historically considered the Clean Water Act to cover allotments outside of reservations. EPA follows similar processes for TAS under the Clean Water, Safe Drinking Water, and Clean Air Acts. EPA's approval process for tribal requests for TAS begins in its regional offices, where officials verify that the requests meet eligibility requirements. EPA also requires its headquarters staff to review the first TAS request received and approved in each region under the Clean Water and Safe Drinking Water Acts and to review all other TAS requests that appear to be nationally significant because, for example, of new legal issues. Where practical, regional and headquarters reviews are conducted concurrently, according to EPA officials. Except for specific tasks, such as the 30-day public comment period, EPA has not established time frames or goals for the length of its review process. In addition to relevant statutory and regulatory guidance, EPA may refer to federal case law concerning Indian tribes (which we refer to in this report as Indian case law) when reviewing a tribe's TAS request. For example, EPA uses Indian case law to determine whether a tribe has the authority to regulate an activity on land owned by nonmembers but located within a reservation. In particular, in 1981, the Supreme Court held that, as a general rule, absent delegation by federal statute or treaty, Indian tribes lack authority to regulate the conduct of nonmembers on non-Indian land within reservation boundaries, except when (1) nonmembers enter into a consensual relationship with the tribe or (2) activities by nonmembers on lands within the reservation threaten or have a direct effect on the political integrity, economic security, or health or welfare of the tribe. This ruling is known as the Montana test.
With respect to program authorization, EPA's review process is generally the same for tribes and states. Specifically, for the Clean Water Act, EPA determines, for example, whether the (1) water uses are consistent with the requirements of the act, (2) adopted criteria protect the designated water uses, and (3) appropriate technical and scientific data and analyses have been used. The Clean Water Act allows states and tribes to establish water quality standards that are more stringent than federal requirements. Under the Safe Drinking Water Act, EPA requires states and tribes to demonstrate the capability to administer primary enforcement responsibility in a number of key areas. Among other things, EPA determines whether the state or tribe has (1) adopted drinking water regulations that meet or exceed EPA's national primary drinking water regulations; (2) adopted and is implementing adequate procedures for enforcing its regulations, including demonstrating authority to assess penalties for violations; and (3) adopted and can implement an adequate plan to provide safe drinking water under emergency circumstances, such as hurricanes and other natural disasters. Under the Clean Air Act, EPA can authorize states and tribes to issue and enforce federal air permits. For this authority, the tribe must, among other things, submit a legal opinion stating that the laws of the Indian tribe provide adequate authority to carry out all aspects of the delegated program. EPA and the eligible tribe then sign a Delegation of Authority Agreement, which specifies the provisions that the tribe is authorized to implement on behalf of EPA. EPA is responsible for announcing this delegation in the Federal Register. Since 1988, 57 of the 562 federally recognized tribal entities in the United States have submitted 61 requests seeking TAS for program authority under the three acts; these entities are from 17 states. Of the 61 TAS requests, EPA has approved 32: 30 under the Clean Water Act, 1 under the Safe Drinking Water Act, and 1 under the Clean Air Act. The remainder are under review. Of the 32 approved TAS requests, 26 have also been approved for program authority—24 for Clean Water, 1 for Safe Drinking Water, and 1 for Clean Air. Figure 1 shows the states where tribes have submitted and been approved for TAS status under the three environmental acts, the number of TAS submittals, and the number of TAS approvals in each state. EPA relies on grants as one of its primary ways to carry out its mission of protecting human health and safeguarding the environment. Each fiscal year, EPA awards approximately $4 billion in grants to state and local governments, tribes, educational institutions, nonprofit organizations, and other entities for projects that range from conducting environmental research to constructing wastewater treatment facilities to developing regulatory programs. The funds are generally based on formulas laid out under each law or regulation. To be eligible for most EPA grant programs, a tribe must be federally recognized. In addition, for some grant programs, such as section 106 under the Clean Water Act (for the prevention, reduction, and elimination of water pollution), a tribe must also have obtained TAS status to be eligible.
For other grants, such as section 105 under the Clean Air Act (to develop and administer programs that prevent and control air pollution or implement national air quality standards), a tribe is not required to have TAS status, but tribes with TAS status face a substantially lower matching contribution requirement (5 percent with TAS status versus 40 percent without). The TAS criteria for grants are less demanding, and thus the review process is less rigorous, than those for TAS for program authority. In addition, the grant decision is based solely on EPA's expertise, and EPA does not generally seek public comments on whether the tribe has jurisdiction. Approval for TAS for grant purposes does not qualify tribes for TAS for program authority purposes; however, tribes may use their TAS grant status to help demonstrate capability to administer a program when applying for program authority TAS. Finally, for other grant programs, such as the Indian General Assistance Program, no TAS requirement exists. Financial assistance for tribal environmental programs is funded under EPA's State and Tribal Assistance Grants appropriation. The funds are generally based on specific formulas laid out in law or regulation, and regions that have the largest number of tribes receive the largest proportion of grant awards and grant dollars. The five states receiving the most tribal grants—Alaska, Arizona, California, New Mexico, and Oklahoma—are located in EPA's Regions 6, 9, and 10. Of the 1,343 grants awarded to Indian tribes under the Clean Water, Safe Drinking Water, Clean Air, and Indian General Assistance Program Acts in fiscal years 2002 through 2004, about 99 percent were awarded by EPA's regions. Each grant program has its own request and award process, and grant opportunities are based on funding availability. As a result, a tribe may receive a grant in one year and not in another. While funding of tribal grants has remained relatively constant, according to EPA officials, the agency's outreach to tribes and the growing awareness of environmental issues among tribes have led to steadily increasing numbers of requests and grants awarded. For the 20 cases we examined in detail, EPA followed its processes for approving tribal requests for TAS and for program authorization, except for adhering to the 30-day time frame for notifying governmental entities. However, for these 20 cases, as well as for another 12 tribal requests for TAS that EPA approved, the TAS review process was often lengthy. In addition to those 32 TAS approvals, EPA is currently reviewing 29 TAS requests, 27 of which were submitted more than a year ago. EPA officials agreed that more could be done to improve the timeliness of the review process, and the agency has recently begun working with its regions to determine the status of outstanding requests and how best to expedite reviews. The officials stated that evolving Indian case law and complexities associated with some jurisdictional issues sometimes required them to spend more time evaluating tribal TAS requests. Delays in the approval process may hinder a tribe's efforts to control its environmental resources. Furthermore, as we learned during our review, lengthy delays and a lack of transparency in the review process may discourage tribes from even submitting requests for TAS status.
In terms of tribal requests for approval of water quality standards, EPA approved most requests in less than 1 year, but the agency generally did not meet its own standard for approval within 60 days. According to our review of 20 approved cases in Regions 6, 9, and 10, EPA generally followed its established processes for reviewing and approving TAS requests. For example, EPA's files included the required documentation to support its decision to approve a TAS request. First, EPA always ensured that the tribe included a statement that the tribe is recognized by the Secretary of the Interior. Second, we found that EPA always ensured that tribes provided a statement that their governing body is carrying out substantial governmental duties and powers. To meet this requirement, tribes (1) described the form of tribal government; (2) described the types of governmental functions currently performed by the tribal governing body; and (3) identified the source of the tribal government's authority to carry out these governmental functions. Among other things, tribes provided tribal constitutions, by-laws, and treaties to demonstrate that they were carrying out substantial governmental duties and powers. Third, the cases we reviewed showed that EPA always ensured that the tribe documented its jurisdiction. Specifically, the files showed that EPA collected a map or legal description of the area over which the tribe intended to regulate—surface water quality, drinking water, or air quality; a statement by the tribe's legal counsel describing the basis for the tribe's assertion of authority; and documentation identifying the resources for which the tribe proposed to establish environmental standards. Some cases indicated that EPA followed up with a tribe when the request lacked adequate documentation to meet this requirement. Finally, EPA ensured that tribes submitted a narrative statement describing their capability to administer the program to which they were applying. For example, EPA ensured that tribes submitted a description of their previous management experience; existing environmental or public health programs administered by the tribal governing body and copies of related tribal laws, policies, and regulations; the entity that exercises the executive, legislative, and judicial functions; the existing, or proposed, agency that will assume primary responsibility for the environmental standards; and the staff's technical and administrative capabilities for managing an effective program, along with a plan for how the tribe will acquire and fund additional expertise. Additionally, EPA is required to promptly notify the tribe when the agency has received the TAS request. In three cases, EPA did not have evidence showing that it had notified the tribe that it had received the tribe's request. In these cases, an EPA regional official told us, the agency may have telephoned the tribe to acknowledge receipt of the tribe's request, and this information would not necessarily be documented. The only two time frames EPA has established require the agency to (1) provide appropriate notice to affected governmental entities within 30 days of receiving a tribe's TAS request and (2) give interested parties 30 days to comment on the request. For the 20 cases we reviewed, EPA always provided affected governmental entities and interested parties 30 days to comment.
However, in 17 of the 20 cases, EPA did not notify affected governmental entities of a tribe's TAS request within its established 30-day time frame; instead, notification took about 5 months, on average. EPA officials told us that, in most cases, they worked with Indian tribes to develop their TAS applications prior to the tribe's submission of its application. However, they said that in some cases, applications were still not complete when they were received, resulting in delays in providing notification to governmental entities. EPA officials said the agency prefers not to notify affected governmental entities of a tribal request until it agrees with the tribe that the application is complete. Figure 2 shows the review times for the 32 TAS requests approved from 1991 through June 2005. Appendix II provides additional details on the 32 tribal entities that were approved for TAS as of June 2005, the dates that the requests were submitted, and the date EPA approved them. Review times for the 32 requests ranged from 3 months to nearly 7 years. As figure 2 shows, 19 of the TAS reviews took 1 year or more for approval. Specifically, for the 20 cases we examined, 10 took more than 1 year for approval, with 2 taking more than 4 years. EPA regulations require that the agency process TAS requests in a "timely" manner, and internal guidance issued in 1998 emphasizes the importance of an efficient review process. However, EPA has never developed a written strategy that clarifies what it means by timeliness, including performance goals, and does not routinely track the time it takes to complete its review of these requests. Figure 3 shows the 29 TAS requests under review as of June 2005 and the time elapsed between the request and June 2005. As the figure shows, review of these TAS requests has generally taken 1 or more years, with 24 of the TAS requests under review for more than 2 years; 2 of the 24 requests have been under review for over 10 years. See appendix III for the details on the dates that requests were submitted. The number of TAS requests awaiting EPA approval has increased along with the average review time. Specifically, as of 1998, 12 requests were under review, and by June 2005, this number had increased to 29. In addition, the average review time for TAS requests approved as of 1998 was 12 months, and the average review time for TAS requests approved between 1998 and June 2005 was 28 months. The average review time for the 29 TAS requests pending as of June 2005 was about 63 months (or over 5 years). Figure 4 shows the number of requests submitted and the number that remained under review at the end of each year, from 1992 through June 2005. According to EPA officials, 15 of the 29 TAS requests currently awaiting approval require some type of action on the part of the tribe, such as providing additional documentation on the tribe's jurisdiction. The other 14 requests are awaiting EPA action, such as analysis and discussion with the tribe, consideration of comments received, and final regional and headquarters review. According to EPA officials, several factors contribute to lengthy TAS reviews. First, both regional offices and headquarters often review the requests. Regional offices have primary responsibility for reviewing and approving TAS requests, but EPA headquarters may repeat the review to ensure that the regional review fully addressed all legal requirements.
EPA's policy is for headquarters to review the first TAS request received and approved in each region under the Clean Water and Safe Drinking Water Acts and to review all other TAS requests that appear to be nationally significant because, for example, of new legal issues. In this regard, officials cited evolving Indian case law and complexities associated with some jurisdictional issues as significant contributing factors to added review time. In some cases, EPA officials explained, multiple reviews occur because, for example, a tribe may assert jurisdiction over lands outside of its recognized boundaries. These assertions have led to disagreements among the states and tribes, contributing to delays in EPA's review process. Moreover, EPA has never disapproved a tribe's TAS request. Rather than disapprove a tribe's request, EPA continues working with the tribe until it meets all the eligibility requirements, which could contribute to delays. EPA officials explained that, to the extent possible, the agency conducts its regional and headquarters reviews concurrently. Second, EPA did not emphasize timely review of TAS requests for some of the 20 cases we reviewed. For example: In one case, 20 months after receiving a tribe's TAS request, EPA asked for necessary information on the tribe's water bodies, water uses, and land status. This information should have been included in the original request and followed up on at the time. EPA provided a variety of reasons for delays in this tribe's review, including a lack of timely communication between the tribe and EPA. Based on the problems experienced in this case, EPA's responsible regional office reported that it has taken steps to increase its tribal outreach activities. In another case, 23 months after receiving supportive comments from governmental entities and over 1 year after regional counsel agreed that the tribe met all the legal requirements, EPA continued to request additional information regarding the tribe's jurisdiction. According to EPA officials, the agency inadvertently misfiled part of the tribe's application paperwork and was waiting for the tribe to provide a replacement copy of the jurisdictional map so EPA could complete its review. Finally, in one case under review for more than 4 years, the tribe amended its request in response to public comments. However, 2 years into the process, EPA was still requesting basic documentation that should have been included in the original request. Furthermore, more than 1 year before approving the tribe's TAS request, EPA determined that the request raised no nationally significant issues and stated that the tribal boundaries were clear. EPA officials agreed that there was a delay, but stated that they were not requesting basic documentation, such as the tribal constitution and codes, for the first time after the case had been in review for 2 years. Rather, the region had misplaced the original information provided by the tribe, and EPA was requesting that the tribe provide replacement copies of important information. In addition, according to EPA and tribal officials, some of the delays during the review process occurred because of turnover in tribal or EPA staffing. Specifically, we were told that some tribes have experienced staff turnover in their environmental departments that affected their capability to administer the environmental program. For example, in one region, EPA officials cited tribal turnover as a cause for delay in 3 of the 10 requests under review.
Furthermore, some tribal officials said that changes in their leadership sometimes shift their priorities away from following through with their TAS request. Finally, some EPA regional offices have experienced staff turnover, which caused some delay in reviewing requests because the new staff needed time to become acquainted with the tribes and to establish a relationship. For example, in one regional office, officials said that certain staff positions—those that deal directly with tribes—have changed about every 2 years. According to tribal officials, changes in both tribal and EPA regional staff have made it difficult to keep the continuity that the tribes believe they need to successfully administer a federal environmental program. According to EPA headquarters officials, in response to renewed concerns from tribes and within EPA, the agency has held management-level discussions with its regions to determine the status of outstanding requests and to determine how best to address the growing backlog. In October 2005, EPA headquarters officials stated that they had completed discussions with the regions and were analyzing the results to determine whether there are any systemic reasons for the lengthy review times. Some tribal officials told us that they have not submitted TAS requests because the process has become so lengthy. These officials, who represented five tribes in one western state, have observed the delays that other tribes in the state have experienced. They questioned the value of spending time and resources for such a lengthy process. Moreover, tribes cannot always determine the status of a particular request, the aspect of the review that may be delaying its approval, or the length of time it will take EPA to complete its review. This lack of transparency may hinder a tribe's understanding of what issues are delaying EPA's approval and what actions, if any, may be needed to address these issues. In one case, the regional office approved the request and sent it to headquarters for concurrence. While the request was in headquarters for about 2 years, regional officials told us they could not determine the status of the request and could not provide the tribe with adequate updates regarding its request. Tribal officials said that, even when asked, EPA could not provide the tribe with a comprehensive list of documents needed to complete the review. The request was under review at the time we completed our work—6 years after it was submitted. As specified in the regulations for the Clean Water Act, a tribe must provide appropriate notice to governmental entities and hold a public hearing to discuss its proposed water quality standards. The standards may change in response to hearing comments. Thirty days after the tribe approves the proposed water quality standards, it must provide the regional office with a transcript of the hearing, responses to comments, the tribal-approved standards, and a certificate from a responsible legal authority documenting that the water quality standards have been adopted in accordance with tribal law. Following approval of a tribe's TAS application, EPA's guidelines call for it to approve a tribe's water quality standards within 60 days of the tribe's official submission of those standards. For the 18 cases we reviewed under the Clean Water Act, EPA met its 60-day requirement for approving water quality standards for 7 of the submissions. However, it did not meet its requirement for the other 11 cases.
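The elapsed-time findings above rest on a simple computation: the number of days between a tribe's official submission and EPA's approval, measured against the 60-day standard. The sketch below is a hypothetical illustration of that check; the tribe names and dates are invented, and it does not depict any actual EPA tracking system (which, as noted above, does not exist in routine form).

```python
# Hypothetical sketch: measure water quality standards review time against the
# 60-day approval standard. Tribe names and dates are invented for illustration.
from datetime import date

submissions = {
    "Tribe A": (date(2003, 4, 1), date(2003, 5, 20)),   # (submitted, approved)
    "Tribe B": (date(2002, 7, 15), date(2003, 9, 3)),
}

STANDARD_DAYS = 60
for tribe, (submitted, approved) in submissions.items():
    elapsed = (approved - submitted).days
    status = "met" if elapsed <= STANDARD_DAYS else "missed"
    print(f"{tribe}: {elapsed} days ({status} the {STANDARD_DAYS}-day standard)")
# Tribe A: 49 days (met the 60-day standard)
# Tribe B: 415 days (missed the 60-day standard)
```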
Figure 5 shows the review times for the 18 tribes submitting water quality standards from 1992 through June 2005. See appendix IV for the details on the dates that tribes submitted their water quality standards and EPA approved the standards. As figure 5 shows, 11 of the reviews for water quality standards took more than 60 days, with 4 taking 1 year or more for approval. For fiscal years 2002 through 2004, EPA provided Indian tribes about $360 million in grants for a broad range of environmental activities. Of this total, 1,343 grants totaling approximately $253 million went to 461 Indian tribes under four major acts, including the Indian General Assistance Program—which helps tribes develop their capacity to administer environmental programs—and three environmental acts—the Clean Water, Safe Drinking Water, and Clean Air Acts—which help tribes manage their environmental programs. Furthermore, during these three fiscal years, EPA awarded an additional $106 million under other statutory authorities, including the Toxic Substances Control Act, the National Environmental Education Act, and the Comprehensive Environmental Response, Compensation, and Liability Act. Half of the $360 million was distributed through two specific programs: (1) the Indian General Assistance Program, to help tribes plan, develop, and establish environmental protection programs (approximately $114 million), and (2) the Clean Water Act, to help tribes prevent, reduce, and eliminate water pollution (approximately $66 million). Funds provided under the Clean Water, Safe Drinking Water, and Clean Air Acts may be used for such things as research, construction, and the development of regulatory programs. However, according to EPA officials, only a small part of the grant funds is used by tribes to apply for and develop regulatory programs under the various statutes. Although some, but not all, grants require TAS status, the standards of evidence EPA requires for TAS for grants are not as stringent as the standards for TAS for program authority. For example, the TAS grant decision is based on EPA's knowledge of the tribe and the tribe's area of jurisdiction. These decisions do not require EPA to seek comment from affected states and generally do not require a public comment period. Table 1 shows the amount of grant funding awarded under the Indian General Assistance Program and the three environmental acts for fiscal years 2002 through 2004. In general, tribes initially apply for funding under EPA's Indian General Assistance Program before applying for funds under the agency's environmental programs. The Indian General Assistance Program provides financial assistance to help tribes build capacity in order to administer their environmental programs. The Indian General Assistance Program grant does not require a tribe to have TAS status. The duration of these grants (up to 4 years) provides tribes with a stable funding source, which is useful to tribes without tax revenues. The tribes have used these grants to, for example, hire, train, and retain their own environmental experts and to plan, develop, and establish environmental protection programs. Grants for some environmental programs, such as section 106 of the Clean Water Act and section 1443 of the Safe Drinking Water Act, have special provisions for TAS status. For example, EPA requires that tribes receive TAS status for section 1443 grants, while EPA regulations provide that tribes with TAS status contribute less in matching funds for section 106 grants.
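To make the matching-fund differential concrete, consider a hypothetical award computed at the section 105 rates cited earlier (5 percent with TAS status, 40 percent without); the project cost is invented for illustration. For total project costs of $100,000:

```latex
\begin{align*}
\text{tribal share with TAS} &= 0.05 \times \$100{,}000 = \$5{,}000 \\
\text{tribal share without TAS} &= 0.40 \times \$100{,}000 = \$40{,}000
\end{align*}
```

The eightfold difference in the required contribution helps explain why TAS status can matter to a tribe's grant economics even where it is not a prerequisite for the grant itself.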
The four TAS criteria for grants are similar to those for program authority under the three acts: specifically, a tribe must be federally recognized, have a governing body carrying out substantial duties and powers, have adequate jurisdiction, and have reasonable capabilities to carry out the proposed activities. The primary difference between TAS for grants and TAS for program authority is that EPA does not generally seek public comments on tribal requests for grants. In addition, there is generally no need to determine tribal regulatory jurisdiction for TAS eligibility for grants. To encourage tribes to apply for these funds, EPA provides fact sheets about the various financial assistance programs, sends them grant solicitations, and provides training to help them develop their grant requests. Nearly all tribal requests are reviewed and funded at the regional level. Since the three environmental acts were amended to allow tribes to receive TAS status and to implement EPA programs, some tribes, states, and municipalities have disagreed over tribal land boundaries and environmental standards that may differ from state standards. However, neither EPA nor any of the entities we contacted could identify the number of disagreements that have arisen between tribes, states, and municipalities over environmental issues. Generally, the disagreements have been addressed through litigation, cooperative agreements, or legislation. In terms of litigation, for example: In City of Albuquerque v. Browner, the city challenged EPA's approval of the Pueblo of Isleta's water quality standards, which are more stringent than New Mexico's. The city asserted that EPA lacked the authority under the Clean Water Act to either (1) approve tribal water quality standards that are more stringent than required by the statute or (2) require upstream users such as the city to comply with the standards set by the Pueblo of Isleta, which is downstream from Albuquerque. A federal appellate court upheld EPA's authority to approve the Pueblo's standards. Among other things, the court noted that EPA is authorized to require upstream dischargers to comply with downstream standards. In Montana v. EPA, the state challenged EPA regulations allowing tribes with TAS authority to issue water quality standards applicable to all dischargers within a reservation, even those on land owned by nonmembers of the tribe. Montana argued that the regulations permit tribes to exercise authority over nonmembers that is broader than the inherent tribal powers the Supreme Court has recognized as necessary to self-governance. A federal appellate court held that EPA's regulations properly delineated the scope of inherent tribal authority. It noted that the Supreme Court had held that a tribe could regulate the conduct of nonmembers when that conduct threatens or has some direct effect on the political integrity, the economic security, or the health or welfare of the tribe. EPA had found that pollution of tribal water resources by nontribal members posed such serious and substantial threats to tribal health and welfare that tribal regulation was essential. In this case, the court held that EPA's regulations are a valid application of inherent tribal authority over nonconsenting nonmembers. Some tribes and states have addressed issues more collaboratively.
For example: The Navajo Nation's Environmental Protection Administration and the Arizona Department of Environmental Quality entered into a cooperative agreement in which, among other things, the state recognizes the jurisdiction of the Navajo Environmental Protection Administration over all lands within the Navajo Reservation and does not assert authority over those lands. In addition, Arizona and the Navajo Environmental Protection Administration agreed to share in the cost of pilot projects, including in-kind contributions and technical assistance. As a result of this collaborative effort, the tribe and state have been able to, among other things, share staff for training and assist one another with permit violations. In one instance, the tribe and the state investigated and found several areas of potential contamination from illegal petroleum leaks and spills. EPA ordered the responsible company to stop its illegal actions and prepare an environmental cleanup plan. Many different parties, including tribal, federal, state, and local environmental groups, collaborated in an air toxics study, begun in 1999, to help assess the impacts of hazardous air pollutants in the Phoenix metropolitan area. The study, which is still ongoing, will review the status of air toxics studies nationally and identify potential approaches that may be useful in the Phoenix area. In some cases, EPA facilitates a resolution of disagreements between states and tribes during the review process by working collaboratively with the tribe. For example, in one case, after discussing its application with EPA, a tribe amended its TAS submission by clarifying that it was not seeking approval to administer Clean Water Act programs on a portion of an adjacent river where jurisdictional issues had been raised; the tribe also stated that it would continue its efforts to work cooperatively with the affected parties. Legislatively, a statute enacted in August 2005 addressed some of the jurisdictional concerns in Oklahoma over TAS for program authority. Specifically, to be approved for TAS, the law requires Indian tribes and the state to enter into a cooperative agreement in which they agree to TAS status and develop a plan to jointly administer program requirements. This agreement is subject to the review and approval of EPA's Administrator after notice and an opportunity for a public hearing. The only tribe in Oklahoma that currently has TAS status for administering programs is the Pawnee Nation. According to EPA officials, tribes and states have not used the dispute resolution mechanism EPA established under the Clean Water Act in 1987 to address disagreements over water quality standards. Under this mechanism, EPA can attempt to resolve disputes when, for example, (1) differing water quality standards have been adopted pursuant to tribal and state law and approved by EPA; (2) a reasonable effort to resolve the dispute without EPA involvement has been made; and (3) a valid written request for dispute resolution has been submitted by either the tribe or the state. We could not determine why states and tribes have not used this mechanism to resolve disagreements. According to a U.S. Institute for Environmental Conflict Resolution official, states and tribes have not used the Institute to resolve disagreements over the Clean Water, Safe Drinking Water, or Clean Air Acts. Congress established this institute in 1998 to help parties resolve environmental, natural resource, and public lands conflicts. The U.S.
Institute serves as an impartial, nonpartisan entity that provides professional expertise, services, and resources to all parties to a dispute. The U.S. Institute helps parties determine (1) whether collaborative problem solving is appropriate for specific environmental conflicts, (2) how and when to negotiate, and (3) whether a third-party facilitator or mediator may be helpful in assisting parties in their efforts to reach consensus or to resolve conflict. The U.S. Institute also established the Native Dispute Resolution Network to provide an alternative for American Indians, Alaska Natives, and Native Hawaiians facing environmental conflicts. In commenting on a draft of this report, EPA advised us that it had recently contacted the Institute for assistance in discussions between tribal and state officials in Idaho on revising a lake management plan. We recognize that a tribe's initial request for TAS may not include all required documentation and that EPA's analysis of critical components of that request, such as the tribe's jurisdiction over its land, water, and air, may take some time. However, EPA has not laid out a written strategy, including an estimated time frame, for the TAS review process. Such a written strategy would help better focus EPA's efforts and provide greater transparency for the tribes on the status of EPA's review. We note that EPA has established time frames for completing some of its TAS processes, such as those for seeking public comment. We also note that, without a written strategy, the average approval time for TAS requests has increased from 12 months in 1998 to over 2 years as of June 2005. Moreover, in some cases, neither EPA regional officials nor the tribe knows the status of the tribe's TAS request. Without time frames or transparency in the review process, Indian tribes may be discouraged from even applying for TAS and program authority. To better facilitate the timely review of tribal requests for TAS status for program authorization and to increase the transparency of the process to tribes, we recommend that the Administrator of EPA develop a written strategy, including estimated time frames, for its tribal request review process and for providing periodic updates to the tribes on the status of their requests. We provided EPA with a draft of this report for its review and comment. In commenting on the draft report, EPA agreed with our findings and emphasized its commitment to carefully considering these issues. EPA also provided technical comments, which we have incorporated into this report as appropriate. Appendix V contains the full text of the agency's comments in a letter dated October 19, 2005. We are sending copies of this report to appropriate congressional committees; the Administrator, EPA; and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.
The Chairman of the Senate Committee on Environment and Public Works and the Chairman of the Senate Committee on Indian Affairs asked us to report on (1) the extent to which the Environmental Protection Agency (EPA) has followed its processes for reviewing and approving tribal requests for treatment in the same manner as a state (TAS) and program authorization under the Clean Water, the Safe Drinking Water, and the Clean Air Acts; (2) EPA's programs for funding tribes' environmental programs and the amount of dollars provided to tribes in fiscal years 2002 through 2004; and (3) the types of disagreements that have occurred between parties over EPA's approval for granting tribes TAS status and program authorization and the methods that have been used to address these disagreements. Although our review focused primarily on the Clean Water Act, we also reviewed EPA's process for reviewing and approving tribal requests under the Safe Drinking Water and Clean Air Acts. In addressing these issues, we collected information through case file reviews and interviews. To determine the extent to which EPA followed its processes for reviewing and approving TAS and program authorization requests, we reviewed EPA's statutory and regulatory authorities and guidance. Based upon this review, we developed a structured review guide for our case file reviews—a total of 20 reviews. We selected EPA's Regions 6 (Dallas), 9 (San Francisco), and 10 (Seattle) for our case file review because, collectively, these regions had 77 percent of all approved tribal requests for program authorization under the three acts (20 of 26). These regions also had the largest number of approvals for program authority—18 approvals under the Clean Water Act and 1 each under the Safe Drinking Water and Clean Air Acts. We reviewed in detail EPA's TAS and program authorization process under the Clean Water Act because most activity has occurred under that act. We also reviewed EPA's process for reviewing and approving tribal requests under the Safe Drinking Water and Clean Air Acts. Furthermore, we reviewed data provided by EPA on another 12 TAS and/or program authority approvals, bringing the total number of TAS approvals to 32. In reviewing the case files, we ensured that documentation existed to fulfill the statutory and regulatory requirements, compared the length of reviews with statutory deadlines, and examined the causes of delays. We used semistructured interviews with EPA officials in headquarters and in Regions 6, 9, and 10 to obtain their understanding of the TAS and program authorization processes under the three environmental acts. EPA also provided data on the 57 tribes that had applied for TAS status and/or program authorization, and the dates of request and approval (when applicable). We cross-checked this information with the case file documents for the 20 cases we reviewed. We also conducted interviews with selected officials from the Department of the Interior's Bureau of Indian Affairs, affected states, and representatives of Indian tribes in Arizona, New Mexico, Oklahoma, and Washington to discuss their knowledge of, and concerns about, EPA's processes for reviewing and approving tribal requests for TAS status and program authorization.
To examine EPA's programs for funding tribes, we obtained data from EPA's Integrated Grants Management System, a computer database used by the agency to manage and report on information about grants, to determine the number of federally recognized Indian tribes receiving funding for fiscal years 2002 through 2004. Specifically, we analyzed information on the number of grants and the dollars awarded under specific statutory authorities for cases where the recipient type was listed as "Indian tribe." This recipient type applies only to grants awarded to federally recognized tribes or intertribal consortia. According to EPA officials familiar with the data, tribes that are not federally recognized can receive grants; however, only federally recognized tribes are categorized as "Indian tribes" in the data element "recipient type." We assessed the reliability of EPA's Integrated Grants Management System data by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data, including past GAO reports and workpapers on the system, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. In addition, we reviewed and documented the various programs available to Indian tribes under the Indian Environmental General Assistance Program Act of 1992 and the Clean Water, Safe Drinking Water, and Clean Air Acts for fiscal years 2002 through 2004, as well as EPA's guidelines for providing funding to tribes through these programs. To examine the types of disagreements that have occurred between parties over EPA's approval for granting tribes TAS status and program authorization and the methods that have been used to address these disagreements, we reviewed EPA's statutory and regulatory processes for resolving disputes between different parties under the Clean Water Act. Although the dispute resolution provision specified in the Clean Water Act regulations has not been officially used, EPA staff provided us with other examples of tribes and outside parties creating collaborative agreements and resolving disputes. We also interviewed selected EPA, state, and tribal officials. In addition, we interviewed an official from the U.S. Institute for Environmental Conflict Resolution to gain an understanding of the entity's objectives, roles, and responsibilities. We performed our work between November 2004 and October 2005, in accordance with generally accepted government auditing standards.
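The grant analysis described above is, in essence, a filter-and-aggregate pass over database records. The sketch below is a hypothetical rendering of that step; the field names and sample records are invented for illustration and do not reflect the actual schema of EPA's Integrated Grants Management System.

```python
# Hypothetical sketch of the grant analysis described above: select records
# whose recipient type is "Indian tribe" and total the dollars by statutory
# authority. Field names and sample records are invented for illustration.
grants = [
    {"recipient_type": "Indian tribe", "authority": "Clean Water Act", "amount": 250_000},
    {"recipient_type": "State", "authority": "Clean Water Act", "amount": 1_000_000},
    {"recipient_type": "Indian tribe", "authority": "Indian General Assistance Program",
     "amount": 110_000},
]

count = 0
totals = {}
for grant in grants:
    if grant["recipient_type"] == "Indian tribe":  # federally recognized tribes only
        count += 1
        totals[grant["authority"]] = totals.get(grant["authority"], 0) + grant["amount"]

print(count, totals)
# 2 {'Clean Water Act': 250000, 'Indian General Assistance Program': 110000}
```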
[Appendix tables listing, for each tribal entity, the dates TAS requests were submitted and approved and the elapsed review times are not reproduced here. Table notes indicate that the Navajo Nation submitted separate TAS requests under the Clean Air Act and for the public water systems and underground injection control programs under the Safe Drinking Water Act, and that, according to EPA, the Coeur d'Alene Tribe was approved for TAS in August 2005.] In addition to the individual named above, Ronald E. Maxon, Jr., Assistant Director; Tyra DiPalma-Vigil; Chad Factor; Doreen Feldman; Richard Johnson; Crystal Jones; Jeff Malcolm; Rebecca Shea; and Carol Herrnstadt Shulman made key contributions to this report.
Two Department of Agriculture (USDA) programs—the Office of the General Sales Manager (GSM)-102 and GSM-103 export credit guarantee programs—are intended to promote the export of U.S. agricultural commodities. The programs' goals—under the 1990 Food, Agriculture, Conservation, and Trade Act, also known as the 1990 Farm Bill—are to develop, expand, or maintain U.S. agricultural markets overseas by facilitating commercial export sales of U.S. agricultural commodities. However, the statute also provides that the Secretary of Agriculture may not issue credit guarantees in connection with sales of agricultural commodities to any country that the Secretary determines cannot adequately service the debt associated with such sale. The law also prohibits issuing the credit guarantees for foreign aid, foreign policy, or debt-rescheduling purposes. More than $16 billion in export credit guarantees was provided by the GSM-102/103 programs during fiscal years (FY) 1990 through 1992. The former Soviet Union (FSU) and two of its successor states, Russia and Ukraine, received 43 percent of the guarantees that were made available in fiscal year 1992. During fiscal year 1991 and fiscal year 1992, the FSU obtained more GSM credit guarantees than any other country in the world. On the basis of our review of USDA documents and interviews with agency officials, we found that in deciding whether to extend GSM export credit guarantees to any country, USDA considers the prospects for developing and maintaining U.S. markets in that country. Before providing guarantees, USDA also assesses the country's overall creditworthiness. Creditworthiness, in the context of this report, concerns a country's ability and willingness to service or make timely payments on its current and future foreign debt obligations. Assessing the creditworthiness of nations generally involves a technical analysis of economic and financial indicators of the risk of nonpayment due to insufficient foreign currency and the likelihood that political or other nonfinancial events may disrupt payments. Some of the factors that affect creditworthiness are a country's level of indebtedness relative to its economic and financial resources, the ability of the country's government to effectively manage the domestic economy, and the general economic and political situation in the country. Creditworthiness also involves temporal considerations. For example, some debtors may be judged not creditworthy over the short run due to a lack of readily available foreign exchange but may be considered capable of servicing their debts if additional time is allowed for them to organize and marshal their resources. The Department of Agriculture's GSM-102 and GSM-103 programs are aimed at facilitating the export of U.S. agricultural commodities to developing countries and middle-income countries with hard currency shortages. They are intended to help importing nations make a transition from concessional financing to cash purchases, as well as to maintain import levels during periods of financial difficulties. Under the two programs, the U.S. government agrees to pay U.S. exporters or their assignees—U.S. banks or U.S. subsidiaries of foreign banks—in the event that a foreign buyer defaults on its loan obligation. By reducing the risk involved in selling U.S. agricultural products, USDA encourages exporters to explore new foreign market opportunities.
The USDA's Commodity Credit Corporation (CCC), which administers the GSM programs, attempts to share some of the credit risk with the exporter or the exporter's assignee (a bank or other financial institution). It usually does so by guaranteeing 98 percent of the value of the sale plus a portion of the interest payable. The exporter or the exporter's assignee is at risk for 2 percent of the principal and a portion of the interest payable. However, CCC has flexibility to adjust the amount of guarantee coverage it provides. USDA considers the GSM programs to be fully "commercial" in that they assist sales that are made by the private sector, and the interest rates are at "prevailing market levels." However, there is an important element of concessionality in the programs because recipient countries could not make the purchases without credit and loan guarantees. Furthermore, if the countries were able to obtain financing on commercial markets, they would have to pay a premium above the rates that they obtain from the GSM programs, since a U.S. government guarantee reduces the risk to the lender. Section 202(f) of the 1990 Food, Agriculture, Conservation, and Trade Act prohibits the Secretary of Agriculture from issuing export credit guarantees in connection with sales of agricultural commodities to any country that the Secretary determines cannot adequately service the debt associated with such a sale. The provision was established in response to a situation that developed in the late 1980s and early 1990, when creditworthiness considerations were minimized for foreign policy objectives in order to provide Iraq with GSM export credit guarantees. Following the allied response to Iraq's invasion of Kuwait in 1990, Iraq defaulted on outstanding guaranteed GSM loans. As of August 17, 1994, CCC had received claims from 10 banks regarding Iraq's defaults. These claims totaled about $2.2 billion, and CCC had paid claims to nine of these banks, totaling $1.7 billion. Section 202(f) requires USDA to determine whether a country is capable of servicing the debt that would result from providing export credit guarantees for agricultural commodities. If a determination is negative, the provision prohibits USDA from making credit guarantees available to that country. USDA advised us that before making any loan guarantee commitments, it assesses the creditworthiness of intended recipients of guaranteed sales and uses the information in deciding whether to provide guarantees to specific countries. In contrast to a direct loan, a credit guarantee does not involve dollar outlays to either the lender or the borrower when the loan is made. Nonetheless, budgetary outlays are required for loan guarantees when defaults occur and claims are made. To better account for the costs of federal credit programs, the Federal Credit Reform Act of 1990 required, beginning with fiscal year 1992, that the President's budget reflect the costs of the loan guarantee programs. To this end, new loan guarantee commitments can be undertaken only if appropriations of budget authority are made to cover their costs, including estimated payments by the government to cover defaults and delinquencies. The act exempted all then-existing CCC credit guarantee programs from the appropriations requirement. However, CCC advised us that it is establishing what it calls an "allowance reserve" to cover its estimate of possible defaults on GSM loans.
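The risk split described above can be made concrete with a hypothetical guaranteed sale. In the sketch below, the sale value and the share of interest covered are invented for illustration; as noted, CCC can and does adjust its actual coverage.

```python
# Hypothetical sketch of the GSM risk split: CCC guarantees 98 percent of the
# principal plus a portion of the interest payable, while the exporter or its
# assignee bears the remaining 2 percent of principal and the remaining
# interest. All figures are invented for illustration.
principal = 10_000_000      # dollar value of the guaranteed sale
interest_due = 800_000      # total interest payable over the loan term
interest_coverage = 0.5     # assumed share of interest CCC covers

ccc_exposure = 0.98 * principal + interest_coverage * interest_due
exporter_risk = 0.02 * principal + (1 - interest_coverage) * interest_due

print(f"Maximum CCC claim if the buyer defaults: ${ccc_exposure:,.0f}")   # $10,200,000
print(f"Exporter/assignee share of the risk:     ${exporter_risk:,.0f}")  # $600,000
```

Estimates of this kind, aggregated across a portfolio and weighted by expected defaults, are what an allowance reserve of the sort CCC described would be built on.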
USDA considers the GSM programs to be fully "commercial" in that they assist sales that are made by the private sector, and the interest rates are at "prevailing market levels." However, there is an important element of concessionality in the programs because recipient countries could not make the purchases without credit and loan guarantees. Furthermore, if the countries were able to obtain financing on commercial markets, they would have to pay a premium above the rates that they obtain from the GSM programs, since a U.S. government guarantee reduces the risk to the lender. Section 202(f) of the 1990 Food, Agriculture, Conservation, and Trade Act prohibits the Secretary of Agriculture from issuing export credit guarantees in connection with sales of agricultural commodities to any country that the Secretary determines cannot adequately service the debt associated with such a sale. The provision was established in response to a situation that developed in the late 1980s and early 1990, when creditworthiness considerations were minimized for foreign policy objectives in order to provide Iraq with GSM export credit guarantees. Following the allied response to Iraq's invasion of Kuwait in 1990, Iraq defaulted on outstanding guaranteed GSM loans. As of August 17, 1994, CCC had received claims from 10 banks regarding Iraq's defaults. These claims totaled about $2.2 billion, and CCC had paid claims to nine of these banks, totaling $1.7 billion. Section 202(f) requires USDA to determine whether a country is capable of servicing the debt that would result from providing export credit guarantees for agricultural commodities. If a determination is negative, the provision prohibits USDA from making credit guarantees available to that country. USDA advised us that before making any loan guarantee commitments, it assesses the creditworthiness of intended recipients of guaranteed sales and uses the information in deciding whether to provide guarantees to specific countries. In contrast to a direct loan, a credit guarantee does not involve dollar outlays to either the lender or the borrower when the loan is made. Nonetheless, budgetary outlays are required for loan guarantees when defaults occur and claims are made. To better account for the costs of federal credit programs, the Federal Credit Reform Act of 1990 required, beginning with fiscal year 1992, that the President's budget reflect the costs of the loan guarantee programs. To this end, new loan guarantee commitments can be undertaken only if appropriations of budget authority are made to cover their costs, including estimated payments by the government to cover defaults and delinquencies. The act exempted all then-existing CCC credit guarantee programs from the appropriations requirement. However, CCC advised us that it is establishing what it calls an "allowance reserve" to cover its estimate of possible defaults on GSM loans. As of June 30, 1992, CCC had approximately $9.04 billion outstanding in GSM-102 and 103 guarantees on loan principal and $4.51 billion in accounts receivable from loan guarantee payouts on delinquent GSM-102 and 103 guaranteed loans. In a December 1992 report, we estimated the cumulative costs of the programs at about $6.5 billion, or 48 percent of the total $13.55 billion, if the programs had been terminated on June 30, 1992. In the past, decisions to provide GSM loan guarantees to countries were influenced by foreign policy considerations. Principal recipients of guarantees were often countries that had significant foreign policy relationships with the United States. However, the 1990 Farm Bill stipulated that GSM export credit guarantees could not be used for foreign aid, foreign policy, or debt-rescheduling purposes. So, for example, if the Secretary of Agriculture determines under the debt-servicing requirement that a country cannot adequately service the debt that would arise from receiving agricultural export credit guarantees, no credits are to be extended—even if the president believes that such an extension would be in the national interest. Despite the problems that arose from the Iraqi loan guarantees, as cited earlier, many Members of Congress have expressed the view that GSM credit guarantee decisions should take account of foreign policy and national interest considerations. For example, in May 1991 the Senate approved a nonbinding resolution (S. Res. 117) recommending that the administration extend another $1.5 billion in agricultural credit guarantees to the Soviet Union—assuming the administration found the country could service the debt—if certain foreign policy objectives would also be realized. In 1991, attempts were made to provide more flexibility in granting export credit guarantees; amendments to the 1990 Farm Bill were proposed that would have allowed the president to provide guarantees when he believes they are in the national interest, regardless of the debt-servicing requirement and foreign aid/policy restrictions. However, these amendments were subsequently withdrawn. Similarly, the administration's 1992 bill for authorizing assistance to the former Soviet republics included a provision allowing the Secretary of Agriculture to take into account major economic reforms underway in those states in making a determination about the ability of the states to repay debt associated with GSM sales. However, this provision was struck from the bill that Congress passed in October 1992. USDA officials told us that although the 1990 Farm Bill prohibits the Secretary of Agriculture from issuing export credit guarantees for foreign aid or foreign policy purposes, the law does not mean that such assistance cannot simultaneously serve foreign policy objectives. They noted that during congressional hearings held in late 1991 and in related briefings provided by USDA to congressional staff, the principal congressional focus with regard to agricultural credit guarantees was on keeping U.S. food moving to the FSU rather than on the risks associated with providing the guarantees. Table 1.1 provides information on GSM program sales by country for fiscal years 1990, 1991, and 1992. As shown, total GSM-102 credit guarantees were $4.6 billion in fiscal year 1990, $5 billion in fiscal year 1991, and $6.1 billion in fiscal year 1992. The GSM-103 program accounted for about $1 billion in guarantees during fiscal years 1990 through 1992.
The table also shows that whereas the FSU received no GSM credit guarantees in fiscal year 1990, it was the major recipient in fiscal year 1991. In fiscal year 1992, the FSU and Russia, collectively, received more guarantees than any other nation. Table 1.2 depicts the distribution of GSM-102 and 103 sales by type of commodity for fiscal years 1990, 1991, and 1992. As the table shows, wheat, yellow corn, and soybeans and soybean meal accounted for the majority of sales. Within USDA, the Trade and Economic Information Division (TEID) of the Foreign Agricultural Service (FAS) is responsible for analyzing the ability and willingness of countries that have requested GSM-102 export credit guarantees to meet their current and future external debts, including potential GSM debt. TEID evaluates creditworthiness in terms of whether a country is able and willing to service its current and future foreign debt obligations. Access to sufficient hard currency is seen as the key to whether a country is capable of servicing debt (principal and interest payments). TEID notes that external debt can be serviced through revenues derived from a country’s current account, from foreign exchange received from debt and investment inflows, or from a drawdown of a country’s existing stock of foreign exchange reserves. Important factors that affect a country’s ability to service its debts, TEID says, include the status of the current account balance; the volume of trade; the variability in current receipts; the size of international reserves; the country’s access to capital account inflows, either from net direct investment, foreign borrowing, or foreign aid; and the country’s ability to reduce its imports of goods and services. TEID’s approach also includes a review of the general economic and political situation of countries. If a country’s economy is in a steep decline, its ability to earn foreign exchange from exports may be severely impaired. If the political system is unstable or if a country is subject to external threats to its sovereignty, concerns may arise about the country’s willingness or ability to meet its future debt obligations. The 1990 Farm Bill provision restricting when GSM-102 credit guarantees can be extended does not include a general creditworthiness standard. Rather, it requires that the Secretary of Agriculture determine whether a prospective borrowing country is capable of adequately servicing the debt associated with a specific, proposed GSM-102 sale to that country before issuing a credit guarantee. However, if a country is experiencing problems in servicing its debts or is likely to in the near future, any particular debt obligation (including a GSM loan), in our view, could result in default. A country with low creditworthiness may be able to adequately service the debt associated with a particular GSM-102 sale if the country is willing to assign a priority to repayment of that debt (i.e., to not treat other creditors equally). In fact, USDA officials told us that the U.S. decision to extend substantial export credit guarantees to the FSU during 1991 was partly based on the assumption that the Soviet government would give preferential treatment to GSM-102 debt. U.S. officials reasoned that food was a high-priority item. Without adequate supplies of food, political stability could be threatened. Moreover, Soviet and, subsequently, Russian leaders knew that if they fell into arrears on payments for GSM-102 guaranteed sales, the GSM-102 program would be suspended. 
Thus, they had an incentive to keep current on GSM-102 debt repayments if they wanted to secure future GSM credit guarantees. However, even if a borrowing country is willing to give preferential treatment to particular debts, creditor nations must pay close attention to its creditworthiness more generally. For example, if a prospective borrower country has low creditworthiness and its problems worsen, it may find it necessary to reschedule its debts. Under the normal rules of international debt restructuring, all official creditors (i.e., creditor country governments) are to be treated equally. Therefore, particular debts should not receive preferential treatment. In addition to evaluating the creditworthiness of potential GSM credit guarantee recipients, TEID establishes annual and total risk exposure guidelines to provide USDA with a yardstick for limiting its risk exposure in specific countries. The total exposure guideline for a country is TEID's recommended maximum dollar amount of GSM-guaranteed principal, rescheduled principal, interest arrears, and claims that the country should owe CCC at a given time. TEID's country risk grades and exposure guidelines are used in USDA's decisionmaking process for allocating credits to requesting countries. However, TEID's recommendations about whether to extend credit and, if so, how much, are not binding on the agency. USDA considers not only the risk of providing loan guarantees but also the potential for expanding or maintaining U.S. markets overseas. A Reconciliation Committee, consisting of representatives from TEID and several other USDA offices, meets and discusses both the risk of lending to a foreign country and the prospects for developing and maintaining U.S. markets in that country. A recommendation is developed in committee. According to USDA officials, decisions about which countries should receive credit guarantees and in what amounts may be made by the Assistant General Sales Manager or the General Sales Manager, but sometimes the decisions are elevated to the level of the Under Secretary for International Affairs and Commodity Programs. The National Advisory Council (NAC) on International Monetary and Financial Policies also provides advice to USDA on GSM credit guarantee actions. USDA sends all GSM-102/103 proposals to NAC for review. Proposals are submitted after USDA has conducted its own risk analysis on a country in question. NAC's recommendations are only advisory in nature and do not necessarily reflect fiscal risk. However, we were told that USDA does not typically challenge NAC recommendations unless the Treasury or the State Department is not in the majority when a vote on a recommendation is taken. The Ranking Minority Member of the Senate Committee on Agriculture, Nutrition, and Forestry asked us to assess the creditworthiness of the FSU and its 15 successor states. The FSU's 15 republics became independent states between August and December 1991, as part of the historic change that swept across the Soviet Union in the late 1980s and early 1990s. This change culminated in the collapse of the Soviet empire and the demise of the Communist Party. The successor states are Armenia, Azerbaijan, Belarus, Estonia, Georgia, Kazakhstan, Kyrgyzstan, Latvia, Lithuania, Moldova, Russia, Tajikistan, Turkmenistan, Ukraine, and Uzbekistan. (See fig. 1.1 for a map showing the location of the states.)
We analyzed the creditworthiness of the FSU and its 15 successor states from a variety of perspectives, including debt burden, gross financial requirements, liquidity, secondary market valuations of FSU debt, and country risk analyses. In addition, we (1) considered the general economic and political environment in the FSU and its successor states; (2) reviewed how the Soviet debt crisis developed and the relationship between debt problems, on the one hand, and economic reform and creditworthiness on the other; (3) examined how USDA assessments of creditworthiness and market considerations affected USDA's decisions on providing the FSU/successor states with credit guarantees; and (4) estimated the exposure of the GSM-102 portfolio to default by the FSU and its successor states. To assess the creditworthiness of the FSU and its successor states, we (1) analyzed their debt burden and liquidity situations, using historical and forecast data; (2) considered the importance of arrears, debt relief, and International Monetary Fund (IMF) arrangements as measures of creditworthiness; (3) applied USDA/TEID criteria for measuring the relative creditworthiness of countries to the successor states; (4) reviewed secondary market prices of FSU loans and bonds; and (5) analyzed several country risk ratings of the creditworthiness of the FSU and its successor states, including preparing a composite rating for each of the states. To examine the role of U.S. agricultural exports to the FSU and its successor states and the use of GSM-102 credit guarantees to promote such trade, we analyzed USDA data and related information on U.S. agricultural exports generally and GSM export credit guarantees more specifically. To assess the development of the Soviet debt crisis, we reviewed information on debt-servicing problems and efforts by the Group of Seven (G-7) industrialized nations and international institutions to provide financial assistance to help alleviate those problems. In reviewing the economic and political situation in the FSU and its successor states, we examined data and information on economic conditions and political events during recent years and considered the views of a number of experts and U.S. government agencies. To assess the exposure of the GSM-102 portfolio to default by the FSU and its successor states, we analyzed GSM-102 data on outstanding principal owed by all GSM-102 recipients, including weighting the data according to country risk evaluations of the creditworthiness of the recipients. In addition, we used data on secondary market prices of FSU loans and country risk ratings to estimate the risk of default on external debt of the successor states. The secondary market expresses the value of a country's loan as a percentage of its face value. The extent to which a loan's value is discounted in the secondary market indicates how the financial market assesses the risk of default. We estimated the default risk by subtracting a country's secondary market price, expressed as a percentage of face value, from 100 and dividing the result by 100. A similar technique was used for country risk ratings.
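As a concrete illustration of this technique, the sketch below converts a secondary market price into an estimated default risk. The 30-percent price is a hypothetical example, not an actual quotation for FSU debt:

    # Default risk inferred from a secondary market price, where the price
    # is quoted as a percentage of the loan's face value.
    def default_risk(price_pct_of_face: float) -> float:
        """Estimated risk of default: (100 - price) / 100."""
        return (100.0 - price_pct_of_face) / 100.0

    # A loan trading at a hypothetical 30 percent of face value implies an
    # estimated default risk of 0.70.
    print(default_risk(30.0))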
In conducting our review, we interviewed representatives of the U.S. Departments of Agriculture, State, and the Treasury. We also conducted field work in five of the FSU's successor states (Belarus, Kazakhstan, Russia, Ukraine, and Uzbekistan) and in Paris, Bonn, and Brussels. In the latter three cities, we met with representatives of the French and German governments and the European Union. We also reviewed U.S. legislation relating to GSM-102/103 programs, as well as other government documents and documents of the World Bank and IMF, and private sector ratings of country risk and creditworthiness. In addition, we reviewed information on violent conflicts that have affected Russia and other successor states (see app. I) and compared the debt burden of countries to debt payment problems, IMF arrangements, and debt relief agreements (see app. II). We obtained comments from the Department of Agriculture on a draft of this report (see app. III). We summarize and evaluate most of USDA's comments in chapter 6, and some comments are addressed directly in other chapters of this report. We did our work between April 1992 and December 1994 in accordance with generally accepted government auditing standards. The FSU has been a major purchaser of U.S. bulk agricultural exports since 1972; until 1991, it usually made such purchases with cash. However, owing to its increasing financial difficulties (see ch. 3), in the latter part of 1990 the Soviets sought export credits to finance their imports of commodities and food products. The United States responded positively. Between December 1990 and September 30, 1993, the United States offered to provide up to $5.97 billion in GSM-102 export credit guarantees to the FSU and those successor states that qualified for the program. As of September 30, 1993, $5.02 billion had been provided, $0.949 billion was no longer available to the FSU, and $3.85 billion was still owed to the United States on FSU credit-guaranteed purchases. As a result of these sizable guarantees, the GSM-102 program has become heavily exposed to default by Russia, which has undertaken the responsibility as guarantor of the debt of the FSU. Guaranteed sales to the FSU and/or its successor states accounted for 38 percent of all GSM sales in fiscal year 1991 and 43 percent in fiscal year 1992 (including Russia and Ukraine). The FSU and its successor states now hold the largest portion of all outstanding GSM-102/103 loan guarantees. In late November 1992, Russia began missing payments due on GSM-102 debt for the FSU. By the end of September 1993, Russian defaults on FSU and Russian debt totaled nearly $1.1 billion. At that time, the United States agreed to reschedule $1.1 billion of GSM-102 debt. In January 1994, Russia again fell into default on GSM-102 loans. By early June 1994, the United States had agreed to reschedule another $882 million. As discussed in our 1993 report, the FSU was the world's largest producer of wheat and one of the world's largest producers of grains overall. It was also a major producer of potatoes, sugar beets, cotton, and sunflowers. Despite its vast production of crops, however, the FSU was a net importer of food. Its imports averaged just under $20 billion per year, about half of which was for grains and sugar. The need for extensive imports continued following the dissolution of the Soviet Union in December 1991. We found that extensive food imports were necessary because the states had been unable to efficiently harvest, store, process, and distribute much of what was grown. Difficulties associated with each of these steps of the food production system combined to create huge losses due to spoilage after crops were initially produced. For example, approximately 25 to 30 percent of grain and 30 to 50 percent of potatoes and vegetables produced in the FSU and its successor states were lost annually because of these problems.
Moreover, in absolute terms, aggregate annual grain loss of the successor states (excluding the Baltic states) was, on average, about 30 million to 40 million metric tons (mmt), which was roughly equal to the size of their aggregate annual grain imports. After 1985, the Soviet Union announced a series of initiatives to reform agriculture, ranging from making changes in land ownership laws to paying farmers hard currency for high-quality crops sold to the state in excess of original state-contracted amounts. However, for a variety of reasons, these initiatives did not produce substantive results. By 1991, the availability of basic food staples, such as potatoes, meat, and bread, was far worse than before agricultural reforms were initiated. As discussed in our 1993 report, following the dissolution of the Soviet Union, agricultural reforms in the successor states proceeded slowly, with varied progress among the states. Reforms considered or undertaken in these states included the liberalizing of food prices; restructuring of state and collective farms; and privatizing of food production, wholesale and retail trade, processing, storage, and transport. However, agricultural reform has been slow, in part because successor state governments fear that rapid reform might lead to significant production shortfalls and unemployment. In turn, such disruptions could cause food shortages and discontent that would threaten the political and social stability needed by these governments to proceed with reforms. Our report also found that agricultural reforms have been impeded by (1) bureaucratic resistance from some persons with vested interests in the old central command system and (2) fear of change by workers on state and collective farms. Since the early 1970s, the FSU was a major customer for U.S. bulk agricultural commodities, although U.S. exports fluctuated greatly from year to year. The trade relationship was fostered by a series of U.S.-Soviet long-term bilateral grain agreements. Annual sales varied in response to a variety of factors, including fluctuations in Soviet agricultural production, changes in the nature of the U.S.-Soviet political relationship, and competition from other exporters of agricultural commodities. Table 2.1 shows U.S. agricultural exports to the FSU and its successor states as a percentage of U.S. agricultural exports to the world, between 1976 and 1993 (percentages were calculated on the basis of the dollar value of the exports). For the entire period, exports to the FSU and its successor states accounted for 5.5 percent (on an average annual basis) of the total value of U.S. agricultural exports to the world. For the 1991-93 period, the average annual share was 5.3 percent. Grain and soybeans/soybean products accounted for the large majority of U.S. agricultural exports to the FSU and its successor states. Table 2.2 shows U.S. corn, wheat, and soybean/soybean product exports to the FSU and its successor states as a percentage of the total value of such commodity exports to the world between 1976 and 1993. The table shows that corn and wheat exports generally accounted for 10 percent or more of total U.S. exports of the same commodities. Beginning in the early 1970s, the FSU became more or less a permanent grain importer. Generally, through 1990, it was a cash customer when it purchased U.S. agricultural commodities.
However, during the last months of 1990, the Soviets sought to line up export credits to purchase needed food from countries that traditionally exported agricultural commodities to the Soviet Union. The Soviets needed such credits because they had a limited amount of hard currency reserves available to purchase food and other items and to finance their continuing trade deficit with nonsocialist countries. On December 12, 1990, the Secretary of Agriculture announced that $1 billion in GSM-102 export credit guarantees would be made available for agricultural sales to the Soviet Union. (January 1991 was the first month in which guarantees were allocated for actual commodity purchases.) The decision was made during a time of food shortages in the Soviet Union. The U.S. response to the Soviet Union's request for assistance took place in the context of growing western efforts to help the Soviet Union. As of December 1990, Canada, France, Germany, Italy, and Spain had announced about $3 billion in agricultural credits to the Soviet Union for purchases in the following months. The United States had also stated its support for the Soviet President's "perestroika" (restructuring) measures and fundamental economic reform objectives. When the GSM credits were announced in December 1990, the White House Press Secretary said they reflected the administration's desire to promote a continued positive evolution in the U.S.-Soviet relationship. In April 1991, the Soviet Union requested another $1.5 billion in GSM-102 export credit guarantees for 1991. However, the request occurred at a time of growing concern about Soviet creditworthiness. In hearings held in May 1991, the Deputy Under Secretary for International Affairs and Commodity Programs indicated to a congressional committee that under the 1985 Farm Bill it was quite possible to consider the creditworthiness of a country suspect but still proceed with a full GSM program for the country because market development considerations outweighed the financial risk. However, he said, USDA believed that such an approach was no longer possible and that as a result, USDA's flexibility in operating the credit guarantee programs had been considerably limited. As discussed in chapter 1, the 1990 Farm Bill added the requirement that the Secretary of Agriculture may not issue credit guarantees in connection with sales of agricultural commodities to any country that he determines cannot adequately service the debt associated with such a sale. During May 1991, there was extensive debate in the Senate about whether to provide additional credit guarantees to the Soviet Union. The debate focused on (1) whether the Soviet Union was creditworthy, (2) what factors should be considered in assessing a country's ability to repay GSM debt, (3) whether credit guarantees should be provided or denied to the Soviet Union because of foreign policy considerations, and (4) what the impact on the United States and U.S. farmers would be if guarantees were not provided and sales not made. On May 15, 1991, the Senate approved (by a vote of 70 to 28) a nonbinding resolution (S. Res. 117) that said that as the administration evaluates the Soviet request, it should consider in its evaluation such factors as whether the Soviets were able to service current debt and whether the absence of U.S. guarantees would jeopardize U.S. market development.
Similarly, on May 31, 1991, 37 members of the House Committee on Agriculture sent the Secretary of Agriculture a letter that said the Secretary should consider, among other factors, the U.S. ability to access, maintain, and develop markets for U.S. agricultural products, including an assessment of whether the absence of U.S. credit guarantees would jeopardize the ability to access and develop such markets. In June 1991, the President approved the Soviet request for $1.5 billion in additional credit guarantees. Subsequently, in July 1991, USDA's Assistant General Sales Manager told us that the Senate resolution and letters that USDA had received from Members of Congress had provided helpful guidance on how to interpret the requirements of the 1990 Farm Bill. He said that without that guidance, it is not clear whether USDA would have made a determination that the Soviet Union was creditworthy. The official noted that the entire GSM program was predicated on making credit available beyond what the private sector would provide. During the summer of 1991, increasing apprehension about the Soviet Union's creditworthiness affected its ability to obtain financing for U.S. agricultural commodities—even for purchases backed by U.S. credit guarantees. According to an official of Vnesheconombank (VEB), the Soviet Bank for Foreign Economic Affairs, U.S. commercial banks no longer felt comfortable with the 98-percent guarantee coverage that USDA typically offers. The banks, he said, wanted CCC to provide 100-percent coverage of the principal or have the exporters absorb some of the risk. On September 24, 1991, CCC agreed to guarantee 100 percent of principal. Between then and the end of fiscal year 1993, all of the credit guarantees to the FSU, Russia, and Ukraine included 100-percent coverage of the principal. In comparison, none of the other countries that received GSM-102 credit guarantees from fiscal year 1991 through March 4 of fiscal year 1994 were provided more than 98-percent coverage. In November 1991, the White House announced that another $1.25 billion in credit guarantees was being made available, along with $250 million in food aid and technical assistance. The White House said that the President's decision would help the Soviet Union, its republics, and their peoples cope with immediate food shortages and aid in the longer-term restructuring of the country's food distribution system. However, owing to concern about whether the Soviet Union would continue to exist, this commitment was not made until after Russia and the other republics agreed to "joint and several liability" for the debts of the Soviet Union. In December 1991, the Soviet Union did dissolve. At that time, a considerable portion of the November GSM credit guarantees had not been allocated. CCC announced that the unused guarantees (about $650 million) would be available for sale to any of the 12 republics. However, all sales for the unused guarantees went to the 12 states of the FSU as a group. In April 1992, the President announced that another $1.1 billion in credit guarantees would be made available. However, because several of the successor states (particularly Russia and Ukraine) wanted separate programs, USDA indicated that the guarantees would henceforth be made on a bilateral basis. Of the $1.1 billion announced at that time, USDA said that $600 million was being designated for Russia. The remaining $500 million would be available for Ukraine and the other states provided they met GSM-102 program qualifications.
Only Ukraine received any of this commitment—$109 million. According to a USDA official, the remaining $390 million is no longer available. In September 1992, USDA announced that $900 million in new guarantees was being made available to Russia for fiscal year 1993. One month later, USDA announced a commitment of $200 million for Ukraine for fiscal year 1993. Between January and September 1993, USDA announced small amounts of credit guarantees for Estonia ($5 million) and Uzbekistan ($15 million). Table 2.3 summarizes information on all GSM-102 commitments made to the FSU and its successor states through September 30, 1993. As the table shows, U.S.-announced guarantees totaled $5.97 billion. As of September 30, 1993, $5.021 billion had been registered for export. Of this, the FSU received $3.74 billion, Russia $1.06 billion, Ukraine $199 million, and two other states a total of $20 million. The remaining $949 million in announced guarantees is no longer available because the period of time during which the guarantees could have been registered for export has expired. During fiscal year 1994, USDA announced the availability of GSM-102 credit guarantees for successor states as follows: Kazakhstan, $15 million; Ukraine, $40 million; Turkmenistan, $10 million; and Uzbekistan, $15 million. In each of these cases, CCC agreed to cover only 98 percent of the principal. For Kazakhstan, Turkmenistan, and Uzbekistan, USDA required that principal repayments be made every 6 months. For Ukraine, however, USDA indicated principal repayments could be made annually. Although USDA said it would make up to $40 million available for Ukraine, only $20 million was authorized during the fiscal year. In addition to the above states, in September 1994, USDA announced it was authorizing $20 million in GSM-102 credit guarantees for sales to private sector buyers in Russia for fiscal year 1994. The guarantees offered to Russia were significantly different from guarantees previously made available to the FSU and its successor states, including Russia. Under the terms of the announcement, the guarantees could be effective for up to 90 days only rather than up to 3 years. The offer followed a long period during which Russia did not receive any guarantees as a result of substantial defaults on its GSM-102 credit-guaranteed loan payments (see later discussion in this chapter). Total GSM-102 guarantees offered to the successor states in fiscal year 1994 equaled $100 million, a small fraction of the amounts offered in fiscal years 1991, 1992, and 1993. (See table 2.3 and the accompanying discussion.) In commenting on a draft of this report, USDA said it is important to clarify that there were no food shortages in the sense that the USSR produced too little food. Before the breakup of the USSR, USDA said, shortages and long lines existed in state stores, where prices were controlled, resulting in "surplus demand" at those locations. No shortages existed in the farmer markets, where prices were relatively freely set and substantially higher than state-set prices. After the breakup of the Soviet Union, USDA said, price liberalization led to higher prices, increasing the availability of food by decreasing surplus demand. The primary problems facing the FSU in terms of food supply, USDA said, are disruptions due to military conflict and the reduced purchasing power of consumers, which has put some groups (such as the elderly and unemployed) at risk.
Humanitarian assistance could address, and in many cases has addressed, these problems. USDA also said that it would not characterize 1990 as a time of food shortages. It noted that the 1990 grain crop was one of the largest in history and reiterated that the food problems in the Soviet Union were a function of controlled prices and surplus demand, not a "shortage of supply." We do not disagree with USDA's description of what caused the food problems in the FSU but believe that our use of the term "food shortages" to characterize the situation in the FSU is consistent with what both USDA and others were saying at the time. For example, on December 12, 1990, the White House issued a fact sheet that said that the GSM-102 export credit guarantees being made available at that time were a form of food assistance that "will help the Soviet authorities address current food shortages." A Congressional Research Service report of December 3, 1990, concluded that even with bumper grain crops in the Soviet Union in the fall, food losses due to harvesting and spoilage problems and a breakdown in food processing and distribution systems had resulted in shortages of food products, particularly in the larger Soviet cities and in remote areas far from the major food producing regions. Similarly, a May 1991 USDA report on Soviet agriculture used the term "food shortages" several times in describing food problems in the Soviet Union. Since the beginning of 1991, most U.S. agricultural exports to the FSU and its successor states have been financed through GSM-102 credit guarantees. For example, the United States exported $6.6 billion in agricultural commodities to the FSU during 1991 through 1993. Between January 1991 and September 30, 1993, CCC registered for export $5 billion in GSM-102 credit-guaranteed food. Table 2.4 shows that the principal GSM-102 commodity exports to the FSU, Russia, and Ukraine have been grains (i.e., yellow corn and wheat) and soybeans and soybean products (i.e., soybean meal and soybean oil). The table also shows that the GSM commodity exports to the three countries accounted for a considerable portion of total GSM-102 and 103 commodity exports to the world during fiscal years 1991 and 1992. We examined USDA documents recording the results of meetings of USDA's Reconciliation Committee on whether to provide credit guarantees to the FSU and its successor states. We found that the committee assessed the FSU and, subsequently, Russia and Ukraine as very risky from a creditworthiness standpoint during the 2-year period when USDA's CCC made available more than $5 billion in credit guarantees to these countries. We also found that there was considerable disagreement between those participants whose primary concern was to assess financial risk and those responsible for assessing market opportunities. The committee sought to develop recommendations that would reconcile or balance the financial risks and market development opportunities. The Assistant General Sales Manager or the Acting General Sales Manager chaired the committee. Other participants included representatives from several offices under USDA's Under Secretary for International Affairs and Commodity Programs.
These included (1) the Financial Management Division of the Agricultural Stabilization and Conservation Service (ASCS); (2) the CCC Operations Division (CCCD); (3) the Commodity and Marketing Programs (C&MP), including its Grain and Feeds Division (G&FD); (4) the Program Development Division (PDD); and (5) the Trade and Economic Information Division. The Reconciliation Committee met on August 21, 1990, to consider a recommendation to provide GSM-102 export credit guarantees to the FSU. Members presented very strong arguments for both a $2-billion program based on market development opportunities and a limited $500-million (or less) program based on financial and political risks. The Reconciliation Committee Chairman suggested a $1-billion program as a good balance between market development and risk. However, given strongly opposing views, the committee decided to further assess the situation. The committee met again on September 19, 1990. TEID, responsible for assessing country risk, reported that there was continued deterioration in the Soviet economic situation and a further unraveling of the political situation. It recommended a maximum exposure of no more than $500 million in credit guarantees for the Soviet Union. ASCS, responsible for establishing limits on the amount of credit guarantees that could be handled by Soviet banks, said that on the basis of bank financial risks, macroeconomic issues, and political problems, it was strongly against a GSM program higher than $280 million. It noted a rapid and major decline in Soviet creditworthiness, reports of military movements outside of Moscow, and open discussion of a coup in the Soviet Union. Both TEID and ASCS cited a plan being circulated in the Soviet Union for changing the balance of power between central authorities and the republics, and they expressed concern about who would be responsible for repayment if republics split from the Soviet Union. ASCS said that if the Soviet plan were implemented, the Soviet banking system would be decentralized and that VEB, the only Soviet bank authorized to issue letters of credit to obtain credit guarantees, might not have sufficient hard currency to repay its obligations. CCCD asked whether the committee could even consider anything over $500 million, given the negative financial information presented by TEID and ASCS. The Reconciliation Committee Chairman agreed that based on financial information only a smaller program was warranted. However, he reminded the participants that the purpose of the committee was not only to consider financial risks but also to reconcile these risks with market development opportunities. He said the committee had the responsibility of sending forward a recommendation that was balanced between risks and market development. C&MP, which had urged at least a $2-billion program for the Soviet Union in August 1990, again urged a large program to meet many potential market development opportunities. It said (1) the Soviet Union represented the largest market development opportunity in the world and also had the greatest future potential, (2) there were many competitors lined up to extend credit to the Soviets, (3) any loss in U.S. market share would be severely detrimental to U.S. agricultural business and producers, and (4) loss of the Soviet market would result in higher domestic program costs for CCC now and in future years. In addition, C&MP said the Soviet Union’s ability to pay should not be based solely on current market information. 
It noted that the country had the largest untapped resource base in the world and that even though its financial condition was bad, it was still paying its agricultural debts to the United States. In spite of the wide differences of view, the committee recommended a $1.25-billion program for the Soviet Union—subject to certain conditions, such as releasing the guarantees in segments and securing a credit guarantee assurance letter from the Soviet federal government. In addition, the committee said it wanted reviewers of its recommendation to know that there was extreme risk in its proposal and a substantial chance that CCC would have to make outlays. The committee said it felt that CCC should accept a degree of country risk that it would refuse for another country because (1) the Soviet Union was our largest export market for grains, and the health of the American farm community was directly dependent on maintaining market share in that market; and (2) in the event of a loss of the Soviet market, CCC would also make substantial outlays under domestic programs. According to documents provided to us by USDA, the Reconciliation Committee did not meet again to discuss the Soviet situation between the time of its September recommendation and December 12, 1990. On the latter date, the Secretary of Agriculture announced that the President had waived the emigration requirements of the Jackson-Vanik amendment to the 1974 Trade Act so that the Soviet Union could purchase U.S. agricultural commodities using USDA credit guarantees. The Secretary said $1 billion in guarantees would be made available to fulfill a request made by the Soviet government. In May 1991 the Reconciliation Committee met twice to consider a Soviet request for an additional $1.5 billion in credit guarantees. In a May 2 meeting, TEID advised the members that the Soviet economic and political situation was rapidly deteriorating and questioned who would repay Soviet loans if the Soviet Union did not survive. TEID said that on the basis of a "best-case" scenario, it could propose no more than $300 million in added credit guarantees. PDD said it favored postponing any further credits. Among some of the reasons it offered were that (1) section 202(f) of the 1990 Farm Bill stipulated that credit should not be extended to countries that cannot adequately service the debt, (2) CCC exposure was already over CCC's guideline, (3) Soviet instability seemed likely to continue, (4) foreign exchange reserves would likely decline, and (5) there was uncertainty about implementation of the Jackson-Vanik amendment. ASCS recommended no new credits. In contrast, C&MP told the committee that the Soviets could use all of the $1.5 billion in credit guarantees and that if the United States did not extend credit, it would be out of the market. C&MP warned that the U.S. reputation as a reliable supplier would suffer, and long-term trade repercussions would follow. G&FD questioned whether there was not room to work out a solution. It noted that the Soviet Union had huge resources and there had been no occasion of its delaying payment. G&FD warned that the President and the Secretary of Agriculture might be embarrassed if guarantees were not extended, since the Soviet state had an obligation to feed its people and the credit guarantees would be used to purchase grain, a fundamental staple. The meeting ended with the committee recommending that no further credit be extended to the Soviet Union for the time being.
However, it said a reexamination should be initiated if, for example, Congress acted to clarify the interpretation of section 202(f) of the 1990 Farm Bill. As previously discussed, during May 1991 the Senate approved a nonbinding resolution, and 37 members of the House sent the Secretary of Agriculture a letter indicating that USDA should assess the impact on U.S. commodity exports if additional credit guarantees were not extended to the Soviet Union. On May 28, 1991, the Reconciliation Committee met to review its previous position and to be briefed by the Chairman and the General Sales Manager on the results of a presidential mission that had visited the Soviet Union between May 17 and 26, 1991, and of which the Chairman and the manager were members. The Chairman reported there were no real signs of hunger or food shortages in the Soviet Union. The General Sales Manager told the committee that Soviet officials had advised USDA that they could not provide firm financial figures relative to Soviet creditworthiness. As a result, he said, Soviet figures had lost their credibility, and a Soviet request for additional credits would have to be viewed as a political rather than commercial request. TEID agreed. Although the committee noted that the Soviet Union had passed a new emigration law that would make the Jackson-Vanik requirement less of a problem in the future, the committee concluded that there was no basis for changing its previous recommendation not to extend further credit guarantees to the Soviet Union. The committee's recommendation notwithstanding, on June 11, 1991, the President announced his decision to extend another $1.5 billion in loan guarantees to the Soviet Union. According to the White House Press Secretary, the President's assessment of the Soviet Union's creditworthiness was based on the following: (1) its record of never defaulting on an official loan involving the United States; (2) its positive repayment history on several hundred million dollars in loans through the 1970s, primarily from the U.S. Export-Import Bank; (3) the judgment of the USDA team that had visited the Soviet Union in May; (4) the subsequent review by the Secretary of Agriculture; (5) the administration's discussions with Soviet officials; and (6) the commitment of President Gorbachev to move toward a market economy. In July 1991, the Assistant General Sales Manager told us that food was a priority item for the Soviet government, since without adequate food supplies political stability could be threatened. The government had an incentive to stay current on GSM debt payments, he said, because Soviet officials knew that if the government did not remain current, the GSM-102 program would be suspended. The Reconciliation Committee met on August 12, 1992, to discuss a possible fiscal year 1993 GSM-102 program for Russia. PDD recommended a $1.2-billion program. It noted that Russia had not missed a payment on any of the FSU's GSM-102 debt and that VEB and Russian officials had continually said Russia would honor all of its GSM-102 obligations. C&MP said Russia's commodity import needs greatly exceeded the $1.2-billion recommendation and that GSM-102 was essential to maintain the U.S. share of the Russian market. It warned that U.S. exports to Russia would be needed to help offset U.S. farm program costs. TEID objected to the proposed $1.2-billion program. It advised the committee that CCC was vastly overexposed and at substantial risk of realizing large losses on the FSU and Russian programs.
TEID said that Russia's ability and commitment to resume full debt servicing in fiscal year 1993 were very doubtful and that FSU debt was likely to be rescheduled following Russian negotiation of a standby agreement with the IMF. TEID said it was impossible to establish a meaningful debt exposure guideline for additional credits, since Russia was not creditworthy for the size of its existing program. TEID recommended that Russia be extended other assistance of a more concessional nature. ASCS also objected to the proposed program level due to the substantial credit risk. It noted that the debt exposure level for VEB was well over the established bank limit now set by ASCS at $130 million, that VEB continued to fall behind on its interest payments to other creditors, and that VEB had been late on a number of GSM debt payments to banks. In addition, responsibility for the FSU debt on the part of each former republic had yet to be settled. In the absence of a committee consensus, on August 28, 1992, the Committee Chairman recommended a $1.2-billion Russian program to the Acting Under Secretary for International Affairs and Commodity Programs. The Chairman detailed the differing views of the committee members. He noted that the proposed program might slightly reduce CCC's total exposure and indicated that if there were no fiscal year 1993 program for Russia, there would be a highly damaging impact on farm prices and resulting outlays under U.S. domestic commodity support programs. On August 26, 1992, the Reconciliation Committee met to discuss a fiscal year 1993 GSM-102 program for Ukraine. PDD recognized continued deterioration of the Ukrainian economy but recommended a $200-million program. It said GSM-102 financing would be needed to maintain U.S. market share and that Ukrainian officials had stated they would honor all of their GSM-102 obligations. C&MP estimated Ukraine's credit needs as closer to $300 million and said that the market would provide significant U.S. sales opportunities well into the latter half of the 1990s. In contrast, TEID recommended against a fiscal year 1993 program. It found Ukraine overexposed based on its fiscal year 1992 program and its share of repayments of the FSU program. It warned that unless Ukraine's record significantly improved, Ukraine would not obtain sufficient international financing and foreign exchange earnings to pay for its imports and service its foreign debts. ASCS, which had established a bank limit of $0 for the State Export-Import Bank of Ukraine, also recommended against any further CCC credits. In the absence of a committee consensus, in September 1992 the Chairman recommended a $150-million Ukrainian program to the Acting Under Secretary. The Chairman said he was again trying to balance financial risk against market development opportunities. He said providing credits would mark the first time that USDA had made credit guarantees available to a country whose current risk rating was below grade (i.e., not creditworthy), but he also said the proposed program would represent a minimal presence in a major market where the United States had a strong interest. On September 14, 1992, USDA announced that during fiscal year 1993 it would provide Russia with $900 million in GSM-102 credit guarantees and $250 million in food aid. On October 19, 1992, USDA announced an allocation of $200 million in export credit guarantees to Ukraine for fiscal year 1993.
As we have previously reported, the progress of agricultural reform in the successor states might be hindered by the provision of export credit guarantees by the United States and other countries. Credit guarantees allow the successor states to continue to import billions of dollars of foreign grain and other food commodities. Because these commodities are generally purchased, processed, and distributed by state-owned enterprises, these structures are likely to survive longer as state monopolies than might otherwise be the case, although we were unable to quantify this effect. It is these inefficient state enterprises that successor state reformers seek to privatize or replace with alternative, nonstate structures, such as commodity exchanges and private food processors, distributors, and wholesalers. In addition, credit guarantee-assisted food imports might hinder domestic food production and the efficient processing and marketing of this food by keeping down prices offered to successor state farmers and food processors and distributors. At the same time, however, a number of successor state officials we contacted felt that credit guarantee-assisted food imports had benefited the overall economic reform process in the states more generally. According to these officials, the food imports helped to preclude food shortages and thereby contributed to the political and social stability needed to advance the overall economic reform process. In commenting on a draft of this report, USDA indicated that credit-guaranteed assistance has adversely affected reform in the FSU. According to USDA, although widespread dislocation in the food supply never occurred, the West continued to provide assistance (credits and food aid) to the FSU, which accepted it to the likely detriment of economic reforms (increased debt and continued state control of agricultural marketing that lowers productivity, increases waste, and possibly undercuts domestic production). According to USDA, FSU leaders figured it was in their best interest to accept western assistance since repayment, if any, would be delayed. USDA also noted that before its breakup, the Soviet Union also imported large amounts of grain rather than pay farmers more to increase domestic grain procurement and to reduce waste. In the space of 2 years, the GSM-102 program quickly became heavily exposed to debt of the FSU and its successor states. As table 1.1 showed, in fiscal year 1990 there were no GSM sales to the FSU. In fiscal year 1991, GSM guaranteed sales to the FSU were $1.9 billion, which equaled 38 percent of all GSM-102 sales that year. In fiscal year 1992, the FSU, Russia, and Ukraine together accounted for $2.6 billion, or 43 percent of all GSM-102 guaranteed sales. Not surprisingly, the FSU and its successor states accounted for the single largest portion of all outstanding loan guarantees from the GSM-102/103 programs combined. As of the end of January 1993, CCC had $8.8 billion in outstanding loan guarantees (principal only) from these programs. Of this amount, $3.6 billion, or 40.9 percent, was accounted for by guarantees provided to the FSU, Russia, and Ukraine. When the outstanding principal owed by GSM recipient countries is weighted by the risk of countries defaulting on their debt payments, we estimate that the exposure to default by the FSU, Russia, and Ukraine is even greater. See table 5.12 and accompanying discussion in chapter 5.
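A minimal sketch of this weighting approach follows. The outstanding principal figures are from this chapter; the default-risk weights are hypothetical placeholders, not the country risk evaluations underlying table 5.12:

    # Risk-weighted exposure: outstanding principal times estimated risk of
    # default. Principal figures (end of January 1993) are from the report;
    # the risk weights are hypothetical placeholders.
    outstanding = {"FSU/Russia/Ukraine": 3.6e9, "all other recipients": 5.2e9}
    risk_weight = {"FSU/Russia/Ukraine": 0.60, "all other recipients": 0.25}

    weighted = {c: outstanding[c] * risk_weight[c] for c in outstanding}

    unweighted_share = outstanding["FSU/Russia/Ukraine"] / sum(outstanding.values())
    weighted_share = weighted["FSU/Russia/Ukraine"] / sum(weighted.values())
    print(f"Unweighted share:    {unweighted_share:.1%}")   # about 40.9 percent
    print(f"Risk-weighted share: {weighted_share:.1%}")     # larger under these weights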
Table 2.5 provides the repayment schedule for the FSU's GSM-102 loans as of the end of February 1993. As the table shows, combined principal and interest payments due from the FSU, Russia, and Ukraine for 1993 equaled nearly $1.65 billion; in 1994, $1.62 billion; and in 1995, about $0.72 billion. The figures do not reflect, as discussed below, USDA's April 1993 and June 1994 agreements to reschedule a considerable amount of FSU GSM-102 debt. Since Russia is the only successor state making payments on GSM-102 debt for the FSU and since Russia accounts for nearly all credit guarantees committed since the program was converted to a bilateral mode, the GSM program is particularly vulnerable if Russia is not able or willing to make payments on GSM debt. As table 2.5 shows, Russia began defaulting on scheduled payments for both the FSU and Russia itself in the fourth quarter of 1992. As a result of these defaults, USDA suspended Russia's participation in the GSM-102 program. By March 31, 1993, FSU and Russian defaults totaled nearly $648 million (see table 2.6). On April 2, 1993, the United States reached a provisional agreement with Russia to reschedule approximately $1.1 billion of GSM-102 debt. The agreement covered FSU principal and interest arrears, as well as payments coming due in calendar year 1993, on export contracts made in 1991. The rescheduled debt would be repaid over 7 years, with a 2-year grace period on principal repayments. Not covered by the rescheduling were $287 million in FSU arrears accumulated through March 31, 1993. Under the proposed agreement, Russia was to eliminate the arrears by June 30, 1993, and stay current on GSM-102 payments as they came due. However, rather than eliminating the arrears, Russian defaults increased. By the end of September 1993, net defaults totaled nearly $1.13 billion, and the Commodity Credit Corporation had paid out $1.1 billion in net claims to U.S. banks that had made the loans (see table 2.6). Nonetheless, on September 30, 1993, the United States and Russia concluded a debt rescheduling agreement along the lines of the April proposal. Under the agreement, Russia accepted responsibility for all of the GSM-102 debt of the FSU. The agreement provided for rescheduling an estimated $1.07 billion of GSM-102 debt, including a considerable amount of the arrears. In addition, Russia agreed to repay approximately $444 million in unrescheduled arrears in three installments by the end of 1993. Table 2.7 shows the schedule for Russia's repayment of FSU and Russian GSM-102 debt following the September 1993 debt rescheduling. As the table shows, the debt was to be fully repaid by the year 2000. In addition, Russia was required to continue to make payments, as they come due, for FSU and Russian GSM-102 export contracts made after 1991. According to USDA, a determination on resumption of the GSM-102 program for Russia could not be made until the debt issues were fully resolved and all arrears were eliminated. Russia did pay the arrears by the end of 1993. However, during early 1994, Russia again fell into default on GSM-102 debt. For example, during the first 70 days of 1994, Russia was in default about 51 days. In February, USDA agreed to reschedule $344 million in 1991 payments coming due during the January 1 through April 30, 1994, time period. On June 4, 1994, USDA agreed to reschedule another $517 million in payments due during the May 1 through December 31, 1994, period, as well as $22 million in deferred interest. Repayment terms for the principal included a 2.75-year grace period followed by an 8-year repayment period. The deferred interest on the rescheduling agreement is to be repaid over a 5-year period. Russia was still required to pay approximately $360 million owed to CCC and U.S. banks for the January through December 1994 period.
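The shape of such a rescheduled repayment stream can be sketched briefly. The sketch below assumes equal semiannual principal installments on the $517 million rescheduled in June 1994; the installment pattern and payment dates are our assumptions, since the agreement's actual payment schedule is not described in this report:

    # Hypothetical repayment profile: a 2.75-year grace period on principal
    # followed by an 8-year repayment period, with equal semiannual
    # installments (an assumption made for illustration only).
    rescheduled_principal = 517e6
    grace_years = 2.75
    repayment_years = 8
    n_installments = repayment_years * 2            # semiannual (assumed)
    per_installment = rescheduled_principal / n_installments

    for i in range(1, 4):                           # first three of 16 payments
        due_at = grace_years + 0.5 * i              # years after rescheduling
        print(f"{due_at:.2f} years out: ${per_installment:,.0f}")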
The amount of GSM-102 principal outstanding for Russia subsequent to the June 4 rescheduling agreement was $2.85 billion. This figure included principal amounts, interest, and capitalized interest due under the FSU program and the Russian program. At the time of the June rescheduling, Russia was in arrears, and the rescheduling enabled Russia to become current on those arrears. The rescheduling occurred as part of a broader agreement concluded in Paris between Russia and its official creditors. The creditor countries also agreed to meet with Russia later in 1994 to discuss a longer term and more comprehensive rescheduling to address Russia's severe financial problems. Meanwhile, on June 2, 1994, Ukraine began defaulting on its GSM loan payments. As of August 17, 1994, its defaults totaled about $31.1 million, and CCC had paid $21.6 million for claims made by lenders. According to USDA, providing export credit guarantees to banks willing to extend loans to foreign purchasers of U.S. agricultural commodities increases the demand for U.S. exports. This increase, in turn, results in higher commodity prices for U.S. farmers and lower costs for U.S. government commodity support programs. Proponents of the GSM-102 credit guarantees point out that these reduced program costs offset the risk of default on the guaranteed debt. We reviewed USDA estimates of the cost savings associated with the extension of export credit guarantees to the FSU and its successor states in fiscal years 1991 and 1992. The FSU and its successor states received GSM-102 export credit guarantees for the purchase of U.S. commodities and freight, and they also secured lower prices for certain commodities as a result of USDA Export Enhancement Program (EEP) bonus payments to U.S. exporters. USDA initially provided us with two estimates of savings in commodity support programs associated with extending GSM-102 export credit guarantees to the FSU. The first estimate, which was made in conjunction with a proposed GSM-102 package in the spring of 1991, indicated that if CCC did not provide $1.5 billion in additional export credit guarantees to the FSU between January and July 1, 1991 ($1 billion in guarantees had already been extended between January and March 1991), CCC domestic support payments for wheat, corn, and soybeans could increase between $360 million and $755 million. The higher estimate was arrived at by assuming 100-percent program additionality—that is, that alternative export markets would not exist for GSM-102 guaranteed exports to the FSU. Thus, commodities not sold to the FSU would have to be sold in the U.S. market or added to unsold carryover stocks. For the lower estimate, USDA made two different key assumptions. The first assumption was that 25 percent of the commodities could be exported to other countries. The second assumption was that $100 million of the guarantees for the FSU would be used for high-value products for which USDA does not provide deficiency payments or nonrecourse loans. Therefore, this $100 million in GSM-102 guarantees would have no impact on the costs of USDA's domestic commodity support programs. Both of USDA's estimates deducted the expected cost of EEP bonus payments provided for wheat exports under the proposed GSM sales.
USDA’s second estimate, made in February 1993, assessed changes in support costs for commodity programs if the United States did not export a projected 6 million tons of corn and 6 million tons of wheat to the FSU during 1993 and 1994. According to this estimate, support payments for corn would increase by $499 million and wheat payments by $685 million. Soybean costs were not included in the estimate. The estimate assumed that none of the corn and wheat would be sold into alternative export markets (i.e., 100-percent program additionality). Expected EEP bonus payments, however, were not netted out.

The USDA estimates of increased commodity support costs depend importantly on the assumption that alternative markets would not be generally available if the commodities were not exported to the FSU. USDA did not give us the basis for this assumption. If the commodities in question could have been exported to other nations, the actual changes in farm prices and program savings would have been smaller than USDA estimated. For a variety of reasons, USDA’s assumption of 100-percent additionality is debatable: (1) special features of the GSM-102 program that were made available to the FSU and its successor states could have been attractive if offered to other importing nations; (2) competitor exporting nations may have displaced U.S. exports in other markets; and (3) CCC program costs depend on commodity farm prices that, in turn, are the result of many factors that influence global supply and demand conditions.

As previously noted, countries that participate in the GSM-102 program are able to obtain better interest rates on their credit than would be the case in commercial markets, as they are in effect using the repayment guarantee of the U.S. government to obtain the credit. In addition, most of the guarantees to the FSU and its successor states in fiscal year 1991 and fiscal year 1992 included coverage for 100 percent of the value of the commodities (rather than 98 percent, which is typical for the GSM-102 program). The 100-percent guarantee should also lower borrowing costs to prospective buyers. Also, the GSM-102 program for the FSU and its successor states included guarantees for freight costs. In fiscal years 1991 and 1992, freight coverage equaled nearly $443 million, or about 10 percent of the value of all GSM-102 credit guarantees offered to the FSU and its successor states. The coverage of freight costs meant that each dollar of GSM-102 commitment to the FSU and its successor states supported only 90 cents’ worth of commodity exports. Also, EEP bonus payments to the importing countries for selected commodities lowered the cost of importing these commodities, which in turn should have resulted in additional exports. Total EEP bonus payments for GSM-102 exports to the FSU and the successor states in 1991 and 1992 were about $579 million. We estimated that the combination of freight cost financing and EEP bonus payments alone made the additionality attributable to the GSM program for the FSU and its successor states in fiscal years 1991 and 1992 equal at most to about 77 percent. We believe that if USDA had offered GSM-102 credit guarantees to other potential buyers with similarly generous terms, it is possible that the United States could have found alternative export markets for at least some of the GSM sales that were made to the FSU and its successor states.
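The arithmetic behind the 77-percent bound can be illustrated as follows, treating the freight and EEP components as non-additional portions of the program’s value. Here T, the total value of fiscal year 1991-92 guarantees, is approximated at roughly $4.43 billion, the figure implied if freight coverage was 10 percent of the total; F is freight coverage ($443 million); and E is EEP bonus payments ($579 million). All amounts are in millions of dollars:

  additionality <= (T - F - E) / T
                 = (4,430 - 443 - 579) / 4,430
                 = approximately 0.77, or about 77 percent

If T were instead taken as $5 billion, the same arithmetic would yield a bound of about 80 percent; the result is therefore sensitive to the total assumed.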
The behavior of competitor exporters is also relevant to the question of program additionality. For example, if exporters from other nations responded to the GSM-102 guarantees that were made available to the FSU and its successor states by offering similar incentives to non-FSU importers, the exporters may have displaced potential U.S. exports to these other markets. Displaced U.S. exports would have reduced the additionality resulting from increased exports to the FSU and its successor states. Alternatively, if the United States did not provide the guarantees to the FSU and its successor states but other exporter nations did, global commodity prices would presumably be about the same. As a result, there would be little or no reduction in USDA commodity support payments to farmers.

Actual support costs are also affected by commodity prices. Commodity prices are the result of many factors that influence global supply of and demand for commodities. These include, among others, the overall economic performance of the United States, as well as the global economy; the weather and growing conditions for crops in the United States and competitor nations; purchasing decisions in importing countries; the prices of competing commodities; and the production and consumption subsidies of the United States and its competitors. These factors could cause commodity prices to reach levels that would reduce or eliminate the need for additional commodity support payments to U.S. farmers—even if the United States did not export to the FSU and its successor states. However, the complexity and variety of factors that could influence commodity prices make it difficult to isolate the effect of any single factor. Without explicit and detailed investigation of the behavior of exporters and importers and specification of other macroeconomic and microeconomic variables, discerning the additionality of the GSM-102 program is difficult. In the absence of reliable data on the additionality of GSM-102 exports, we believe that estimated savings in commodity support programs associated with extending GSM-102 export credit guarantees to the FSU and its successor states should consider a range of additionality levels.
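The sketch below illustrates what considering a range of additionality levels could mean in practice. It simply scales USDA's spring 1991 upper-bound savings estimate by an assumed additionality level; the proportional relationship and the choice of levels are illustrative assumptions for demonstration, not USDA's estimating methodology or our audit model. Only the $755 million figure comes from the USDA estimate discussed above.

```python
# Illustrative sketch only: expresses commodity-support savings as a simple
# proportional function of an assumed additionality level. The proportional
# form is an assumption for demonstration, not USDA's methodology.

def savings_at_additionality(full_additionality_savings_musd, additionality):
    """Return estimated support-program savings (millions of dollars) if only
    the given fraction (0.0-1.0) of guaranteed exports is truly additional."""
    return full_additionality_savings_musd * additionality

# USDA's spring 1991 higher estimate, which assumed 100-percent additionality.
FULL_ADDITIONALITY_SAVINGS = 755  # $ millions

for level in (1.00, 0.77, 0.50, 0.25):
    savings = savings_at_additionality(FULL_ADDITIONALITY_SAVINGS, level)
    print(f"additionality {level:.0%}: estimated savings of ${savings:,.0f} million")
```

Under this proportional assumption, for example, estimated savings at the 77-percent bound discussed above would be roughly $581 million rather than $755 million.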
USDA provided more than $5 billion in export credit guarantees to the FSU and its successor states in 1991-92. It did so when its own assessments indicated that these were high-risk countries from a creditworthiness perspective. According to documents of USDA’s Reconciliation Committee, which makes recommendations concerning whether to provide credit guarantees to specific countries and, if so, in what amounts, the committee saw a need to balance debt-servicing considerations against the need to maintain and expand overseas markets. On two occasions when the committee was unable to reach a consensus, the Chairman made recommendations that he believed balanced financial risk and market development considerations. Since the 1990 Farm Bill does not specify criteria to be used in assessing debt-servicing ability, USDA has considerable discretion and, thus, can provide large amounts of credit guarantees to high-risk countries, increasing the risk of defaults on GSM-102 loans.

Between November 1992 and the end of September 1993, Russia defaulted on more than $1.1 billion in GSM-102 loans made to the FSU and Russia. Under a September 30, 1993, agreement, the United States agreed to reschedule about $1.1 billion in GSM-102 debt, provided that Russia repaid $444 million of arrears by the end of 1993. Russia did repay the arrears on schedule. However, in January 1994, Russia again fell into default on GSM-102 loans. Between February and early June 1994, the United States agreed to reschedule approximately $882 million in additional payments due to CCC and U.S. banks under the GSM-102 program. Following the June 1994 rescheduling, approximately $2.9 billion in GSM-102 debt was still owed by Russia on credit-guaranteed loans.

According to USDA estimates, export credit guarantees provided to the FSU and its successor states resulted in higher commodity prices and, in turn, lower costs for U.S. commodity support programs. Proponents of the credit guarantees assert that the reduced program costs help offset the risk of default on guaranteed debt. However, the estimated savings in commodity support costs depended importantly on an assumption that alternative markets would not be generally available if the commodities were not exported to the FSU. We disagree with analyses that assume 100-percent additionality, and we believe that any estimated savings in commodity support programs should consider a range of additionality levels.

During the latter part of the 1980s, a serious debt situation arose in the Soviet Union. As the situation evolved, western commercial lenders scaled back and then virtually halted lending to the Soviet Union. Western governments provided loans and credit guarantees to help fill the gap. By late 1991, the Soviet Union’s debt problem had reached crisis proportions. At the same time, the country was in the final stages of political disintegration. The debt crisis was temporarily eased in November 1991, when official western creditors agreed to a 1-year deferral of principal payments on pre-1991 debt. Eight of the Soviet republics agreed to joint and several liability for the outstanding debt of the Soviet Union and to carry out economic reforms recommended by the IMF. Following the dissolution of the Soviet Union in December 1991, the former republics sought membership in the IMF and the World Bank. The Group of Seven (G-7) nations concluded that the international financial institutions could be used to promote economic reform in the FSU and to coordinate western financial assistance. They encouraged the new states to undertake substantial economic reforms designed to stabilize their economies. Doing so could lead to substantial new financial assistance from abroad and help the new states to improve their creditworthiness. Specifically, the G-7 nations said they would support a $24-billion financial assistance package for Russia, contingent on Russian progress in stabilizing and reforming its economy. However, Russia did not stabilize its economy, and the FSU debt arrears situation worsened. In April 1993, Russia’s official creditors (i.e., creditor country governments) found it necessary to reschedule a significant amount of Russian and FSU debts due in 1993. At the same time, the G-7 promised a new package of economic support for Russia. However, debt relief and other financial assistance have remained largely contingent on economic stabilization and reform. During the first half of 1994, Russia’s official creditors rescheduled additional FSU debt and agreed to meet later in the year to consider a longer term and more comprehensive rescheduling.

For decades the Soviet Union was a conservative user of western credits and was regarded by western government and commercial lenders as an excellent credit risk, given its huge gold reserves and other exportable raw materials.
During the first 9 years of the 1980s, the Soviets usually ran hard currency current account surpluses. In the mid- and late 1980s, political detente and increasingly lax fiscal policies of the Soviet government led to a rapid increase in commercial lending to the Soviet Union, according to the World Bank. It estimates that the Soviet Union’s gross hard currency debt more than doubled, from $38.3 billion at the end of 1987 to $81.5 billion at mid-1993 (see table 3.1). In 1989, the Soviet Union experienced a negative hard currency trade balance of $2.4 billion due to surging imports. The imbalance was financed by hard currency borrowing. During 1989 and 1990, a growing debt burden, debt-servicing problems, and increasing world concern about a collapsing Soviet economy and political disintegration began to affect the country’s access to the commercial financial market. According to the World Bank, the surge of imports caused a severe liquidity crisis, leading to a buildup of arrears to western banks and suppliers. The liquidity crunch was exacerbated, in part, because the government had extended authority to Soviet enterprises to negotiate overseas business. According to a World Bank analysis, the Soviet Union was $4.5 billion in arrears at the end of 1990 (see table 3.1). In an earlier analysis, the bank estimated that about 10 percent of year-end 1990 arrears was guaranteed debt to official creditors, 30 percent unguaranteed debt to commercial banks, and 60 percent debt to others (mainly suppliers).

As arrears accumulated, commercial banks reduced and then stopped new lending. The Soviet Union was faced with large net repayment obligations, which it financed to a great extent out of its deposits in western banks. As a result, these liquid reserves fell sharply, from $14.6 billion in December 1989, to $8.6 billion in December 1990, and to $6.4 billion by March 1991. At the end of 1991, estimated reserves were only about $5.1 billion, which represented less than 2.5 months’ coverage of import costs. According to one source, a 3- to 6-month coverage is generally considered adequate.

Gold reserves, the other major component of the FSU’s international reserves, were apparently either already at low levels or drawn down to very low levels. Thus, they were not available for financing the liquidity problem. According to 1991 Central Intelligence Agency (CIA) estimates, Soviet gold reserves had ranged from 1,679 metric tons in 1980 to 2,105 metric tons in 1990, with a peak level of 2,366 metric tons in 1985. However, during the latter part of 1991 an economic adviser to the Soviet President asserted that the Soviet government had been selling off large amounts of gold reserves for several years. He added that only 240 tons of reserves were left (valued at less than $3 billion). In May 1992, the U.S. Minister Counsellor for Economic Affairs at the U.S. embassy in Moscow told us that Russia had only 220 metric tons of gold. He said these amounts were minimal reserves. In early February 1993, a former Soviet prime minister said that the Soviet Union had squandered its gold reserves before President Gorbachev took over in 1985 but had managed to keep the matter secret until 1991.

Due to its problems with debt servicing, by 1990 the Soviet Union had become a high-risk country for lenders. The worsening political, economic, and liquidity conditions virtually halted the flow of commercial financing.
This halt in commercial financing became an impetus for western governments to undertake considerable official financing. Consequently, the structure of Soviet debt was substantially altered, as evidenced by changes in the source and maturity of the Soviet debt. Whereas commercial banks and other private creditors had accounted for 78 percent ($44.1 billion) of the Soviet Union’s convertible currency debt at the end of 1989, they held only 41 percent ($25 billion) at the end of 1991. Conversely, Soviet official bilateral debt increased from $12.4 billion, or 22 percent of convertible currency debt, to $36.5 billion, or 59 percent, during the same period. A substantial change in the maturity of FSU debt accompanied the change in the sources of FSU debt. As commercial lenders increased their efforts to collect on their short-term loans, western governments extended medium- and long-term credit or credit guarantees. Whereas in 1988 about 27 percent of the debt was short term, at mid-1992 short-term debt was estimated at only 17 percent.

As table 3.2 shows, in mid-1991 Germany was by far the largest creditor of the FSU, accounting for more than two-fifths of FSU external debt. However, much of the debt to Germany (about 40 to 43 percent at the end of March 1991) was owed to the former German Democratic Republic (East Germany). The United States was not among the six biggest creditors. Table 3.3 provides information on both loans and grant assistance pledged to the FSU by the G-7 nations and the European Community during 1990 through 1992. It shows that Germany pledged the largest amounts of both loans and grants during this period—$54 billion out of $81 billion. The United States was second, with combined pledges totaling about $9.2 billion.

The year 1991 marked a turning point for Soviet debt as well as for Soviet territorial, economic, and political integrity. The hard currency situation worsened significantly as a result of capital flight and declining exports, particularly oil. According to a World Bank report, capital flight for 1991 was estimated at about $15 billion. That figure represents a staggering 88 percent of Soviet contractual debt service for the year and 61 percent of estimated merchandise exports. According to the Wharton Econometrics Forecasting Associates (WEFA) Group, in 1991 Soviet oil exports fell by about 50 percent from the previous year’s level; iron ore exports, by 64 percent; steel mill products, by 70 percent; timber, by 50 percent; and diesel fuel, by about 25 percent. Declining exports adversely affected hard currency earnings. Whereas 1990 hard currency earnings from merchandise exports were $36 billion, according to WEFA, in 1991 the earnings equaled only about $24.8 billion. According to WEFA’s figures, most of the decrease can be accounted for by reduced fuel exports. In terms of dollar earnings, the value of Soviet fuel exports to nonsocialist countries fell from an estimated $21.8 billion in 1990 to $9.2 billion in 1991. Soviet merchandise imports also declined markedly—from $39.5 billion in 1990 to $25.4 billion in 1991—due to the lack of hard currency and the unwillingness of foreign commercial banks to grant short-term credits.

Following the failed coup in August 1991, Soviet economic deterioration and political disintegration accelerated. A crisis point was reached in autumn, when the Soviet Union found itself unable to repay all of its debts and to secure new credits badly needed to purchase food imports.
Owing to concern about whether the Soviet Union would continue to exist, officials of the G-7 nations indicated that additional western loans would not be forthcoming unless the various Soviet republics pledged to honor the Soviet Union’s debt, according to various Soviet media reports. On November 21, 1991, six Soviet republics signed an agreement with the G-7 nations affirming joint and several liability for the outstanding debt of the Soviet Union, based on an October 28, 1991, memorandum of understanding. Subsequently, two other republics also signed the document. (See table 3.4.) The eight also agreed to carry out economic reforms recommended by the IMF, including reducing fiscal deficits, public expenditures, and monetary growth and liberalizing prices and the foreign exchange rate.

The hard currency crisis was temporarily alleviated on November 21, 1991. On that day, representatives of the G-7 nations reached agreement with the Soviet government and governments of 8 of the 12 republics on a financial package designed to ease Soviet liquidity problems. According to the U.S. Department of the Treasury, the package included the deferral of about $3.6 billion in principal payments on medium- and long-term debt contracted before January 1, 1991, and falling due to official creditors in G-7 nations before the end of 1992; the maintenance of short-term credit lines by G-7 nations’ export credit agencies; and the possible emergency financing of up to $1 billion in the form of a loan secured by gold.

In December 1991, the Soviet Union was formally dissolved. During that month, eight republics of the Soviet Union reached preliminary agreement among themselves on how to share the external debt and assets of the FSU. (See table 3.4.) The signatory republics agreed to accept responsibility for a portion of the Soviet Union’s overall foreign debt and to set up an interrepublic debt management committee to oversee the handling of the Soviet Union’s debts and assets. The committee calculated preliminary debt shares for the 15 republics on the basis of each republic’s economic stature within the union. Table 3.4 shows which republics signed the agreement and the debt shares of the republics. Also during December 1991, the Soviet Union suspended all principal payments to commercial creditor banks for loans made before January 1, 1991. An agreement between the commercial bankers and officials of a majority of the successor states stipulated that talks on the issue resume in March 1992.

On January 4, 1992, 17 principal creditor nations, including the G-7 countries, met in Paris with VEB, the designated debt manager for the 8 former republics that signed the October 28, 1991, memorandum of understanding previously discussed. The creditors agreed to defer principal payments on medium- and long-term official debts contracted before January 1, 1991, and falling due from December 5, 1991, to the end of 1992. They said the deferral would continue beyond March 31, 1992, provided that satisfactory progress was made by the debtor countries in mobilizing foreign exchange and adopting comprehensive macroeconomic and structural adjustment programs, in full consultation with the IMF.

Shortly before and following the dissolution of the Soviet Union, Ukraine, Russia, and other republics requested membership in the IMF and the World Bank. Membership in the international financial institutions could lead to substantial new financial assistance from abroad and help the new states to improve their creditworthiness.
The United States and other members of the G-7 concluded that the international financial institutions could be used to promote economic reform in the FSU and to coordinate western financial assistance. They encouraged the new states to develop economic adjustment programs that could be supported by the institutions. On March 31, 1992, the IMF endorsed a draft economic reform program that had been prepared by the Russian government. A suitable reform program was a condition for extension of the moratorium on debt service payments for the FSU. Among the program’s main points were that the Russian government would (1) reduce its budget deficit to 1 percent of gross national product (GNP) by the end of 1992 through cuts in military spending and subsidies to enterprises; (2) tighten central bank monetary and credit policies; (3) adopt new taxes, including restoring a 28-percent value-added tax; and (4) target social subsidies more precisely, so aid could be provided to the most needy and the unemployed. Russian officials claimed the program would slow the rate of inflation to 1 to 3 percent by the end of 1992. They also said the reform program would have to be cut back if Russia did not receive substantial foreign financial aid, including debt relief, a stabilization fund for the ruble, and balance-of-payments support.

On April 1, 1992, the United States and other G-7 nations announced support for a $24-billion international aid program for Russia. The package was to include rescheduling of $2.5 billion of official debts of the Russian government. It also was to include about $11 billion in bilateral aid (export credits and humanitarian and other foreign aid) from the G-7 nations; $4.5 billion in loans from the IMF, World Bank, and the European Bank for Reconstruction and Development; and a $6-billion fund to stabilize the ruble for Russia and other former Soviet republics that continued to use the ruble. The stabilization fund was to be funded by IMF member loans to the IMF. However, implementation of much of the aid package was seen as contingent on Russian progress in stabilizing and reforming its economy, including IMF approval of a standby program.

In mid-April 1992, the IMF Managing Director reported that engineering a transformation from command to market economies in the former Soviet republics would require billions of dollars in outside aid from the IMF, the World Bank, the governments of industrial nations, and private investors over the next 4 years. After taking into account the expected level of exports, the obligation to service the debt, the need to replenish international reserves, and the allowance for a stabilization fund for the ruble (about $6 billion), the financing requirement for 1992 in Russia alone, the Director said, could be $20 billion to $25 billion. IMF projections for the other republics, he said, indicated an external financing requirement of about $20 billion in 1992. He estimated that the IMF could provide $25 billion to $30 billion over the next 4 to 5 years. Other sources indicated the World Bank might provide as much as $12 billion to $15 billion for development projects over the same period. The IMF Director noted that if the transformation is to succeed, private capital will eventually have to play the leading role.

On April 27, 1992, the IMF and the World Bank approved membership for Russia and most other republics. Russia became a member on June 1, 1992.
However, most aid was held up pending further agreement between Russia and the IMF concerning the nature of the Russian reform program and the terms for an IMF standby loan to Russia. Concerns were raised about whether Russia remained committed to and would implement an adequate reform program. In fact, in the spring of 1992, the Russian reform program was relaxed in response to attacks by domestic critics, particularly in the Russian parliament. For example, the government gave miners large pay increases; promised new bank loans to help state enterprises teetering on the edge of bankruptcy; and agreed to delay until June 1992 an 80-percent increase in oil, gas, and electrical prices scheduled for April. With final agreement on a reform package still not achieved between Russia and the IMF by June 1992, the IMF proposed a compromise approach: it would advance Russia $1 billion of a planned $4-billion standby loan before agreement was reached on the reform program. In July, the G-7 nations expressed support for the proposal. On July 5, the IMF agreed to release $1 billion to Russia; however, the $1 billion was to be retained in Russia’s international reserves.

In spite of the 1991 deferrals, the successor states have not kept current on servicing the FSU debt. As table 3.1 shows, during 1992 arrears on the Soviet debt more than doubled (from $4.8 billion at the end of 1991 to $11.8 billion at the end of 1992). Capital flight continued to contribute significantly to the FSU’s liquidity problems. The World Bank estimated capital flight during the first 8 months of 1992 at $5 billion to $8 billion ($7.5 billion to $12 billion on an annualized basis).

In June 1992, Russian officials requested that the G-7 agree to a 5-year moratorium on repayments of principal and interest on all FSU debt. In July, G-7 leaders discussed a 10-year restructuring of both principal and interest payments, with a 3- to 5-year grace period. At the July 1992 G-7 meetings, the leaders expressed support for the Russian President’s proposal to defer Russia’s share of the FSU debt. However, they also made it known that the deferral issue had to be addressed by the Paris Club. Following the G-7 meeting, the Russian President indicated that his country would also consider proposals for swapping debt relief for Russian land, buildings, raw materials, and oil and gas exploration rights. Meanwhile, the December 1991 agreement that had deferred principal payments on commercial FSU debt through March 1992 was reextended in June and again in September 1992. Negotiations with Russia on debt restructuring got underway with the Paris Club in late 1992.

According to the World Bank, as a result of capital flight, lower gold sales, and already depleted reserves, FSU external debt servicing reached only $1.3 billion during the first 3 quarters of 1992—far short of scheduled obligations. Under the G-7 Debt Allocation Treaty of December 4, 1991, Soviet successor states were to transfer foreign exchange to VEB for FSU debt payments. However, in December 1992, the World Bank reported that Russia had been the only contributor since late 1991. One reason why other republics had not contributed was a lack of agreement on the disposition of the FSU assets. An official of the Kazakhstan bank responsible for guaranteeing foreign loans told us that none of the former republics, except Russia, were making payments on the FSU debt.
He said payments were not being made because (1) Russia had not divided up the FSU assets and (2) VEB had frozen the hard currency accounts of enterprises located in the other states. Ukrainian officials also told us that the assets had not been divided and hard currency accounts had been frozen. Ukrainian officials also said that the claim that the other states were not paying their debt shares was not necessarily true, because Russia might be using, or might already have used, hard currency from the other states’ frozen enterprise accounts to make payments on the debt. Ukrainian officials said that Ukraine (1) had accepted responsibility for 16.4 percent of the FSU’s debt, (2) was ready and wanted to begin paying off its debt, and (3) was willing to pay 20 percent of the total debt if states other than Russia could not pay their share. However, the officials indicated Ukraine was not willing to make payments on Ukraine’s share through VEB, because there was no assurance that the latter would use the monies to pay off Ukraine’s debt: VEB might instead use the funds to pay off Russia’s debt or the debt of some other republic. Consequently, the officials said, Ukraine wanted to deal directly with its creditors. Kazakh officials said Kazakhstan had tried to arrange to pay debt owed to Germany’s Deutsche Bank directly to the bank rather than through VEB Moscow, but Deutsche Bank had not agreed to such an arrangement.

Uzbek officials said the debt share apportioned to their country was not fair. They said Uzbekistan did not even know how much of the FSU’s debt had been expended on Uzbekistan and that Uzbekistan’s appropriate share of the old debt was still under discussion. They also said the Uzbekistan government was prepared to pay its share of debt once it was provided accurate data on how Uzbekistan’s share was calculated.

Russian officials denied that the enterprise accounts of other former republics were frozen. Rather, they said, all of the funds in the enterprise accounts (estimated at $10 billion for all republics, including Russia) had been spent by the Soviet government before the dissolution of the Soviet Union. The money was used to pay foreign debts and to purchase grain and food imports. A VEB official said that although all of the states considered themselves responsible for the FSU’s hard currency external debt, only Russia had accepted responsibility for the FSU’s hard currency internal debt. He estimated the latter at approximately $11 billion to $12 billion and said that Russia’s debt claims on the various other states would be far greater than the other states’ asset claims on VEB.

In June 1992, the Russian government launched negotiations with most of the other states aimed at assuming responsibility for their external debts if, in turn, the states agreed to forgo claims on the external assets of the FSU. As long as the issue is not fully resolved, the credit standing of all the republics could be adversely affected, we believe, owing to the previous agreement on joint and several responsibility. For example, according to the news organization Itar-Tass, on November 2, 1992, the Russian Deputy Prime Minister said that the Paris Club had indicated that the former Soviet republics would need to settle their debts fully with each other before the debt of the FSU could be rescheduled. Toward the end of November 1992, a tentative agreement was reached between Ukraine and Russia, giving Russia the sole right to negotiate with Ukraine’s western creditors.
In return, Moscow promised to negotiate a pact with Ukraine sharing remaining assets and liabilities. Each country reserved the right to renounce the agreement if either failed to agree to a bilateral pact by the end of 1992. However, an agreement was not reached by the end of 1992. According to a State Department official, the Paris Club creditors were not willing to reschedule FSU debt unless satisfactory arrangements were reached between Russia and Ukraine concerning responsibility for the debt.

During the early part of 1993, Russia and Ukraine made some progress toward reaching an agreement. On the basis of this progress, the Paris Club creditors felt sufficiently comfortable to consider Russia as primarily responsible for FSU debt, subject to conclusion of an agreement between Russia and Ukraine. The Paris Club decision means that the creditors will pursue Russia first in their efforts to secure payment of FSU debt. As of December 1993, Russia and Ukraine had still not finalized an agreement on the handling of FSU assets and debts. According to the World Bank, by the end of 1993, nine republics had signed agreements with Russia to exchange FSU assets for debt, and Ukraine and Georgia were negotiating with Russia. Russia had also offered to sign agreements with the Baltic countries. However, according to the bank, the Baltic countries had taken the position that they were not the legal successors of the FSU and therefore could not take responsibility for servicing and paying off its debt.

In early 1993, western governments became increasingly concerned about a deteriorating political situation in Russia and the possibility that Russia’s commitment to democracy and economic reform might be reversed (see ch. 4). As a result, in April the G-7 nations agreed on a $28.4-billion package for providing economic support to Russia. In addition, as part of the effort to assist Russia, the United States and other western creditor governments agreed to reschedule some $15 billion in Russian and FSU debt.

On April 15, 1993, representatives of the G-7 nations and the European Community announced agreement on a new package for providing financial assistance to Russia. A considerable portion of the G-7’s April 1992 aid package had never been forthcoming, in part because Russia had failed to stabilize its economy and reach agreement with the IMF on a standby agreement. As table 3.5 shows, the 1993 package included renewed commitments of support from 1992, totaling $7 billion, and it included $21.4 billion in new commitments for 1993. As was the case with the financial assistance offered by the G-7 nations in 1992, there was no assurance that Russia would receive all of the aid. While some of the assistance was expected to start flowing quickly, fulfillment of the package remained contingent on Russian progress in stabilizing its monetary situation and continuing the process of structural economic reform. In addition, much of the assistance depended on the cooperation of multilateral institutions, including the IMF, the World Bank, and the European Bank for Reconstruction and Development. Russian cooperation was also needed. During early 1993, the G-7 encouraged the multilateral institutions to ease up on their normal standards for conditionality and to provide financial assistance earlier as a way of encouraging later reform.
On April 23, 1993, the IMF approved creation of a new loan facility for this purpose—the Systemic Transformation Facility (STF) that was included in the April 15 proposal of the G-7 and the European Community. The program is designed to provide several billion dollars in low-interest loans to Russia, and possibly other former socialist countries as well, under less stringent financial conditions than is typical for IMF loans. For example, countries are not required to have a standby loan in place to receive STF loans. The loans would be approved in two segments, with the first half disbursed immediately. Although a standby loan program is not required, a commitment to achieving macroeconomic stabilization is still important for receiving STF loans. STF is a temporary facility that expires at the end of 1994; however, withdrawals can be completed as late as the end of 1995. In early June 1993, the IMF held up approval of an STF loan that it was considering for Russia, in spite of pressure from the United States and others. However, on June 30, 1993, it approved a first drawdown of $1.5 billion on a $3-billion loan. In return, Russia committed itself to reducing its budget deficit to 5 percent of gross domestic product (GDP) and its monthly inflation rate to a low single-digit level.

In July 1993, the G-7 established a $3-billion privatization and restructuring program for Russia that was expected to distribute funds over an 18-month period. It was to be made up of $500 million in bilateral grants to be used largely for technical assistance to newly privatized companies; $1 billion in bilateral export credits and $1 billion in World Bank and European Bank for Reconstruction and Development loans to be used by Russian companies to import western goods; and $500 million in World Bank loans to be used by local Russian governments to help them make up for health, education, and other services previously supplied to employees by state-owned companies. In late September 1993, President Clinton signed a foreign aid bill that authorized $2.5 billion of assistance for Russia.

On April 2, 1993, representatives of the United States and 18 other western creditor governments reached a political agreement with Russia to recommend rescheduling $15 billion in Russian and FSU debt. The agreement concerned all arrears (at the end of 1992) on medium- and long-term official and officially guaranteed debt incurred before January 1, 1991, and maturities on that debt falling due in 1993. These amounts were to be rescheduled over 10 years, with a 5-year grace period. Other obligations were also rescheduled, including those related to medium- and long-term obligations incurred during 1991, some short-term debt obligations, and some moratorium interest falling due during 1993. These latter obligations were rescheduled over 7 years, with a 2-year grace period. Arrears not covered by the rescheduling were to be fully paid by June 30, 1993, and Russia was required to stay current on all other scheduled payments. Under the agreement, interest would continue to accrue on deferred or rescheduled debt and would have to be repaid as it came due. However, 60 percent of the interest due in 1993 was rescheduled. Governments of the creditor countries were to work out the details in bilateral agreements with Russia. Russia committed itself to seek comparable terms from other external official creditors, banks, and suppliers.
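Terms like these translate into a concrete payment profile once a repayment convention is fixed. The sketch below assumes equal annual principal installments after the grace period, with the grace years counted within the overall span; the actual agreements' installment frequency, interest treatment, and tranche amounts are not specified in this report, so the $100 million principal figure and the schedule convention are hypothetical.

```python
# Hypothetical sketch of a rescheduled-debt principal schedule. Assumptions
# (not taken from the agreements themselves): equal annual installments,
# grace years counted within the overall span, and interest ignored.

def principal_schedule(principal_musd, grace_years, total_years, start_year):
    """Return {year: principal due (millions of dollars)} for a rescheduling
    with no principal payments during the grace period, then equal annual
    installments through the end of the span."""
    repayment_years = total_years - grace_years
    installment = principal_musd / repayment_years
    return {
        start_year + i: (0.0 if i < grace_years else installment)
        for i in range(total_years)
    }

# Example: a hypothetical $100 million tranche rescheduled in 1993 on the
# "7 years, with a 2-year grace period" terms described above.
for year, due in principal_schedule(100.0, 2, 7, 1993).items():
    print(f"{year}: ${due:,.1f} million principal due")
```

Under these assumptions, nothing would fall due in 1993 or 1994, and $20 million would fall due in each year from 1995 through 1999.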
In effect, the April agreement was a practical recognition by official western creditors that Russia could not service most of its debt in 1993. By rescheduling overdue debt and debt likely to fall into arrears in 1993, the April agreement would enable Russia to apply for new loans from western governments and other creditors. According to a Treasury Department official, under the April agreement Russia also agreed to accept responsibility for repaying all of the official FSU debt. According to CCC, the April agreement was concluded outside of the Paris Club. Ordinarily the Paris Club does not reschedule government-to-government debt unless an IMF economic reform program is in place. Nonetheless, the agreement required that the Russian government adopt and implement an ambitious and comprehensive macroeconomic and structural adjustment program. The Russian delegation stressed the strong determination of its government to reduce Russia’s economic, monetary, and financial imbalances and to conclude an IMF upper credit tranche arrangement approved by the IMF Executive Board. Signatory creditor governments could declare the agreement null and void if Russia had not concluded an upper tranche arrangement by October 1, 1993. Russia did not conclude such an agreement by that time. However, the creditor nations waived their right to terminate the agreement. According to a State Department official, the creditors considered the IMF arrangement a significant issue, but they felt it was more important to normalize relations on the debt issue.

As discussed earlier, in December 1991 the former Soviet Union suspended all principal payments to commercial creditor banks on loans made before January 1, 1991. In January 1992, commercial creditors, negotiating through a bank advisory committee chaired by Deutsche Bank, granted a 3-month rollover of debt payments. It was extended for each consecutive quarter through the end of 1993. All agreements deferred payment on current principal due during the individual deferment periods. Interest was mostly unpaid, however, and as of June 30, 1993, cumulative interest arrears on commercial bank debt were $2.4 billion, excluding late interest charges.

On July 30, 1993, Russia signed an agreement in principle with the commercial banks. According to Chemical Bank, the debt at that time included $24 billion in principal and $4.5 billion in interest arrears. Russia announced that it would make a $500 million partial payment on its interest arrears by the end of 1993, and the parties agreed to seek to restructure the overall debt in early 1994. According to PlanEcon, Russia’s total debt at the end of 1993 was about $87 billion, an increase of nearly $9 billion during 1993.

At the time of the April 2, 1993, debt rescheduling agreement between Russia and its official creditors, the latter agreed to meet again with Russia in 1994 to discuss further debt relief. According to the World Bank, the creditors’ willingness to do so depended on Russia’s having an IMF upper credit tranche arrangement in place and arranging debt relief on other obligations due in 1993 (mainly commercial bank credit). However, according to USDA, on January 20, 1994, Russia’s major official creditors agreed, in response to a Russian request, to extend the terms of the April 2, 1993, agreement through April 30, 1994. They did so even though neither of the two above conditions was in place.
In March 1994, a State Department official told us that Russia and the IMF had begun serious talks on additional debt relief arrangements. The official indicated that debt relief was no longer being made contingent on Russia’s having an upper credit tranche arrangement or concluding a debt rescheduling agreement with commercial creditors. In March 1994, the IMF Director indicated a standby agreement would not be possible until the second half of 1994, and he said that such an agreement would depend on Russia’s planned budget for 1995 and implementation of its STF program as intended. In addition, he said that the IMF must have a clear idea of how the process of disinflation was developing in 1994. As of March 1994, the IMF still had not approved the second half of the $3-billion STF loan to Russia. According to the WEFA Group, the IMF was unhappy with Russia’s lack of progress in stabilizing its economy. Subsequently, on April 20, 1994, the IMF announced approval of the second drawdown, equivalent to about $1.5 billion. The IMF said it was approving the loan to support Russia’s 1994 economic reform and stabilization program. The agreement was reached only after direct negotiations between the IMF Managing Director and Russia’s Prime Minister. According to the IMF Managing Director, the loan will provide foreign exchange and be part of the general financing. He noted that Russia has a lot of debt payments to make to the international community.

According to the IMF, the Russian program’s main objectives are to further reduce the rate of inflation through tighter fiscal and monetary policies and to consolidate and strengthen structural reforms and the transition to a market economy. The IMF said the monthly rate of inflation is projected to decline to 7 percent by the end of 1994, and the federal budget deficit is expected to represent about 7 percent of GDP during the year. The IMF warned that success of the Russian program hinges critically on the strict implementation of the government’s fiscal plan. The IMF noted that various sectors would probably be reluctant to accept a reduced level of budgetary support. In addition, the IMF reported that Russia would clearly need a further comprehensive debt relief package to normalize relations with external creditors. The IMF said that external financing would also be needed by Russia and other FSU countries to help them consolidate large budget deficits in a noninflationary manner and to finance social safety nets. According to the IMF, official and private external financing would be available only in the context of strong and sustained stabilization and a reform program. However, as of November 1994, Russia had not concluded a standby loan agreement with the IMF.

As table 3.6 shows, the IMF, the World Bank, and the European Bank for Reconstruction and Development delivered only $3 billion of $19 billion in aid that was announced for Russia during 1992 and 1993. Total official aid delivered from all sources was only $23 billion, or about 58 percent of the $40 billion announced. According to the IMF, much of the $17 billion that was promised but not delivered was withheld because of Russia’s failure to implement appropriate macroeconomic stabilization policies. Table 3.6 also shows that export credits accounted for most of the official assistance that was delivered in 1992-93. Some observers have been highly critical of the counting of export credits as financial assistance.
For example, according to Jeffrey Sachs, a former financial adviser to the Russian government, most of the credits were short-term trade credits that had to be repaid in 1 to 3 years. Thus, the credits became government debt that rather quickly added to the government’s debt-financing problems.

On June 4, 1994, Russia’s official creditors agreed to reschedule about $7 billion in FSU debt payments due in 1994, including debt contracted before 1991 and during 1991. According to a Treasury Department official, the $7-billion figure includes payments that had been deferred under the January through April extension previously discussed. The rescheduling included some short-term debt and previously rescheduled interest. The agreement provided for a grace period of 2 to 3 years, with payback periods ranging between 5 and 13 years. Russia and its official creditors also agreed to meet later in 1994 to discuss a longer term and more comprehensive rescheduling.

Regarding commercial debt rescheduling, on April 1, 1994, an official of Chemical Bank advised us that talks were being held between the Russian government and the bank advisory committee. According to the official, Russia had not paid any of the promised $500 million in interest during the last quarter of 1993 and had paid no interest during 1994. In October 1994, the Russian government issued a statement saying it was prepared to assume legal responsibility for the former Soviet Union’s commercial debts. Also, in early October 1994 there were press reports that Russia had reached agreement with its foreign bank creditors on a framework for a long-term rescheduling of the commercial debt. However, a representative of Chemical Bank advised us that the terms of an agreement had still not been defined (e.g., grace period, number of years over which principal would be repaid, contractual interest rate). He said it was possible that an agreement could provide for a grace period of up to 5 years and for repayment of rescheduled debt over 10 to 15 years. He said commercial creditors hoped that the Russian government would make some kind of cash payment on the debt. There were, he said, hopes that an agreement would be reached in the near term, but an agreement was far from being completed.

Between 1989 and 1991, the FSU experienced increasing debt problems. The situation reached crisis proportions in late 1991. Russia and many successor states eventually reached agreements whereby Russia would accept responsibility for external FSU debt in return for the other states not making claims on the FSU’s external assets. Russia has agreed with its official creditors to accept responsibility for the FSU debts. However, as of early October 1994, Russia had still not concluded an agreement with its foreign bank creditors on a framework for rescheduling and resuming payments on the FSU commercial debts. Since late 1991, the United States and other official creditor nations have provided considerable debt relief to the FSU and its successor states. The United States, other official creditor nations, and the IMF have also provided important financial assistance to Russia. However, much of the promised financial assistance has not been forthcoming because of insufficient progress by Russia in stabilizing and restructuring its economy. Although official creditor nations have provided considerable debt relief, additional debt relief is needed.

During the past few years, the FSU and its successor states have experienced historic economic and political change.
The process is not yet complete. The Soviet empire is gone, replaced by 15 successor states, and the central role of the Communist Party has been abolished. The region has begun to move away from the old command economy of the FSU toward market-like economies, and some progress has been made in establishing democratic institutions. However, progress varies widely across the successor states. The successor states’ economies are in serious decline, and further deterioration is projected for most of them. Political legitimacy is an issue in a number of the new states. During 1993, Russia itself experienced a constitutional crisis concerning the respective roles of the parliament and the presidency in directing the affairs of the country, the direction and pace of economic reform, and the question of whether its leaders represented the views of the electorate. Five of the former Soviet republics have experienced significant armed conflict within their borders. Whether efforts to create effective market-based economies and democratic polities will succeed is not clear. Also uncertain is whether the political boundaries that resulted from the breakup of the Soviet empire will survive. Such uncertainties can affect the willingness of westerners to invest in the new states. Without substantial foreign investment, the new states’ creditworthiness can be adversely affected.

It is 3 years since the Soviet Union disintegrated. Shortly before the breakup, the central administrative organs of the Communist Party were dissolved, its assets confiscated, and its archives seized. The party that dominated life in the Soviet Union for decades was banned or suspended in Russia and many other successor states. Also gone are the central governmental ministries and planning system in Moscow that played major roles in directing affairs across the various republics. Having discarded the Marxist-Leninist ideology, the successor states are trying to make a transition from command economies to free and open markets. In addition, many have made progress toward establishing democratic institutions. Nonetheless, former Communist elites continue to govern under the names of newly created parties in many of the new states, and they cling to power at regional and local levels as well. Although the old Communist Party was banned as a national organization after the 1991 coup attempt, several neocommunist parties have been formed in Russia since then. They have a strong national organization and, as discussed in the following section, experienced some success in December 1993 elections for a new parliament. Former Communists were recently returned to power in Lithuania.

All of the states are experiencing acute economic crises that stem from the general economic collapse that preceded the dissolution of the Soviet Union and that has been further exacerbated by the breakup of the empire. Common elements of the crisis have included very high levels of inflation, hard currency shortages, and failing public health systems. Many of the new states are politically unstable, not only as a result of the economic crisis but also, in many cases, because of a lack of political legitimacy. Several states have been adversely affected by intraregional and internal ethnic and civil conflicts that have turned violent, particularly in the Caucasus region (see app. I). The economies of the former Soviet republics are in disarray.
Economic deterioration was a major factor associated with the development of the debt crisis and the disintegration of the Soviet Union itself. According to WEFA estimates, Soviet GDP fell by 2 percent in 1990 and 16.9 percent in 1991. WEFA estimated that aggregate GDP for the former Soviet republics declined another 20 percent during 1992. It estimated the cumulative drop for 1990-92 at 34.9 percent. PlanEcon, a Washington, D.C., economic forecasting group specializing in East European countries, has estimated GNP losses for each of the former Soviet republics. According to its calculations, during 1989 through 1993, three of the republics/successor states sustained GNP declines ranging from about 12 to 26 percent (Belarus, Turkmenistan, and Uzbekistan); six states experienced declines of about 31 to 37 percent (Estonia, Kazakhstan, Kyrgyzstan, Moldova, Russia, and Ukraine); five states, declines of 44 to 57 percent (Armenia, Azerbaijan, Latvia, Lithuania, and Tajikistan); and one state, a decline of about 66 percent (Georgia). (See table 4.1.) The GNP estimates indicate that economic decline in most republics is already comparable to or greater than that experienced by the United States during the Great Depression.

According to a Congressional Research Service analysis, the breakdown of the economies of the former Soviet republics can be attributed to the legacy of the Stalinist economic planning system combined with incomplete economic reforms that were introduced during the Gorbachev era. Under the command economy, the state owned all the means of production and controlled production and investment decisions. The result was an inefficient system that produced, with only a few exceptions, poor quality goods and services. Gorbachev’s reforms included laws to decentralize economic decisionmaking but did not go far enough. The reforms reduced the discipline of the state-run economy but left intact most of its fundamental elements—price controls, nonconvertibility of the ruble, public ownership, and the government monopoly over most of the means of production. The economic breakdown was manifested in monetary imbalances that led to high inflation and a shortage of goods as the former Soviet government ran up large budget deficits. The central government financed these deficits primarily by printing money, thereby generating inflation as increasing amounts of rubles chased decreasing amounts of goods. In addition, the distribution system collapsed. Direct relationships between suppliers and manufacturers and between manufacturers and distributors that were to substitute for the centrally controlled system did not fully develop.

In February 1994, PlanEcon forecast another 2 years of economic decline in Russia and 3 years of additional decline in Belarus and Ukraine. It concluded that prospects for economic recovery in Armenia, Azerbaijan, Georgia, Moldova, and Tajikistan would remain bleak until their various political, interethnic, and territorial conflicts are resolved (see app. I). It forecast another 2 years of economic decline or stagnation in Kyrgyzstan, Turkmenistan, and Uzbekistan. It also forecast another year of economic decline in Kazakhstan. There were four bright spots in PlanEcon’s forecast. Although forecasting another year of decline in Kazakhstan, PlanEcon said it anticipated a strong recovery in 1995 and 1996.
The recovery would be led, directly and indirectly, by several large projects to develop Kazakhstan’s natural resource wealth that involve a commitment of substantial resources by a significant number of major multinational corporations. However, PlanEcon said its forecast would be endangered if developments materialize that discourage the continued participation of western firms in the joint ventures. Without the imminent takeoff of these joint venture projects, Kazakhstan’s recovery would be more closely tied to and possibly even lag behind Russia’s recovery.

The other bright spots were the Baltic states. PlanEcon forecast recovery would get underway in these states in 1994 and 1995—provided that macroeconomic stabilization policies were continually pursued and that industrial restructuring got fully underway. According to PlanEcon, the Baltic states have made the most progress in transition toward market economies of all the former republics. PlanEcon said that tough monetary and fiscal measures had paid off—inflation had been sharply reduced, and all three states boasted strong new currencies. According to PlanEcon, (1) all three states closed 1993 with current account surpluses; (2) their trade balances were positive (except for Estonia); and (3) sizable aid and loan transfers, combined with surpluses in services, have made the near-term external payments picture quite solid. With generally stable monetary policies in place, PlanEcon said, they have been able to increase the level of confidence in their currencies, raise their foreign exchange reserves, and maintain the convertibility of their currencies for current account transactions. However, PlanEcon said, the Baltic states are highly dependent on foreign trade, including trade with the CIS. Consequently, prospects for recovery in foreign trade over the medium term, and thus their external payments environment, will depend on developments in the CIS and particularly Russia.

As discussed in chapter 3, during 1991 the IMF and the G-7 encouraged Russia to set certain economic goals. Of all the CIS republics, Russia had the most reform-minded leadership, and its reform program set the pace for the others, according to the CIA. It has passed many of the laws and regulations required to establish market institutions and provide the necessary guidelines for private business activity. It took the lead on price deregulation. In terms of foreign exchange, it set exchange rates at more realistic levels, reduced the number of goods requiring export quotas and licenses, and abolished import quotas. It also made some serious efforts to stabilize its economy by reducing its budget deficit, and it made substantial cuts in defense expenditures.

However, Russia still has far to go to create a market economy. Russian fiscal and monetary restraint has weakened considerably in the face of pressures from the old establishment for increased spending and easier credit. Elements of that establishment—industrial managers, farm bureaucrats, and local government officials—are resisting reforms that reduce their influence and diminish their financial support. In addition, Russia has also fallen far short of the goals that it outlined to the IMF in March and July 1992. As discussed in chapter 3, the March 1992 program called for Russia to reduce its budget deficit to 1 percent of GNP by the end of 1992 and indicated the rate of inflation would be slowed to 1 to 3 percent by the end of the year.
In July 1992, Russian officials indicated to the IMF that these goals were not attainable. At that time, they committed to reduce Russia’s budget deficit to below 10 percent of GDP in the second half of 1992 and to lower the monthly rate of inflation to 9 percent by December 1992. According to the IMF, Russia’s budget deficit in 1992 was nearly 19 percent of its GDP (see table 4.2). Regarding inflation, PlanEcon estimated that Russia’s average annual inflation in 1992 was 1,414 percent (see table 4.3) and that it averaged about 25 percent per month during the last quarter of 1992. According to the IMF, Russia reduced its budget deficit considerably in 1993; however, the deficit was still more than 9 percent of GDP for the year. Russia also reduced inflation significantly during 1993. Nonetheless, its average annual inflation for the year was 905 percent, and PlanEcon estimated that monthly inflation during the last quarter of 1993 averaged 16 percent. During the first half of 1994, Russia made unexpected progress in reducing inflation as the government maintained tighter fiscal and monetary discipline than had been expected. The average monthly inflation rate fell below 10 percent. PlanEcon estimated that the federal budget deficit during the first half of the year amounted to about 10.4 percent of GDP. However, during the summer, monetary policy was relaxed, raising concerns that inflation would significantly increase before the end of the year. In early October, the ruble began to depreciate significantly. A crisis erupted when the Russian ruble lost more than 25 percent against the dollar in 1 day, October 11—raising new concerns about the effectiveness of the government’s efforts to stabilize the economy. President Yeltsin fired the Finance Minister, sought the resignation of the head of the Russian Central Bank, and appointed a state commission to investigate the situation. The day following the collapse of the ruble, Russia’s Economic Minister was reported to have attributed the ruble collapse, in part, to the government’s easing of monetary and credit policy. Progress in stabilizing economies and implementing economic reform varies widely across the other successor states. As table 4.2 shows, 12 other states had budget deficits in 1993. For 10 of the 12 states, the deficits ranged from 6.1 percent to 52 percent of GDP. Regarding inflation, PlanEcon estimated that 12 other states had average annual inflation rates in 1993 ranging between 410 percent and 10,000 percent. Estonia and Latvia, the countries with the lowest inflation, had annual rates of 55 and 108 percent, respectively. (See table 4.3.) According to PlanEcon, little progress on economic reform has been made in many of the successor states. For example, Armenia, Azerbaijan, Georgia, Moldova, and Tajikistan have been sidetracked by war, civil conflicts, and/or trade embargoes. Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan continue to adhere to many features of the old centrally planned systems, including state orders, price controls, and trade regulation through permits and quotas. Belarus has failed to undertake any significant economic reform, partly because of a parliament dominated by old-style Communists. Kazakhstan hesitated in introducing market reforms during 1992 and 1993. Since then, it has begun to accelerate the transition, yet it is still struggling to set up the institutions needed to manage the economy of an independent state. 
Ukraine made some progress on economic stabilization and reform during late 1992 and early 1993, but meaningful reform was then largely abandoned in favor of a return to greater central control. According to PlanEcon, members of the old Communist Party continue to dominate that country. According to a May 1994 IMF assessment, most FSU successor states had not yet achieved a reasonable measure of macroeconomic stability. The exceptions cited were the Baltic states. High inflation and budget deficits, the IMF said, are contributing to economic uncertainty and inefficiency, the impoverishment of vulnerable groups of people, capital flight, and a protracted adjustment period. The high inflation and budget deficits are also discouraging needed foreign investment. According to the assessment, states that have pursued very expansionary monetary and fiscal policies (Belarus, Russia, and Ukraine are cited as examples) have not significantly mitigated the large declines in output associated with the transition to market economies. In those countries where macroeconomic stability has not been achieved, the IMF said, the first priority should be to eliminate the underlying sources of inflation—by sharply reducing budget deficits and reining in credit growth. Tax reform is required to enhance revenues and reduce distortions. Expenditure reform is required to reduce subsidies and to target social assistance more effectively. Eliminating excessive credit growth requires allowing financial markets rather than central banks to allocate credit at market-determined rates. The IMF found that most countries in transition have made substantial progress in structural reform. In particular, it said, prices are now largely market determined, and international trade has been liberalized in many countries. Privatization has also proceeded rapidly in some—but not all—countries. However, the IMF also found that the reform process has been significantly delayed in most FSU countries because the large declines in economic output, high inflation, and erosion of the financial position of vulnerable groups have resulted in severe economic and social strains. In some cases, the IMF said, the strains threaten to derail the reform process. As a result, it said, all countries in transition still face a daunting agenda of structural reform, which is crucial to the medium-term prospects for economic growth. A priority for the FSU countries (except the Baltics) is the elimination of the system of state orders, bilateral trading arrangements, barter agreements, and export controls and tariffs. These distortionary measures should be replaced, the IMF said, with more uniform tariff structures at low rates, a workable interstate payments system, and a trading system based on the most-favored-nation principle. According to the IMF, privatization and enterprise reform are central to the establishment of market economies but are proceeding more slowly in most FSU countries than had been anticipated. As a result, it said, the pace and scope of privatization need to be strengthened, particularly to include the large enterprises. Land reform, including liberalized real estate markets and the privatization of agricultural land, should be speeded up in most countries. In addition, there is an urgent need in most of the countries to strengthen the financial sector and to put in place a legal framework of property rights and effective bankruptcy procedures. 
According to the IMF, the decline in the output of the FSU and its successor states has put great strain on social, economic, and public institutions. It said the old, the unemployed, and the unskilled have been exposed to severe hardship as inflation has eroded the real value of pensions, unemployment benefits, and minimum wages. In general, the patchwork of enterprise-provided social services that prevailed under central planning has not been replaced by adequate alternatives, and the absence of a social safety net has deterred firms from shedding labor. The IMF said that there is an urgent need to maintain the purchasing power of many benefits in the face of inflation and to better target benefits by overhauling eligibility criteria and benefit structures, while keeping expenditures at levels consistent with sustainable budgetary positions. The FSU’s political situation is characterized by great uncertainty, with the economic depression that has swept across each state acting as a major destabilizing force. Between the latter part of 1992 and the end of 1993, Russia experienced a political and constitutional crisis that pitted the powers of the Russian presidency against the parliament. The conflict also included a struggle between those who supported the government’s western-oriented market reforms, democratization, and foreign policies and those who wanted to moderate or reverse one or more of these policies. Conflicts have arisen among the republics over the disposition of the FSU’s armed forces, nuclear weapons and other assets, and foreign debts. Finally, historic ethnic rivalries that were largely suppressed during decades of Soviet rule have broken out into the open. They have already led to serious armed conflicts in five of the former republics—Armenia, Azerbaijan, Georgia, Moldova, and Tajikistan. Each conflict has affected Russian minorities living there, and Russia has employed military forces in some of the conflicts to protect its minorities and other interests. Russia has also deployed troops within its own borders in an effort to separate ethnic combatants and prevent a further spread of violence. (See app. I.) Russia is the most important of the republics because it accounts for the large majority of the area, population, and resources of the FSU and because it has been in the forefront of CIS republics attempting to institute economic and political reforms. Russia includes several highly disparate regions, each of which has an economy larger than the economies of nearly all the other former Soviet republics. Russia’s population of more than 148 million includes over 150 ethnic groups. Several ethnically based groups have declared themselves sovereign entities and are practically self-governing. These include the Chechen and Tuva Republics and Tatarstan. One concern is whether some of the larger ethnically based groups will be content to remain a part of the Russian Federation or will prefer to seek full independence. Russia’s integrity and viability could be threatened if certain groups seek to leave the country. A related concern is that conflict between minorities could become violent and challenge the Russian government’s ability to maintain order. As discussed in appendix I, Chechens declared independence in 1991, but Russia did not recognize Chechnya’s claim to independence. In late 1994, the Russian government sent large military and police forces into Chechnya to disarm the Chechens and restore Russia’s authority. Chechens fought back fiercely. 
The conflict could seriously affect Russia’s transition to democracy and a market economy. For additional information about several of the other republics, see appendix I. Some observers are also concerned that Russia could use the protection of Russian minorities in the successor states as a pretext for intervention and possible territorial aggrandizement. As discussed previously, in early 1992 Russia embarked upon a serious economic reform program. During that year, the Russian President governed largely by use of special emergency powers that allowed him to enact changes by executive decree. However, his government found it increasingly difficult to implement its program because of the rise of a powerful industrial lobby that established considerable influence with the Russian parliament. The lobby included large enterprise managers, trade union leaders, and other conservatives who wanted to roll back the government’s economic reform program. The program threatened to restrict the lobby’s powerful role in the Russian economy by decentralizing economic decision-making and holding firms accountable for their actions. During 1992 the lobby demanded and received hundreds of billions of rubles’ worth of easy credits. It was aided by the Russian central bank, which controlled the printing of money and was responsible to the parliament rather than to the President. The bank’s policy of printing rubles and making large credits available to state-owned industry and farms undermined the government’s monetary and credit policy, threatening hyperinflation. As a result, the budget deficit became much larger than planned, and high inflation rates continued unabated. Toward the end of 1992, a full-blown constitutional crisis developed. In November 1992, the President told the British Parliament that a cabal of militant nationalists and former Communist officials was plotting to overthrow him and sweep aside the economic and political reforms that his government had pursued. The President vowed to do whatever was necessary to prevent their success. In December, Russia’s supreme legislative body, the Congress of People’s Deputies, convened. As the Congress got underway, estimates indicated that only a minority of its members was committed to the government’s reform program; a larger minority was opposed. A crisis occurred when the Congress voted not to confirm the Acting Prime Minister, Yegor Gaidar, as Prime Minister. He had spearheaded the government’s radical economic reform program. Following his rejection, the President declared that it had become impossible to work with the Congress. He called for a national referendum, to be held in January 1993, in which the public would be asked to choose between the Congress’ and the President’s ideas for leading Russia out of its economic and political crisis. The President said he would resign if he did not win the vote. The proposal was threatening to members of the Congress, since their terms were not due to expire until 1995 and a vote against the Congress could lead to early elections. The crisis was temporarily defused in mid-December 1992 when the President and the parliament agreed instead to hold a referendum in April 1993 to approve the basic principles for a new constitution, such as whether Russia should have a presidential or parliamentary system of government. However, in January the parliament began to try to back away from holding a referendum. In February the President and the parliament explored possible compromise power-sharing formulas that would allow them to postpone or call off the planned April referendum. 
At issue, in part, were concerns that a referendum would contribute to separatist tendencies in Russia’s regions and to political and economic instability more generally. In March the Congress met in emergency session to decide whether to pursue a referendum or approve a power-sharing arrangement between the parliament and the presidency. Instead, the Congress chose to cancel the referendum and reduce the President’s powers. It gave itself authority to suspend the President’s decrees, made it easier to remove him from office for unconstitutional conduct, and indicated it would act to further reduce the President’s powers and dismantle many of his reforms. In a televised address to the nation on March 20, 1993, President Yeltsin announced he was assuming temporary special powers to rule by decree and indicated he intended to hold a referendum on a new constitution and to secure a vote of public confidence in his leadership. Such measures were necessary, he said, to prevent restoration of Communist power. President Yeltsin also said that he had ordered the Prime Minister to speed up the economic reform process, including introduction of private land ownership, and to assume control over Russia’s central bank. A few days later, Russia’s constitutional court ruled that the President had violated the constitution by assuming special powers (even though the court had not yet received a copy of a presidential decree ordering an assumption of special powers). On March 23, the speaker of the parliament called for President Yeltsin’s impeachment. This crisis eased somewhat when the President’s decrees were published, since they backed away from the assumption of special powers. Even so, on March 28, 1993, nearly 60 percent of the members of the Russian Congress voted to impeach the President. However, the vote fell short of the required two-thirds majority. (Only a quarter of the legislators opposed the proposal to oust the President.) The Congress also rejected a proposal that called for elections for both the President and the parliament in November 1993. The Congress then passed a resolution calling for a referendum on April 25, with four questions to be put to the voters. They were whether the voters (1) had confidence in the President, (2) approved of the social and economic policies conducted by the President and the government since 1992, (3) considered it necessary to hold early elections for the presidency, and (4) considered it necessary to hold early elections for the Congress. The Congress did not approve asking the electorate, as had been agreed upon in December 1992, about the basic principles for a new constitution. The Congress stipulated that a majority of the electorate would have to approve any question put to the voters before a decision would be accepted. This standard exceeded the normal requirement for referendums, which is simply that 50 percent of the electorate vote. The higher standard was considered difficult to achieve, given an apathetic electorate. However, on April 21, 1993, Russia’s Constitutional Court ruled that the President needed to secure only a majority of votes by those actually voting on the issues of (1) confidence in the President and (2) approval of the President’s socio-economic policy. In the April 25 referendum, a majority of those voting expressed confidence in the President and his handling of the economy and supported early elections for the legislature. Nearly a majority (49.5 percent) voted for early presidential elections. 
However, the referendum did not bring an end to political gridlock or the constitutional crisis that had gripped Russia for many months. Less than a majority of the total electorate voted for early elections to the legislature and the presidency; consequently, early elections were not mandatory. Although there was a large voter turnout, key opponents of the President sought to discredit the election results before the vote counting had been completed and warned that the President might resort to unconstitutional measures to further his objectives. Between then and September 1993, relations between President Yeltsin and the parliament deteriorated further. As a result, on September 21 the President announced that he was disbanding the parliament and decreed that elections for a new legislature would be held in December. The lower tier of the parliament responded by voting in favor of impeaching the President and declaring Vice President Alexander Rutskoi Acting President. Rutskoi said he was nullifying Yeltsin’s decree and named new ministers of defense, interior, and security. However, Yeltsin won support from the existing defense, interior, and security ministries. On September 28 and 29, the Interior Ministry sealed off the White House (the building that housed the Russian parliament) with armored personnel carriers and barbed wire and ordered remaining armed parliamentary deputies to surrender their arms and leave the building. On October 3, thousands of anti-Yeltsin demonstrators overran police forces surrounding the parliament and seized control of several key facilities. There were many casualties, and the government launched a counteroffensive. The rebellion collapsed on October 4 when army troops subdued the opposition, including using armed force to retake the White House. Rutskoi and other hardliners were arrested. Following the collapse, President Yeltsin, in a televised address to the nation, warned that conditions in the nation remained dangerously unstable and said that quick action was needed to eliminate the remnants of the old system and put a new democratic structure in place. To this end, he called for (1) a new constitution, (2) elections for a new parliament in December and possibly for new local legislatures as well, and (3) unswerving commitment to continuing economic reform. Elections were held in mid-December 1993 but yielded mixed results. Voters did approve a new constitution for Russia, but progovernment parties fared poorly in the parliamentary elections. Hardline opposition parties won more than 40 percent of the popular vote and more seats than the reformist parties. Ultranationalists, Communists, and their allies won the upper hand in the Duma, the more powerful legislative chamber. The significant representation that they achieved reflected widespread economic distress and increased opposition to President Yeltsin’s policies. Nonetheless, during the first half of 1994, Russian political developments were surprisingly favorable, according to PlanEcon, as the new parliament demonstrated greater professionalism and the President, Prime Minister, and parliament achieved more stability in their interactions with one another. However, as previously discussed, in late 1994 Russian military and police forces became involved in a major conflict in Chechnya, and the conflict could adversely affect Russia’s transition to democracy. As with economic reform, progress on political reform varies widely across the other successor states. 
According to PlanEcon, the Baltic states have established viable democratic institutions capable of managing the economic transition and relations with their neighbors, especially Russia. However, all of the other states remain considerably or even far behind. For example, owing to armed conflict and civil strife, Armenia, Azerbaijan, Georgia, Moldova, and Tajikistan have not even been able to achieve political stability. As a result, according to PlanEcon, none of these countries seems close to establishing the political stability necessary to promote economic recovery and attract foreign investment. In Belarus, there was little political change until recently. According to PlanEcon, since January 1994, the chief of state has resigned under allegations of corruption, the parliament has adopted a new constitution, and the country has elected a new president. However, the new president’s positions on major issues remained unclear. In addition, parliamentary elections have not been held since 1990, and the legislature is dominated by a faction that consists largely of holdovers from the old Communist regime. In Central Asia, the political leaders of Tajikistan, Turkmenistan, and Uzbekistan have not strayed far from their Communist roots, according to PlanEcon. The President of Kyrgyzstan is characterized as somewhat of an exception. According to PlanEcon, he seems genuinely committed to change. However, he has been discouraged by corruption and resistance to reform and recently indicated that the people are not yet prepared for democracy. In Kazakhstan, the other successor state in Central Asia, policies have largely been decreed by its executive branch, headed by President Nursultan Nazarbayev. A new constitution was adopted in January 1993 that provided for a strong president (elected for 5 years), a legislature, and a quasi-independent judiciary. The country held its first parliamentary elections in March 1994, with the President’s political supporters securing a large majority. However, according to PlanEcon, neither the candidate selection process nor the voting was fair. In Ukraine, the first post-independence parliamentary elections were held in March and April 1994. Ukraine also elected its second post-independence president in July. However, according to PlanEcon, the country is still governed by a Soviet-era constitution, and a small cadre of bureaucrats, driven by greed and heavily influenced by industrial bosses, controls the levers of both legislative and executive authority. As the prior discussion indicates, the economic and political situation in the FSU has changed dramatically, and there is much uncertainty about its future. As the CIA has noted, “an empire has collapsed, the dust has barely begun to settle, and the forces that will both buffet and propel reform are epic in proportion. The elements of uncertainty and unpredictability in this part of the world are greater than at any time since the Bolshevik Revolution of 1917.” At issue is whether attempts to establish democratic and market-based systems will succeed. Also at issue is whether the political boundaries that resulted from the breakup of the Soviet empire will survive. 
According to one observer, possible outcomes for the area during the next decade include (1) the splintering of the empire into different groupings, with widely divergent foreign policies and cultures; (2) instability and possibly even civil war; (3) restoration of the Russian empire under an authoritarian, xenophobic, anti-Western regime; or (4) the emergence of truly independent democratic nations united by some form of a common market and collective security framework. Regarding the latter two possibilities, PlanEcon recently concluded that a few of the new states have already begun to trade some of the normal attributes of sovereignty for closer ties with Russia. In support of this conclusion, it noted the following developments. During 1993, Russia muscled three recalcitrant countries into joining or promising to become full members of the CIS. Georgia had resisted membership since the collapse of the Soviet Union, but in 1993 the Georgian President asked for military assistance to prevent the overthrow of his government; simultaneously, he brought Georgia into the CIS. Azerbaijan, faced with a series of military defeats by the Armenians, turned to Russia for military support and, in exchange, began to participate more actively in the CIS. Moldova’s parliament had not ratified an agreement to participate in the CIS until Russia pressured it to do so in the fall of 1993. In addition, PlanEcon said, Tajikistan and Belarus have abandoned some of the normal attributes of sovereignty for closer integration with Russia. Tajikistan’s ruling group lost the initial rounds of a civil war and needed Russian assistance to reestablish control. The current government relies on Russian economic and security assistance to stay in power and has agreed to follow Russian security and economic policies. Belarus has said it will subordinate its economic, foreign, and security policies to Russia. According to PlanEcon, Kazakhstan and Ukraine are major question marks; if they relinquish the same powers as Belarus, Russia will have reestablished the heart of the former Soviet Union. Some observers believe the successor states lack the necessary political and economic preconditions for undertaking large and instant reforms. For example, Peter Reddaway, a professor of political science and international affairs at George Washington University in Washington, D.C., has concluded that Russia’s deeply Sovietized political culture is highly unsuited to free markets, entrepreneurship, privatization, and the rule of law and will remain so for a decade or two, even with sustained western assistance. According to Reddaway, Russians have reached the limits of their stoicism after the demoralizing traumas of loss of empire, ideology, and familiar institutions, and with severely diminished real incomes. Zbigniew Brzezinski, a former national security adviser to President Carter, has also indicated that the former Soviet republics have little prospect of a successful transition to market-based democracies in the foreseeable future. According to him, the more realistic scenarios for the future of the FSU include (1) continued fragmentation of Russia itself—splitting perhaps into two or three states, with Moslem Central Asia going its own way; (2) emergence of an inward-oriented and rather authoritarian but modernizing Russian national state; or (3) establishment of an authoritarian and nationalist Russian state that seeks to recreate its imperial status. 
Brzezinski also said that the reforms demanded by the IMF and the West as part of the privatization process would force the post-Communist countries to accept prolonged, massive, and painful unemployment. This situation, he said, is politically and morally unacceptable; rather, the West should, at a minimum, help create some temporary safety nets for the victims of the transition process. In June 1992, the CIA said that it expected the reform process to continue in Russia and elsewhere but believed the process would be contentious and marked by recurring crises. The CIA said the process would probably last a decade—during which the downside risks would be enormous and the range of possible outcomes wide, including extended political deadlock and instability so serious that it could derail reform in both the economic and political spheres. In early 1993, the CIA reported that there were reasonable prospects that Russia would continue its positive internal transformation and integration into the western system of values, but inevitably with continued great travail. Moreover, it said, there remains the possibility that Russia could revert to dictatorship or disintegrate into chaos, with immediate disastrous consequences for the world. In January 1994, the U.S. Ambassador responsible for coordinating U.S. assistance to the NIS advised Congress that a titanic struggle was underway in Russia over the future of the country. He said the struggle involved a long-term process that could take a generation or more to resolve. In March 1994, the Secretary of Defense said the struggle could lead to a fully democratic and market-oriented Russia, which he characterized as the best possible outcome imaginable, or, in the worst case, to an authoritarian, militaristic, imperialistic nation hostile to the West. The latter case, he said, could bring a renewal of the old Cold War in some new form. As discussed in chapter 1, one of the factors that affects the creditworthiness of countries is their ability to attract foreign exchange to finance new investment within their borders as well as their external debt. As discussed in chapters 3 and 5, estimates are that Russia and the other new states will require billions of dollars in outside financing over the next several years to engineer a transformation from command to market economies. However, as WEFA recently noted, the amount of direct foreign investment already in Russia is meager. It noted that estimates by Russian officials vary widely, citing figures ranging between $2 billion and $7 billion. Even the latter figure, WEFA said, is small relative to the size of the Russian economy and Russia’s professed desire for foreign investment. In April 1994, a Commerce Department official told us that most observers agree that total foreign investment in Russia is not more than $4 billion. According to a recent report to Congress, total investment by U.S. companies in Russia is estimated at about $1 billion. According to PlanEcon and WEFA, the willingness of foreigners to invest in the various new states depends importantly on their assessments of each state’s (1) political stability and (2) willingness to pursue the economic reforms needed to establish viable market economies. Consequently, to the extent that there is considerable uncertainty about the future of economic reform and political stability in the new states, foreign direct investment is likely to be adversely affected. 
Severe economic problems and political and ethnic tensions make the future political and economic situation of the FSU highly uncertain. As long as such uncertainty persists, the FSU successor states will be less likely to attract needed foreign investment, thus adversely affecting their creditworthiness. Before 1992, all of the successor states had a relatively small debt compared to their economic output. Since then, all would be classified as severely indebted if held responsible for their respective shares of the FSU debt. Many of the FSU states have agreed to give up their claims on FSU assets in return for Russia’s accepting their share of the FSU debt. This arrangement would reduce their debt burden but increase Russia’s. Most of the new states have experienced severe liquidity problems. Russia’s serious and growing arrears in debt payments, its inability to meet current and future debt payments, and its need to reschedule its debts demonstrate weighty creditworthiness problems. The secondary market for trading country debt has deeply discounted FSU securities. Since Russia has been commonly perceived as having major responsibility for the FSU’s debt, the discounting shows that commercial investors do not perceive Russia as creditworthy. In addition, several major, private-sector assessments of country risk have rated Russia and all other successor states as high-risk or low on creditworthiness. The lack of creditworthiness of the successor states exposes the GSM-102 program to a high level of risk. For example, GSM-102 credit guarantees on outstanding principal extended to the FSU, Russia, and Ukraine equaled about 44 percent of the GSM-102 portfolio. Moreover, these guarantees represented nearly 60 percent of the program’s portfolio risk exposure, according to our calculations. The amount of a country’s debt burden can be an important indicator of its solvency—the ability to fulfill its obligations in the long run. All other things being equal, a country with a high debt level poses a greater risk of default than one with a low debt level. The burden that debt poses depends, in part, on its relationship to a country’s economic output and its capacity to earn foreign exchange. One method used for analyzing country debt burden was developed by the World Bank. The bank used four indicator ratios to assess whether developing countries are less, moderately, or severely indebted. The ratios are (1) debt to GNP, (2) debt to exports of goods and services, (3) debt service to exports of goods and services, and (4) interest payments to exports of goods and services. The World Bank established thresholds for each of the ratios to use in classifying whether a country has a low level of indebtedness or, alternatively, is moderately or severely indebted. For each indicator, the bank’s moderate threshold represents 60 percent of the value of the severe threshold. A country is classified as “moderately” or “severely” indebted if three of its four ratios exceed the corresponding thresholds shown in table 5.1. Debt-to-GNP Ratio. This is the broadest measure of the solvency of a country and its ability to fulfill its debt obligations. A low debt-to-GNP ratio suggests good creditworthiness, since it shows that a nation’s output is large relative to its debt obligations. Debt-to-Exports Ratio. For countries that lack foreign exchange reserves or are limited in their ability to draw upon them, exports are the principal means for obtaining the foreign exchange needed to pay off loans. 
Countries with large export revenues relative to their debt are likely to be less vulnerable to foreign exchange crises and thus are less likely to default on their foreign loans. Debt Service-to-Exports Ratio. Debt service ratios relate principal and interest payments to revenues received from the exports of goods and services. They indicate a country’s ability to service its debt from hard currency export earnings. Interest-to-Exports Ratio. The interest-to-exports ratio indicates a country’s debt burden from the perspective of interest payments alone. Creditors generally do not reschedule interest payments on outstanding loans. If a country needs to reschedule its debt, creditors will want the country to at least stay current on its interest payments. The increasing frequency with which countries with debt service problems are rescheduling their principal payments has increased the relative importance of this indicator. Prior to 1992, the FSU and its successor states were not included in the World Bank’s debtor reporting system. Therefore, we used historical and forecast data for the FSU and the successor states to calculate their debt burden ratios and thus classify their overall debt burden. Economic forecasts for the former Soviet Union and its successor states are difficult to make because of major uncertainties associated with their transition from command to market economies. These uncertainties include the form and pace of economic restructuring that will be attempted, the amount of external assistance they might receive, and the extent to which they will cooperate with each other. In addition, official statistics of the FSU and the new states are difficult to obtain and do not adequately capture the growing nonstate sector. Consequently, although we have attempted to minimize these data problems by using relative indicators such as ratios instead of absolute values, the forecast, as well as our analysis, should be used with caution. Data were obtained from the WEFA Group. WEFA did not provide disaggregated data for the external debt of the successor states. We estimated the states’ individual debts by allocating total FSU debt according to a fall 1991 agreement among the successor states that assigned preliminary debt shares to each of the states. Other economic variables necessary for calculating the ratios were obtained from state-level data as reported by WEFA. Although WEFA stopped making detailed forecasts for the FSU as an entity in October 1992, it continued publishing forecasts for each former republic. Its forecasts for the former republics were denominated in rubles. When there was a need to convert them to dollar values, we used PlanEcon data on historical or forecast market exchange rates to convert the republic data to their dollar equivalents (see below). The extraordinarily severe depreciation of the ruble relative to the dollar, however, may cause an undervaluation of the data for the former republics. We performed our analysis using historical and forecast data produced by WEFA in January 1993. We were not able to obtain compatible, more recent data to update the forecast. Nonetheless, we believe the underlying economic conditions for the forecasts have not been significantly altered for most of the successor states. The major problems the states faced then still confront them, including large government budget deficits, decreasing economic output, and high levels of inflation. 
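Because the World Bank classification rule described above is mechanical, it can be made concrete with a short sketch. The following Python code, offered only as an illustration, encodes the three-of-four rule and the 60-percent relationship between the moderate and severe thresholds. The severe threshold values are placeholders standing in for table 5.1, which is not reproduced here, and the debt share and economic figures for the hypothetical state are invented.

```python
# Illustrative sketch of the World Bank's old debt-burden classification.
# The severe thresholds below are placeholders standing in for table 5.1;
# moderate thresholds are 60 percent of the severe values.

SEVERE_THRESHOLDS = {
    "debt_to_gnp": 0.50,
    "debt_to_exports": 2.75,
    "debt_service_to_exports": 0.30,
    "interest_to_exports": 0.20,
}

def classify_debt_burden(ratios):
    """Return 'severe', 'moderate', or 'low' for a dict of the four ratios."""
    for label, factor in (("severe", 1.0), ("moderate", 0.60)):
        exceeded = sum(
            1 for name, severe in SEVERE_THRESHOLDS.items()
            if ratios[name] > severe * factor
        )
        if exceeded >= 3:  # three of the four ratios must exceed the thresholds
            return label
    return "low"

# Hypothetical successor state: its debt is its preliminary share of total
# FSU debt under the fall 1991 agreement (all figures invented).
fsu_debt, share = 81.0, 0.61          # $ billions; illustrative share
state = {"debt": fsu_debt * share, "gnp": 90.0,
         "exports": 40.0, "debt_service": 14.0, "interest": 9.0}

ratios = {
    "debt_to_gnp": state["debt"] / state["gnp"],
    "debt_to_exports": state["debt"] / state["exports"],
    "debt_service_to_exports": state["debt_service"] / state["exports"],
    "interest_to_exports": state["interest"] / state["exports"],
}
print(classify_debt_burden(ratios))   # -> 'severe' for these figures
```

For the invented figures shown, three of the four ratios (debt to GNP, debt service to exports, and interest to exports) exceed their severe thresholds, so the hypothetical state is classified as severely indebted even though its debt-to-exports ratio is not.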
Table 5.2 summarizes the results of our analysis, using the World Bank’s old methodology, in terms of whether and when each country is considered to have low, moderate, or severe indebtedness during 1988 through 1997. As table 5.2 shows, before its dissolution in late 1991, the Soviet Union was a less-indebted country. If the outstanding 1988 through 1991 FSU debt were to be distributed among the republics according to the formula agreed upon in 1991, Belarus and Russia would be classified as less-indebted republics. In contrast, Armenia, Azerbaijan, Estonia, Georgia, Kyrgyzstan, Latvia, and Moldova would fall into the severely indebted category. For 1992, and for the 1993 through 1997 period, all of the states are classified as severely indebted. As indicated earlier, we arrived at the debt classifications in table 5.2 by allocating the total FSU debt among the various republics. As discussed in chapter 3, many of the former republics have now reached agreement with Russia to give up their claims on FSU assets in exchange for Russia’s assuming their shares of FSU debt. Successor states that do this will have reduced their debt burden and thus may no longer be classified in the same category for the forecast years. At the same time, Russia will have increased its debt burden. While a number of successor states have benefited from Russia’s assuming responsibility for all of the FSU’s debt, they have been hurt by a reduction in transfers received from Russia. According to the IMF, most FSU countries have experienced a steep decline in large explicit and implicit transfers, including fiscal transfers from the former Soviet Union budget, which disappeared in 1992, and the subsidy implicit in the underpricing of energy and raw material exports (relative to world prices). This subsidy was reduced significantly as interstate prices for these goods were raised. The IMF estimated that between 1992 and 1994, the loss of official transfers from Russia and the rise in the import bill—on the assumption that energy and materials prices rise to world levels—may cost the other FSU countries $15 billion, or about 15 percent of their estimated 1994 GDP (at market exchange rates). In addition, a number of successor states have fallen into serious arrears as a result of trade deficits with other FSU countries. For example, PlanEcon estimated that Ukraine had a 1993 trade deficit with CIS countries of about $2.5 billion and that it owed Russia more than $1 billion for energy resources while having substantial arrears with Turkmenistan for gas deliveries. As a result, creditor states were refusing to deliver new supplies until old arrears were paid. Turkmenistan has cut back on gas deliveries to Georgia because of the latter’s inability to pay for supplies. Armenia and Moldova have had large trade deficits with other CIS countries, especially Russia and Turkmenistan. Uzbekistan reduced gas deliveries to Kyrgyzstan and threatened to cut off all supplies if the latter did not pay its debts. Tajikistan has accumulated large deficits with Russia. “Liquidity,” as used in this report, refers to a country’s ability to secure foreign exchange over the short- and medium-term future sufficient to meet its debt service payments. 
We used two different methods to measure the liquidity of the successor states: (1) we examined the gross financial requirements of a country, defined as the amount of financial resources needed to meet its debt service obligations and international payments, including imports of goods and services, and (2) we constructed liquidity ratios, paralleling the World Bank’s old method for measuring debt burden, to assess the liquidity of the successor states. Table 5.3 provides recent historical and forecast data for the hard currency financial requirements of the former Soviet Union. Figures for 1992 and the forecast years of 1993 through 1997 treat the 15 independent states as a single aggregate. The data are from the WEFA Group. WEFA’s forecast indicates that, as an aggregate, the FSU will require major assistance to meet its gross financial needs during the next several years. As the table shows, WEFA forecast a negative current account balance—a nation’s trade in goods and services and net transfers—for each year during the 1993 through 1997 period. Net capital flows—direct and portfolio investments and changes in gold and foreign exchange reserves (not shown in table 5.3)—were estimated to be positive in each of the years but were not large enough to offset the current account deficit. Consequently, net debt, i.e., debt in excess of reserves (also not shown in table 5.3), is expected to grow throughout the forecast period. The net new credit (debt) required to finance the current account deficit and net capital flows is expected to decrease from a high of $11.3 billion in 1992 to $1.5 billion in 1996. However, when combined with the financing required to meet scheduled repayments on short-term and long-term debt, the financial requirements of the FSU and its successor states are considerable—ranging between $22 billion and $30 billion per year. Unless the successor states of the FSU receive substantial debt relief, they would require, on average, an estimated $24 billion annually of external financing between 1993 and 1997. In contrast, according to the IMF, the annual average gross external financing for developing countries as a whole and for the former centrally planned economies as a separate grouping for 1990 to 1993 was estimated at $213 billion and $45.6 billion, respectively. Thus, the FSU annual gross financial requirement represents approximately 11 percent of the requirement for all developing countries and 53 percent of the gross financial requirement for all of the formerly centrally planned economies. It is doubtful that such a large financial inflow can be attained from financial markets if they are not confident that the successor states represent a growing economy and a stable investment climate. We also provide in table 5.4 the distribution of the gross financial requirements among the successor states as well as the average ratio of the gross financial requirements to the gross domestic product for each republic from 1993 to 1997. The financial requirements of Russia and Ukraine combined account for about 85 percent of the total requirements for all successor states. However, when the gross financial requirements are viewed relative to the economic resources of the FSU and its successor states as measured by their GDP, the requirements of Russia and Ukraine represent only 27 percent of their respective gross domestic products. 
In contrast, the gross financial requirement for Estonia, Kyrgyzstan, Tajikistan, and Turkmenistan is less than $200 million each but represents several hundred percent of their respective gross domestic products. We used four indicator ratios to measure liquidity: (1) foreign exchange reserves to imports, (2) current account balance to GNP, (3) government budget balance to GNP, and (4) short-term debt (credit) to imports. These variables were selected because they are used by the banking industry to determine a country’s general ability to service its debt in the short term. Reserves-to-Imports Ratio. This ratio describes a country’s stock of foreign exchange relative to its annual import levels. As such, it measures the extent to which a country could pay for its imports out of reserves alone if that were necessary. Moreover, a country with large reserves relative to imports is likely to have increased flexibility for using reserves to help service its debt, at least over the short run. Current Account Balance-to-GNP Ratio. The current account balance measures a country’s trade in goods and services and financial flows related to interest and dividends and transfer payments. If a country has a current account deficit, it is not taking in sufficient foreign exchange from its exports of goods and services and financial earnings inflows to offset the costs of its imports and financial payments outflows. A current account deficit is roughly equal to the amount of new financing required to meet international purchases and transfers. The lower a country’s current account deficit relative to its GNP, the greater its potential for servicing its debt and the lower the probability of default. Government Budget Balance-to-GNP Ratio. Countries with a surplus of central government revenues relative to expenditures are less likely to face short-term liquidity problems. Countries in surplus may be able to dedicate some of the surplus to paying off foreign debt if it is in the form of hard currency. However, countries that have a budget deficit will need additional domestic or foreign financing if they want to use government monies to help pay off debt. Short-Term Debt (Credit)-to-Imports Ratio. As short-term debt increases relative to medium- and long-term debt, a country will require more foreign exchange over the short term to meet its near-term payments. When not paid for in cash, imports represent the amount of revolving trade credits that have to be maintained in good order. The ratio of short-term debt to imports is a measure of the short-term cash flow or immediate demands on a country’s foreign exchange. We hypothesize that as short-term debt to imports increases, a country’s creditworthiness is more likely to decrease. We constructed our own thresholds for these indicators by using IMF and World Bank data on net debtor developing countries for 1986-90. In October 1992, the IMF reported that 72 out of 122 net debtor countries had experienced recent debt service difficulties because they had incurred external payments arrears or entered into official or commercial bank debt-rescheduling agreements during 1986 through 1990. For each of our liquidity measures, we calculated the liquidity ratio for each of 114 countries for each year from 1986 through 1990. We then calculated the average ratio for each country over the 5-year period. 
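The construction just described can be made concrete with a short sketch. The following Python code, using invented data, computes the four liquidity ratios for each country and year, averages them over the 5-year period, and then draws an illustrative threshold at the 60th percentile of the country averages, as the next paragraph explains. The handling of measures for which low rather than high values signal illiquidity, such as reserves to imports, is our assumption and is not spelled out in the report.

```python
import numpy as np

# Illustrative sketch of the liquidity-ratio construction described above,
# using invented data for the 114 countries over 1986-90.
rng = np.random.default_rng(0)
n_countries, n_years = 114, 5

# Hypothetical raw series: one row per country, one column per year.
reserves   = rng.uniform(0.5, 20.0, (n_countries, n_years))   # $ billions
imports    = rng.uniform(5.0, 50.0, (n_countries, n_years))
cur_def    = rng.uniform(0.0, 8.0, (n_countries, n_years))    # current account deficit
budget_def = rng.uniform(0.0, 10.0, (n_countries, n_years))   # government deficit
short_debt = rng.uniform(0.0, 25.0, (n_countries, n_years))
gnp        = rng.uniform(20.0, 400.0, (n_countries, n_years))

ratios = {
    "reserves_to_imports":    reserves / imports,    # low values signal illiquidity
    "current_deficit_to_gnp": cur_def / gnp,
    "budget_deficit_to_gnp":  budget_def / gnp,
    "short_debt_to_imports":  short_debt / imports,
}

# Average each country's ratio over the 5-year period.
averages = {name: r.mean(axis=1) for name, r in ratios.items()}

# Severe threshold at the observation marking the 60th percentile of the
# country averages; for reserves to imports, where low values are the
# problem, we assume the threshold is taken from the other tail.
severe = {
    name: np.percentile(avg, 40 if name == "reserves_to_imports" else 60)
    for name, avg in averages.items()
}
# Moderate thresholds are 60 percent of the severe values, per the method
# described in the text; the same directionality caveat applies.
moderate = {name: 0.60 * v for name, v in severe.items()}
print(severe, moderate, sep="\n")
```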
Since roughly 60 percent of the IMF’s list of net debtor countries (72 of 122) were designated as having a debt service problem, we used the observation that marks the 60th percentile on each liquidity measure as a threshold for characterizing a severe liquidity problem. We believe this is a reasonable characterization, since arrears and rescheduling are indicative of serious liquidity problems. We followed the World Bank’s old method of designating “moderate” thresholds equal to 60 percent of the value of “severe” thresholds. Also similar to the bank’s approach, we designated a country as having an overall moderate or severe liquidity problem only if three or more of its liquidity ratios equaled or exceeded the moderate or severe threshold values, respectively. (See the prior discussion of debt burden.) Table 5.5 presents our country liquidity thresholds for the ratios of reserves to imports, current account deficit to GNP, government deficit to GNP, and short-term debt to imports. We used data from the WEFA Group to calculate the liquidity ratios for the successor states. However, data were not available for republic-level foreign currency reserves. To estimate the foreign currency reserves for each republic, we defined a relationship between the FSU’s foreign currency reserves and its imports and exports. We then estimated each republic’s reserves using its export and import data so that the level of reserves is proportional to its external trade balance. In addition, we used data on GDP in place of GNP for the two ratios previously discussed that include the GNP variable. As with the debt burden analysis, we used historical and forecast data provided by WEFA in January 1993. Table 5.6 summarizes the results of our analysis in terms of whether and when each country is considered to have low, moderate, or severe liquidity problems. As table 5.6 shows, for 1988 through 1991, the FSU’s liquidity problems would have been classified in the severe category. However, if the outstanding debt at that time were distributed among the various republics, three—Kazakhstan, Tajikistan, and Uzbekistan—would have been classified as having moderate liquidity problems. The rest would have remained in the severe category. For 1992, 11 of the 15 successor states were estimated to have severe liquidity problems. The other four states were estimated to have moderate problems. For the forecast period, 1993 through 1997, 8 of the 15 states are expected to experience severe liquidity problems. The other seven are estimated to have moderate problems. During both periods, Russia and the 15 successor states treated as a single entity are estimated to have severe liquidity problems. Liquidity is a more appropriate measure of creditworthiness than debt burden, since liquidity more directly measures the ability to generate foreign currency for servicing short- and medium-term debt. The liquidity results indicate that most of the successor states, including Russia, are high-risk countries because of their severe liquidity problems. The amount of arrears, the need for debt relief, and the specific types of IMF loan arrangements a country has accepted are major indicators of a lack of creditworthiness. When countries with liquidity problems cannot meet all of their immediate debt obligations, they fall into arrears. In some cases, arrears reflect an unwillingness to service debt. In either case, if arrears continue and the situation is not remedied, the country is likely to be considered a poor credit risk. 
If arrears persist or become prolonged, a country may reach a point where it concludes it cannot meet its current and future debt payments unless it obtains debt relief. Debt relief is obtained by rescheduling outstanding debt or by debt forgiveness. Debt rescheduling alters the terms and maturity of outstanding debt. Debt relief is typically undertaken only after payments have been missed or when default is imminent or has already occurred. To initiate a debt renegotiation, official creditors must be convinced that (1) the debtor country will be unable to meet its external payments obligations unless it receives the relief and (2) the debtor will take necessary steps to eliminate the causes of its payment difficulties and to achieve a lasting improvement in its external payments position. For countries that are members of the IMF, creditors rely on the IMF to help the debtor country design appropriate adjustment measures. Creditors have also required that an “upper credit tranche” arrangement with the IMF be in place before the start of debt renegotiations. Many countries that have accepted IMF arrangements are also countries that have rescheduled their debts. As previously discussed, the Soviet Union was in substantial arrears by the end of 1990 (see ch. 3). In the fall of 1991, international creditors agreed to defer a substantial amount of the country’s principal payments that were due in 1992. Since the dissolution of the Soviet Union in December 1991, only Russia has been making payments on Soviet debt. During 1992, Russia’s arrears worsened, and Russian officials requested debt relief. In April 1993, official creditors agreed to reschedule $15 billion in debt that was already in arrears or scheduled for payment in 1993. By the end of 1993, a number of the successor states had reached agreement with Russia to exchange their responsibility for repaying a portion of the FSU debt in return for dropping their claims on a share of the FSU’s assets held by Russia. In June 1994, Russia’s official creditors agreed to reschedule approximately another $7 billion of FSU debt already due or coming due during 1994—indicating that Russia was unable to fully service the debt in spite of the 1993 rescheduling. Meanwhile, Russia had still not reached agreement with bank creditors on rescheduling the remaining FSU commercial debt. This issue had been outstanding since the dissolution of the Soviet Union in December 1991. As discussed in chapter 3, the commercial debt at the end of July 1993 was estimated by one source at $28.5 billion. According to an April 1994 IMF assessment, it is clear that Russia will require a further comprehensive debt-relief package to normalize relations with external creditors. And, the IMF said, Russia and the other FSU countries will require external financing to help them consolidate large budget deficits in a noninflationary manner and to finance social safety nets. But, the IMF warned, official and private external financing will be forthcoming and helpful to Russia and the other states only in the context of strong and sustained stabilization and reform programs. (See also ch. 4.) Otherwise, foreign lending will tend to increase capital flight and external debt and further delay the development of an environment in which a strong private sector can emerge. As discussed in chapter 2, Ukraine began defaulting on its GSM-102 loan repayments to the United States in the spring of 1994. 
As of August 17, 1994, defaults totaled about $31.1 million, and CCC had paid $21.6 million on claims made by lenders. The successor states’ prolonged arrears, the repeated need to reschedule debt, and the failure to reach agreement on rescheduling FSU commercial debt all indicate a lack of creditworthiness. Successor states that have agreed with Russia to exchange their responsibility for the FSU debt for forgoing claims on FSU assets cannot be faulted for subsequent arrears that arise on FSU debt or a need to further reschedule FSU debt. However, as the recent IMF assessment indicates, other successor countries will still require external financing to help them consolidate large budget deficits in a noninflationary manner and to finance social safety nets. As discussed in chapter 1, USDA’s Trade and Economic Information Division is responsible for analyzing the ability and willingness of countries that have requested GSM-102 export credit guarantees to meet their current and future external debts, including potential GSM debt. As reported in chapter 2, TEID judged FSU and Russian debt as high risk between December 1990 and September 1992, when USDA committed to making available more than $5 billion in export credit guarantees to these states. Table 5.7 shows that TEID grades countries on a scale from A to F, and risk is evaluated primarily in terms of whether a country is currently involved in debt rescheduling and whether it is likely to be involved in rescheduling in the future. For example, a country is classified as “high risk” or a “D” if there is a greater than 50-percent chance that it will reschedule its old debt during the next 3 years. (See table 5.7.) TEID considers a country’s risk to be “unacceptable” or an “F” if the state is both currently involved in rescheduling old debt and likely to reschedule new debt within the next 3 years. On the basis of the analyses presented in this report, the terms of the April 1993 and June 1994 debt-rescheduling agreements, and related developments, we believe that additional debt rescheduling for Russia during the next 3 years is a real possibility. As discussed in chapter 4, Russia experienced a constitutional crisis during 1993 that was based importantly on disagreement between the parliament and the President over the pace and extent of economic reform. Although voters approved a new constitution and elected a new parliament in December 1993, it remains to be seen whether the executive and legislative branches will work well together. As presented in this chapter, Russia is shown to have both a severe debt burden and severe liquidity problems. Although the April 1993 debt rescheduling alleviated Russia’s liquidity problems in 1993, it has continued to have serious problems in 1994. In June, it entered into another substantial rescheduling agreement with its official creditors. As table 5.3 showed, the FSU gross financial requirements could exceed $20 billion per year between 1994 and 1997. Given these considerations, we believe that Russia would continue to be classified as at least high risk under the TEID criteria displayed in table 5.7. In fact, in May 1994, USDA officials advised us that TEID had assessed Russia as not creditworthy for more than a year. Other countries rated as not creditworthy by TEID included Armenia, Azerbaijan, Belarus, Georgia, Kyrgyzstan, Moldova, Tajikistan, and Ukraine. According to the officials, TEID has rated the Baltic states, Kazakhstan, Turkmenistan, and Uzbekistan as creditworthy. 
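Only the “D” and “F” criteria from TEID’s scale are spelled out in the text above. The minimal Python sketch below encodes just those two stated rules; it should not be read as TEID’s full A-to-F grading schedule, which appears in table 5.7 and is not reproduced here.

```python
# Minimal sketch of the two TEID grading rules stated in the text.
# Table 5.7's full A-to-F schedule is not reproduced, so grades other
# than "D" and "F" are out of scope here.

def teid_grade(rescheduling_old_debt_now: bool,
               p_reschedule_old_debt_3yr: float,
               likely_to_reschedule_new_debt_3yr: bool) -> str:
    """Return 'F', 'D', or 'C or better' per the two stated criteria."""
    # "Unacceptable" (F): currently rescheduling old debt AND likely
    # to reschedule new debt within the next 3 years.
    if rescheduling_old_debt_now and likely_to_reschedule_new_debt_3yr:
        return "F"
    # "High risk" (D): greater than 50-percent chance of rescheduling
    # old debt during the next 3 years.
    if p_reschedule_old_debt_3yr > 0.50:
        return "D"
    return "C or better"   # remaining grades are not specified in the text

# Russia in mid-1994: already rescheduling (April 1993, June 1994) and,
# on the analysis above, likely to need further rescheduling. The 0.9
# probability is an invented illustration, not an estimate.
print(teid_grade(True, 0.9, True))   # -> 'F'
```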
The secondary market for trading developing countries’ loans and bonds is another measure that can be used to assess creditworthiness. Countries whose debt trades close to the face value of the loan or bond are considered quite creditworthy, whereas those whose debt is traded at a deep discount are not. Some observers have criticized the use of secondary market prices as a measure of creditworthiness. They assert that the market exhibits abrupt price movements regardless of changes in the underlying economic conditions of the debtor countries. There have also been allegations that publicly reported secondary prices and actual transaction prices are different. Additionally, not all secondary market price movements can be linked to economic performance, as some price movements reflect only a country’s willingness to pay back its debt. Moreover, the ability to service debt is dependent, in part, on the economic conditions of developed countries. Therefore, one might expect that the secondary market would be correlated with global economic conditions. However, there is little correlation between secondary price movements and variations in measures of global economic aggregates, such as industrial countries’ growth. On the other hand, we believe the secondary market is the most reliable source of risk-adjusted valuation of debt that can be used to convert judgmental perceptions of risk into a measurable amount in dollars and cents. Secondary market prices for countries with strong growth and lower levels of external debt have generally been found to be higher than those for countries with severe economic and debt problems, in part because investors associate strong growth and low debt with improved creditworthiness. (See ch. 6 for further discussion of why we believe the secondary market is a useful measure.) A secondary market has developed for FSU loans and bonds. According to a March 1993 trade publication, the FSU/Russian debt market had been very illiquid. The publication reported that transactions on FSU/Russian debt in the secondary market were very structured and often took a few months. Very often transactions were in the form of debt-for-debt swaps, and each transaction was dependent on its own specifications. As a result, the publication said, FSU/Russian debt had been one of the most illiquid papers on the secondary market, and total market turnover for the sovereign debt amounted to at most $200 million in 1992. However, according to more recent information, trading of FSU/Russian debt was considerably higher in 1992 (i.e., $678 million) and increased dramatically in 1993, to $24.7 billion. According to another trade publication, Soviet debt started trading in the secondary market in about 1990 and during 1991 traded at 55 to 60 cents on the dollar. By spring 1992, it said, prices had fallen to 30 to 35 cents on the dollar. According to Chemical Bank data, secondary market prices for FSU loans traded for about 17 to 21 cents on the dollar between July 1992 and February 1993 and then fell to a low of 10 to 11 cents on the dollar in March 1993. Loan prices gradually increased to reach a high of 55 cents on the dollar during part of December 1993. Between then and March 1994, prices again declined, reaching a low of 28 cents on the dollar on March 21, 1994. Vnesheconombank (VEB) began issuing Eurobonds in the late 1980s. By March 1993, there were seven issues, amounting to a total value of $1.7 billion.
VEB made servicing of these bonds a priority, continuing to make payments despite defaults on its debt service payments for loans. The bonds began to be quoted, at a discount, at the beginning of 1991 and have carried a higher price than the loans ever since. In mid-1991 the bonds approached 55 to 60 cents on the dollar. By spring 1992, they had fallen as low as 44 cents. In June 1993, they were trading at 60 to 65 cents on the dollar. We believe the secondary market’s valuation of FSU debt can be considered to represent market participants’ judgment about Russian creditworthiness. (As previously discussed, Russia has assumed responsibility for making payments on FSU loans and bonds.) Figure 5.1 provides secondary market prices of FSU loans for July 1992 to March 1994. The low prices indicate that the market finds Russia quite uncreditworthy. Following the rapid growth of developing countries’ debt in the early 1970s and an increasing number of debt reschedulings in the 1980s, assessing the risk posed by cross-border lending and investments grew in importance. Therefore, the international financial community developed country risk assessments to evaluate the risk of loss from the future actions of debtors. Country risk analysis is based on a holistic approach. It encompasses social and economic risk, as well as “sovereign” (i.e., political) risk. The latter refers to exposure arising from events that are substantially under the control of a foreign government rather than a country’s private sector. A number of private organizations rate countries on the degree of risk associated with cross-border financial transactions. Lenders and investors can use the ratings in deciding whether to lend to or invest in particular countries. We analyzed the ratings of three publication services: Euromoney, Institutional Investor, and International Country Risk Guide (ICRG). Each assigns a country risk rating ranging between 0 for least creditworthy and 100 for most creditworthy. Each rating service uses a unique methodology for assessing country risk. Nevertheless, there is considerable overlap in the factors each considers. Euromoney, a leading international publication, assigns credit ratings as a weighted average of market indicators covering access to bond markets and trade finance, credit indicators covering payment records and rescheduling difficulties, and analytical indicators incorporating economic performance forecasts and political environments. In April 1992, a Euromoney analysis concluded that the republics of the FSU were not in a position to repay the full amount of their debts at that time and that a debt restructuring package seemed inevitable. At the same time, the analysis said it was generally accepted that the former Soviet republics as a whole were potentially wealthy enough to meet their obligations over time and that debts should be fully serviced and paid. As shown in table 5.8, in September 1992, Euromoney rated Russia and several other successor states in the range of 14.6 (Moldova) to 24.2 (Estonia) out of a possible 100. Relative to the 169 countries rated, they fell into the bottom quartile. Euromoney concluded that access to bank lending by any of the successor states is “impossible” and that their access to international bond and syndicated loan markets is “nearly impossible.” Institutional Investor surveys leading international banks to rate the creditworthiness of sovereign states.
Each bank provides its own rating, and Institutional Investor weights the responses using a formula that gives more importance to responses from banks with greater worldwide exposure and more sophisticated country analysis systems. In March 1992, following the demise of the Soviet Union, Institutional Investor made its last rating for that entity. The score, 29.7, represented a staggering decline of 34.6 points over the previous 2-1/2 years. That score placed the Soviet Union 58th among the 113 countries rated by Institutional Investor. In September 1992, Institutional Investor rated Russia 23.6, Belarus 21.1, Ukraine 21.1, Kazakhstan 18.7, and Uzbekistan 16.6. These ratings placed them 73rd, 78th, 79th, 90th, and 98th, respectively, among the 126 countries rated. With the exception of the Baltic states, individual scores for the other republics were not reported. The ratings for the Baltics were lower than their March 1992 ratings. Estonia was rated 22.1, Latvia 21.4, and Lithuania 20.7. ICRG provides a detailed country-by-country assessment of the risk of operating, investing in, or lending to particular countries using a three-part system that evaluates political, financial, and economic risk. It assigns an overall score to each country by using a weighting system that allocates 50 percent of the score to political risk, 25 percent to financial risk, and 25 percent to economic risk. According to ICRG, its country scores can be interpreted as shown in table 5.9. In August 1992, ICRG rated Russia 52.5, putting it slightly above countries it considers very high risk. ICRG did not provide ratings for any other former republic. (See table 5.8.) As table 5.8 shows, the scores of the three rating services appear to be generally consistent with one another in the way they rank the creditworthiness of countries. Not surprisingly, though, there are some differences. The scores of Euromoney and Institutional Investor most closely approximate one another. ICRG scores are generally considerably higher than those of the other two services except for countries that are rated as high in creditworthiness. Using “principal components analysis,” a statistical method for effectively summarizing data from several sources, we analyzed whether the three rating services are measuring the same phenomenon. The analysis indicated that, overall, the ratings do measure a common factor. The principal components method was then used to generate a combined, overall rating for each of the countries. To the extent that the rating services are measuring different yet important aspects of creditworthiness and to the extent that bias or poor information may affect their ratings of some countries, we believe our combined ratings provide a better measure of the relative creditworthiness of countries. As table 5.8 shows, the combined creditworthiness ratings for the successor states range from a low of 13.6 points for Moldova to a high of 31.8 points for Russia. In terms of rankings, Moldova ranked 158th and Russia 100th out of the 172 countries rated. The previous analysis was prepared using country risk ratings from the August and September 1992 period. Table 5.10 provides more recent information on the FSU successor states for two of the rating services, Euromoney and Institutional Investor. The table shows that both services ranked nearly all of the countries lower on creditworthiness in September 1993 than in September 1992.
In March 1994, all of the countries rated by Institutional Investor, including the Baltic states, were ranked lower than they had been in September 1993. By contrast, in March 1994, 8 of the 15 countries rated by Euromoney, including the Baltic states, improved on their rankings relative to September 1992 and September 1993. Even so, four of those countries (Kyrgyzstan, Moldova, Tajikistan, and Uzbekistan) were still ranked among the bottom quartile of all countries rated; the other four (the three Baltic states and Turkmenistan) were ranked close to or among the bottom one-third of all countries rated. For both services, the 1994 rankings of Russia and Ukraine declined further compared to September 1993. As of August 17, 1994, the extension of GSM-102 credit guarantees for exports to the FSU and to Russia had created a contingent liability to the U.S. government of about $2.9 billion for outstanding principal payments. That amount includes the large reschedulings that occurred in September 1993 and early June 1994. We used country risk ratings and secondary market prices to estimate the risk of default and, in turn, the expected cost of the GSM-102 loans to the FSU and Russia as of June 1994. Table 5.11 provides Euromoney country risk ratings for the FSU successor states for three time periods between September 1992 and March 1994 and the average of the three ratings. As previously discussed, countries were rated by Euromoney on a scale ranging between 0 and 100; the higher the score, the better the country’s creditworthiness. We used the country risk ratings to estimate an implied risk of each country’s defaulting on its external debt. The results are presented in table 5.11. As with the country risk ratings, we also calculated the average risk of default for the three time periods. As table 5.11 shows, the average country risk ratings for the FSU successor states ranged from a low of 17.2 for Armenia and Azerbaijan to a high of 28.9 for Estonia. The average implied risk of default for the countries ranged from a low of 71 percent for Estonia to a high of 83 percent for Armenia and Azerbaijan. Russia’s country risk ratings ranged from 21.8 in September 1992, to 24.7 in September 1993, to 26.0 in March 1994. Its average rating was 24.2. The March 1994 implied risk of Russia’s defaulting was 74 percent, and its average risk of default for the September 1992 to March 1994 period was 76 percent. As previously stated, Russia’s contingent liability for GSM-102 debt in August 1994 was about $2.9 billion. Using the March 1994 implied risk of default score for Russia, we calculated that $2.1 billion in outstanding GSM-102 guaranteed principal repayments was at risk of default. If one uses the average risk of default score, nearly $2.2 billion was at risk of default. The average price of FSU loans in the secondary market in March 1994 was 32 cents on the dollar. This price implies a 68-percent risk of default at that time. According to data provided to us by Chemical Bank, between July 1992 and March 1994, the price of FSU loans in the secondary market averaged 26.8 cents on the dollar, implying that financial markets expected about a 73-percent discount on repayment of outstanding FSU loans over that time period. These default risk rates are quite similar to those indicated by the Euromoney country risk ratings previously discussed.
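The chapter does not spell out the formula linking ratings and prices to implied default risk, but the reported figures are consistent with two simple mappings: reading the 0-to-100 country risk rating as a percentage likelihood of repayment (implied risk equals 100 minus the rating) and reading a secondary market price of p cents on the dollar as an implied default risk of 100 minus p percent. The Python sketch below reproduces the dollar amounts in this chapter under those assumed mappings; the mappings themselves are our inference, not a stated GAO method.

    # Illustrative reconstruction of the principal-at-risk estimates.
    # The rating-to-risk and price-to-risk mappings are assumptions
    # inferred from the reported figures.

    EXPOSURE = 2.9  # outstanding GSM-102 principal, in billions (Aug. 1994)

    def risk_from_rating(rating):
        # Assumes the 0-100 rating is a percentage likelihood of repayment.
        return (100.0 - rating) / 100.0

    def risk_from_price(cents_on_dollar):
        # Assumes the price is the market's expected repayment per dollar.
        return (100.0 - cents_on_dollar) / 100.0

    # Euromoney ratings for Russia: Sept. 1992, Sept. 1993, March 1994.
    ratings = [21.8, 24.7, 26.0]
    cases = [("March 1994 rating", risk_from_rating(ratings[-1])),
             ("average rating", risk_from_rating(sum(ratings) / 3)),
             ("March 1994 price (32 cents)", risk_from_price(32)),
             ("average price (26.8 cents)", risk_from_price(26.8))]
    for label, risk in cases:
        print(f"{label}: {risk:.0%} risk -> ${risk * EXPOSURE:.1f} billion at risk")
    # March 1994 rating:           74% -> $2.1 billion
    # average rating:              76% -> $2.2 billion
    # March 1994 price (32 cents): 68% -> $2.0 billion
    # average price (26.8 cents):  73% -> $2.1 billion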
The March 1994 risk of default implied by the secondary market price suggests that $2 billion of the $2.9 billion in GSM-102 principal is at risk of default. The average risk of default implied by secondary market prices suggests that about $2.1 billion is at risk of default. These estimates do not take account of possible savings in the cost of commodity support programs that may result when the GSM-102 program is used to promote increased exports of U.S. commodities. However, as discussed in chapter 2, whether and to what extent lower costs will result from the GSM-102 program depends importantly on the availability of alternative markets for the exports in question and how world market prices are affected by actions taken by other exporter nations in the absence of U.S. GSM program benefits for the FSU and its successor states. According to an ERS official, while the potential CCC liability on GSM loans is great, one should consider Russia’s self-interest in meeting its GSM-102 repayment responsibilities. If Russia does not meet its obligations, the official said, its ability to obtain future credit from the United States and other potential creditors would be impaired. In addition, the official noted that if Russia repays the credit at an appropriately higher interest rate for rescheduled debt, U.S. taxpayers would incur no long-term cost under the program. Hence, the official said, rather than suffering a loss, U.S. taxpayers may earn revenue from rescheduled loans. In commenting on a draft of this report, USDA said GAO should examine the terms of rescheduled debt with the FSU. USDA said that taxpayers do not lose money as long as the interest charge exceeds the opportunity cost of funds to U.S. taxpayers and as long as principal is repaid. We agree that Russia will have greater difficulty in obtaining future credit if it does not meet its GSM-102 repayment obligations. However, whether and to what extent Russia will do so is the question. We have provided estimates of the likelihood of its repaying based on country risk ratings and the secondary market’s valuation of FSU loans. It is conceivable that at some point in the future Russia may seek and obtain forgiveness for a substantial part of its GSM loan obligations. In the meantime, as of mid-August 1994, the United States had already paid out $1.4 billion to cover claims on GSM-102 defaults for FSU loans and was expecting to pay out another $429 million in claims by the end of 1994 as part of the June 4, 1994, rescheduling agreement. In its comments on our draft report, USDA disagreed with our use of the secondary market to estimate the risk of default on GSM-102 loans. (See ch. 6.) However, in its comments, USDA itself questioned whether Russia had sufficient self-interest to repay GSM-102 debt. USDA said that Russia’s self-interest had been overtaken by recent events, including lower import demand, large infusions of food aid, and the fact that the Russians had not requested new credit and did not seem very interested in staying current on GSM-102 debt payments. As discussed in chapter 1, the GSM statute prohibits USDA from extending credit guarantees to any country the Secretary determines cannot service the debt. However, the statute does not require that a country be considered generally creditworthy to receive GSM credit guarantees. In addition, the law does not provide any guidance as to what is an acceptable level of risk in evaluating whether countries can adequately service proposed GSM debt.
Consequently, countries that USDA program officials assess as high risk in terms of creditworthiness can still be approved to receive GSM credit guarantees. Also, the statute does not place a limit on the amount of GSM guarantees that can be provided each year to high-risk countries in aggregate or to individual high-risk countries. As a result, USDA can allocate large amounts of guarantees to high-risk countries, making the GSM-102 portfolio subject to a potentially high rate of default. In chapter 1 we showed that the FSU and two of its successor states (Russia and Ukraine) received the largest portion of GSM-102 credit guarantees provided during fiscal years 1990-92. As a result of the large guarantees provided to the FSU and its successor states, the GSM-102 program became considerably exposed to default by these states. Table 5.12 shows that on January 29, 1993, the FSU and its successor states were responsible for $3.6 billion, or about 44 percent, of all outstanding principal on GSM-102 guaranteed loans. Except for Mexico and Algeria, which were responsible for 26 percent and 11.5 percent, respectively, of the outstanding principal, most of the other GSM-102 recipients each accounted for less than 1 percent of the outstanding principal. We used the combined country risk ratings presented in table 5.8 to estimate the principal at risk for each country participant in the GSM-102 export credit guarantee program. The results are presented in table 5.12. As the table shows, the FSU and its successor states account for a considerably larger share of the program’s exposure once the potential for default is taken into account. Whereas the FSU and its successor states together accounted for about 44 percent of the outstanding principal at the end of January 1993, they represented approximately 59 percent of the portfolio’s risk, because their country risk ratings were lower than those of most other GSM-102 credit guarantee recipients. In contrast, Mexico, which accounted for 26.1 percent of the principal exposure, represented only 12.5 percent of the risk because its country risk ratings were significantly higher than those of most GSM-102 recipients. Although GSM-102 recipient countries vary significantly from one another in terms of their risk of defaulting on GSM-102 loans, CCC does not adjust the fee that it charges for credit guarantees to take account of country risk. CCC fees are based upon the length of the credit period and the number of principal payments to be made. For example, for a 3-year GSM-102 loan with semiannual principal payments, CCC charges a fee of 55.6 cents per $100, or 0.56 percent of the covered amount. For 3-year loans with annual principal payments, the fee is 66.3 cents per $100. CCC fees that included a risk-based component might not cover all of the country risk, but they could help to offset the cost of loan defaults. USDA officials told us that including a fee for country risk could reduce the competitiveness of GSM-102 exports. However, they said they did not have recent or current data to support their claim. The U.S. Export-Import Bank, which provides credit guarantees to promote a variety of U.S. exports, uses risk-based fees to defray the cost of defaults on its portfolio. Under its system, each borrower/guarantor is rated in one of eight country risk categories. Exposure fees vary based on both the level of assessed risk and the length of time provided for repayment.
For example, in the case of repayment over 3 years, a country rated in the lowest risk category is charged a fee of 75 cents per $100, whereas a country in the highest risk category is charged a fee of $5.70 per $100 of coverage. Thus, the bank’s fee structure includes a substantial added charge for high country risk. According to the bank, its system is designed to remain as competitive as possible with fees charged by official export credit agencies of other countries. Under section 211(b)(1)(b) of the 1990 Farm Bill, CCC is currently restricted from charging an origination fee for any GSM-102 credit guarantee in excess of an amount equal to 1 percent of the amount of credit extended under the transaction. This restriction was initially enacted in 1985 following proposed administration legislation to charge a 5-percent user fee for exports backed with credit guarantees. Some Members of Congress were concerned that such a fee would adversely affect the competitiveness of GSM-102 exports. Under the 1-percent restriction, CCC would be considerably limited in the size of the fee that it could charge to take account of country risk should it decide to do so. For example, as previously noted, CCC charges 0.56 percent for a loan payable in 3 years with principal payments due semiannually. The most it could increase the fee would be 0.44 percent. In contrast, the Export-Import Bank currently charges fees as high as 5.7 percent for 3-year loans. The analyses presented above indicate that Russia and the other successor states are high-risk countries in terms of creditworthiness. Russia is severely indebted, and its agreement to accept responsibility for the other states’ shares of the FSU debt increases its burden. Most of the successor states, including Russia, have severe liquidity problems, and these problems are likely to persist for the next several years. Russia’s arrearage problems and its need to reschedule its debts also demonstrate a lack of creditworthiness. In addition, secondary market valuations of FSU debt and country risk ratings point to poor creditworthiness. The large amount of GSM-102 export credit guarantees already provided to the FSU and its successor states, along with their low creditworthiness, means that the GSM-102 portfolio is exposed to a high level of risk that could result in additional, substantial costs to U.S. taxpayers. As discussed earlier, in September 1993 and June 1994 the United States rescheduled large amounts of GSM-102 debt. Providing the successor states with more guarantees at this time would add to the already high exposure of the GSM-102 portfolio to further defaults. Since the GSM-102 program provides financing with terms of up to only 3 years, additional guarantees for the successor states would add to the difficult liquidity problems that they are expected to experience over the next several years. Consequently, the GSM-102 program may not be an appropriate vehicle for continued financing of U.S. agricultural exports to the FSU successor states at this time. Nonetheless, there may be important economic and national security reasons for the United States to further assist the financing of food exports to Russia and one or more successor states. For example, if circumstances arise where the Russian government cannot obtain the hard currency to pay for food imports needed to balance its food needs, political stability could be threatened.
In a major policy statement on April 1, 1993, President Clinton said that nothing could contribute more to global freedom, security, and prosperity than the peaceful progression of Russia’s transformation from a totalitarian state into a democracy, a command economy into a market, and an empire into a modern nation-state. However, he noted, the outcome is not assured. The President warned of the danger of Russia, with its vast arsenal of nuclear weapons, being torn apart by the ethnic strife that has engulfed the former Yugoslavia. If Russia were to revert to imperialism or plunge into chaos, he said, the United States would need to reassess its plans for defense savings. This could mean billions of dollars less for other uses, including creating new businesses and new jobs in the United States. America’s interests, he said, lie with Russian reform and Russian reformers, and America’s position is to support democracy and free markets in Russia and the other new independent states (NIS). In support of the policy statement, on April 4, 1993, President Clinton announced a $1.6 billion assistance package for Russia for 1993. As discussed in chapter 3, on April 15, the United States, in concert with the G-7 nations, announced a financial assistance program of $28.4 billion for Russia. Also on April 15, the Secretary of State announced that the administration would propose to Congress another U.S. aid package for Russia of $1.3 billion in direct aid and $500 million in assistance to be channeled through international assistance agencies. Subsequently, on September 30, 1993, the President signed the fiscal year 1994 foreign aid bill that included $2.5 billion for the NIS. There are alternatives to the GSM-102 program for helping to finance continued U.S. agricultural exports to the FSU successor states. Examples include the GSM-103 program and various food aid programs. Since the latter include substantial concessionality and at times total grant aid, they would entail higher budgetary outlays. As discussed in chapter 1, the GSM-103 export credit guarantee program is similar to the GSM-102 program but provides terms of credit whereby the repayment period can range up to 10 years. An advantage of this program is that it would help recipient successor states to finance food imports without adding to their difficult liquidity problems during the next few years, since repayments can be stretched out over a decade. However, GSM-103 is not an appropriate program to use if the successor states are uncreditworthy and is questionable if they are high risk, since longer repayment terms may also increase risk. A limitation of the program is that far fewer dollars have been authorized for GSM-103 guarantees than for GSM-102 guarantees (see table 1.1). Under the 1990 Farm Bill, CCC is required to make available at least $5 billion in GSM-102 guarantees for each of fiscal years 1991 through 1995, whereas the minimum level stipulated for GSM-103 assistance is only $500 million. USDA has used several food aid programs to provide food assistance to the successor states during the past few years. These include Public Law 480, title I; the section 416(b) program of the Agricultural Act of 1949 (P.L. 81-439); and the Food for Progress program of the Food Security Act of 1985 (P.L. 99-198). Title I of the Food for Peace program (P.L. 480) is a concessional sales program to promote exports of agricultural commodities from the United States and to foster economic development in recipient countries.
The program requires annual appropriations and thus has a direct impact on federal spending. Food for Peace provides export financing over payment periods of 10 to 30 years, grace periods on payments of principal of up to 7 years, and low interest rates. Eligible countries are developing countries experiencing a shortage of foreign exchange earnings and having difficulty meeting all of their food needs through commercial channels. According to USDA, program allocations take into account changing economic and foreign policy situations, market development opportunities, existence of adequate storage facilities, and possible disincentives to local production. Section 416(b) of the Agricultural Act of 1949 authorizes donations of uncommitted CCC stocks to assist needy people overseas. Food for Progress is a food aid program that is carried out using funds or commodities made available through Public Law 480, title I, or the section 416(b) program. Food for Progress is generally administered on grant terms. It provides commodities to developing countries and emerging democracies to encourage democracy and private enterprise, including agricultural reform. Table 5.13 provides figures on the value of GSM-102 credit guarantee and food aid assistance to the FSU/successor states during fiscal year 1991 through April of fiscal year 1994. As the table shows, GSM-102 credit guarantees accounted for all of the assistance provided during fiscal year 1991 and most of the assistance made available during fiscal year 1992. As a result of the suspension of the GSM-102 program in the fourth quarter of 1992, food aid became the dominant form of agricultural assistance in fiscal year 1993. The combined total of GSM-102 and food aid assistance in fiscal year 1993 was slightly more than all GSM-102 assistance provided during fiscal year 1991 but represented only about two-thirds of the combined value of the GSM-102 and food aid assistance made available during fiscal year 1992. As the table also shows, from fiscal year 1991 through April of fiscal year 1994, GSM-102 credit-guaranteed assistance was about $5.1 billion, while food aid assistance equaled about $2 billion. Total agricultural assistance made available in fiscal year 1994 (through April) was a small fraction of that provided during each of the 3 previous fiscal years. Questions exist about the need for and value of additional credit guarantees and food aid for the FSU successor states. For example, FSU agricultural imports were down considerably in 1993 and, according to USDA, there generally is not a food shortage problem in the area. Also according to USDA, economic reforms have begun to have some positive effects, and as they take further hold, the successor states are not likely to continue importing at their former high levels. At the same time, credits and credit guarantees have unintentionally impeded the reform process by increasing the successor states’ external debt burden and perpetuating state control of agricultural distribution. According to USDA, the successor states’ demand for agricultural imports diminished by 27 percent in 1993 compared to 1992 levels. In commenting on a draft of this report, USDA said that Russian agricultural imports are down sharply, largely due to a reduction in demand, particularly for grain, which makes up the bulk of imports. The drop in FSU agricultural imports is expected to continue and, according to USDA, is a sign that economic reforms are working, at least to some degree.
USDA noted that high levels of Soviet agricultural imports in the 1980s were used to prop up an overexpanded and inefficient livestock sector. Declines in that sector have freed up domestic grain supplies (production of which has remained steady with the exception of 1991’s drought-affected crop) and lowered the FSU demand for imports. In addition, USDA said, price liberalization in several republics has led to reduced waste, increased incentives, and more rational use of inputs. In commenting on a draft of our report, USDA indicated that food assistance has adversely affected reform in the FSU. USDA said that although widespread dislocation in the FSU food supply never occurred, the West continued to provide assistance (credits and food aid) to the FSU, which accepted it to the likely detriment of economic reforms (increased debt and continued state control of agricultural marketing). According to a USDA analysis, the high level of FSU grain imports in recent years—sustained by credits, credit guarantees, and food donations—allowed FSU authorities to delay increases in farm prices and to maintain the centralized grain distribution and marketing system to a large degree. For example, the average price of wheat imported by the FSU in 1992-93 was $125 a ton (excluding freight), while Russian farmers received less than $40 a ton. The state provided massive subsidies that lowered the price of the imported grain relative to domestic farm prices. Thus, instead of paying Russian farmers higher prices, which would have improved farm incomes, increased farm sales, and reduced waste, the state chose to purchase large amounts of foreign grain. When commercial financing was no longer available, the state requested concessional loans and donations to help maintain these imports. Obtaining imports on concessional terms, which meant deferring immediate repayment, was easier for state planners than allowing market forces to set domestic grain prices. The commercial credits and credit guarantees also adversely affected the reform process, because scarce hard currency needed to support domestic reform was instead required to service the increased external debt. According to the USDA analysis, fewer credits and credit guarantees are likely to be provided in the future because of increased western concerns about FSU creditworthiness, particularly Russia’s, and expectations of decreased FSU demand for imports. USDA also believes that concessional financing and humanitarian assistance may still be necessary for some of the successor states in the short- to medium-term future. The GSM statute prohibits USDA from extending credit guarantees to any country the Secretary determines cannot service the debt. However, the statute does not provide any guidance as to what is an acceptable level of risk in evaluating whether countries can adequately service proposed GSM debt. In addition, the statute does not limit the amount of GSM guarantees that can be provided each year to very risky countries—either individually or in aggregate. Consequently, USDA can allocate large amounts of guarantees to high-risk countries and even to countries that are judged not creditworthy, making the GSM-102 portfolio subject to a potentially high rate of default. CCC fees that included a risk-based component could help to offset the cost of loan defaults.
However, under the 1990 Farm Bill, CCC is currently restricted from charging an origination fee for any GSM-102 credit guarantee in excess of an amount equal to 1 percent of the amount of credit extended under the transaction. Given this restriction, CCC would be considerably limited in the size of the fee that it could charge to take account of country risk should it decide to do so. Most, if not all, of the FSU successor states are not creditworthy, and all should be considered at least high risk from a creditworthiness perspective. The GSM-102 portfolio is exposed to a high level of risk of default because a large portion of the portfolio consists of FSU debt and because of Russia’s lack of creditworthiness. Since the GSM-102 program provides financing with terms of only up to 3 years, providing additional GSM-102 guarantees to the successor states could further add to their liquidity problems during the financing period. The GSM-103 program could help successor states to finance food imports without adding to their difficult liquidity problems during the next few years, since repayments can be stretched out over 10 years. However, GSM-103 is not a good program to use if the successor states are uncreditworthy and is questionable if they are high risk, since longer repayment terms may also increase risk. Consequently, neither GSM program may be an appropriate vehicle at this time for financing additional U.S. agricultural exports to Russia or other successor states. Alternatives to the GSM programs include various food aid programs. Of course, the latter include substantial concessionality and at times total grant aid, and thus would result in higher budgetary outlays. There may be important economic and national security reasons for the United States to further assist the financing of food exports to Russia and one or more successor states. For example, if circumstances develop where the Russian government cannot obtain the hard currency to pay for food imports needed to balance Russia’s food needs, the country’s political stability could be threatened. Such instability could disrupt Russia’s progress toward establishing democratic institutions and a free market economy and, in turn, significantly affect U.S. defense expenditures. If Congress concludes that Russia or other successor states are too risky to receive additional GSM-102 credit guarantees but that continued agricultural exports to the states serve important U.S. economic and national security interests, it may wish to consider authorizing additional foreign aid to finance the sale of the food. Such additional authorization of foreign aid to finance food exports to the states could then be weighed against other priorities for U.S. foreign economic assistance. To reduce future exposure of the GSM-102 portfolio to default, Congress may wish to consider limiting the total amount of credit guarantees that can be issued each year to high-risk countries and the amount that can be provided to any single high-risk country. In addition, Congress may wish to consider (1) amending the statutory provision that precludes the Commodity Credit Corporation from charging a fee in excess of 1 percent of the amount of the credit guarantee and (2) requiring CCC to include a risk-based charge as part of its overall fee for GSM credit guarantees. We requested comments on a draft of this report from USDA. It provided general comments that are reproduced in appendix III.
Most of these comments are discussed in this chapter; some are addressed directly in other chapters of this report, as indicated in marginal references. USDA also provided a separate set of technical and editorial comments that were incorporated into the previous chapters where appropriate. Our draft report was reviewed by a number of offices in USDA that concluded the draft was well researched and presented. According to USDA, the report accurately presented USDA source materials, the GSM-102/103 decisionmaking process, and the interviews we conducted pursuant to the investigation. USDA expressed principal disagreements with our methodology for assessing the costs and benefits of the GSM-102 credit guarantees provided to the FSU and its successor states, particularly our use of the secondary market as a means of estimating losses. USDA also disagreed with our draft conclusion that all of the 15 successor states were not creditworthy. As discussed in chapter 5, we considered secondary market valuations of FSU loans in evaluating the creditworthiness of the FSU and its successor states, and we used the secondary market’s valuation of FSU loans to estimate expected losses on the value of outstanding GSM-102 loans to the FSU. According to USDA, there are too few participants in the secondary market, and they can easily manipulate it. Thus, USDA said, none of its reviewing offices believe the secondary market is a reliable indicator of the value of FSU debt paper. In addition, USDA said that the attributes of debt traded in the secondary market might be materially different from the GSM debt. We disagree with USDA on these points. As discussed in a previous GAO report, we concluded that the secondary market provides the best available risk-based valuations of sovereign debt of countries that do not have well-developed financial systems. More specifically, we found that the secondary market exhibits the same characteristics as many functioning securities markets. Generally speaking, the market (1) is self-correcting; (2) appears to have minimal outside forces operating on it other than the risk-reward evaluation by a large number of participants—banks, insurance companies, pension funds, and private investors; (3) has substantial volume and appears to be efficient; and (4) has a wide variety of instruments with varying lengths of maturity and other characteristics. USDA seems to ignore the emergence of the secondary market as a major financial market. According to a 1992 World Bank study, the total volume of secondary market trading rose from an estimated $4 billion in 1985 to $100 billion in 1990. The bank noted that as a result of improved market efficiencies, secondary market prices were increasingly used as indicators of a country’s creditworthiness and as benchmarks in debt reduction/restructuring packages. According to more recent studies, secondary market trading increased enormously in 1992 and 1993, reaching volumes of $773.7 billion and $1.9 trillion, respectively. FSU/Russian paper, for its part, has become one of the more actively traded assets in the secondary market. According to the Emerging Markets Traders Association, Russian debt ranked eighth in trading volume among the 42 countries for which the group reported data for 1993. Trading volume in Russian debt increased more than 35-fold—from $678 million in 1992 to $24.7 billion in 1993.
Although we feel confident about our use of secondary market data to estimate expected losses on the value of outstanding GSM-102 loans to the FSU, we developed a second method for estimating such losses after receiving USDA’s comments on our draft report. As discussed in chapter 5, we used Euromoney country risk ratings to estimate the risk of default and, in turn, the expected cost of GSM-102 loans to the FSU and Russia as of June 1994. The results were very similar to those obtained from the secondary market prices and thus increase our confidence in the secondary market method. As noted previously, USDA also commented that the attributes of secondary market debt may be materially different from the GSM debt. USDA did not cite any examples of how the debt might be materially different or explain how such differences might affect the use of secondary market prices to reflect the risk of default on GSM-102 loans. GSM debt is different in the sense that the U.S. government guarantees most, if not all, of the principal in the event that the borrower defaults on its loans. Since lending banks are guaranteed that USDA will repay at least 98 percent of defaulted GSM-102 loans, lenders to Russia would presumably have little reason to trade the debt on the secondary market when Russia defaults on such debt. However, this characteristic of GSM-102 loans has no bearing on the likelihood that Russia will default on its GSM-102 debt. In its comments on our draft report, USDA said that it disagreed with our conclusion that all of the FSU successor states are not creditworthy. USDA indicated that between August 1993 and February 1994 it had found Ukraine, Uzbekistan, and Turkmenistan to be creditworthy; it noted that each of these states had been found qualified to receive modest amounts of GSM-102 credits during that period. (USDA also said that each program was driven by market development objectives.) However, in May 1994 USDA officials advised us that the office responsible for preparing creditworthiness assessments had rated Ukraine as not creditworthy during the previous year and still considered it uncreditworthy. Thus, USDA had made credit guarantees available to Ukraine in fiscal year 1994 even though its own analysis indicated the country was uncreditworthy. In addition to Ukraine, other successor states identified by USDA as still not creditworthy in May 1994 were Russia, Armenia, Azerbaijan, Belarus, Georgia, Kyrgyzstan, Moldova, and Tajikistan. Thus, USDA classified 9 of the 15 successor states as not creditworthy. Creditworthy successor states at that time, according to USDA, included the three Baltic states (Estonia, Latvia, and Lithuania) as well as Kazakhstan, Turkmenistan, and Uzbekistan. Creditworthiness evaluations involve a multidimensional analysis of a variety of factors and some subjective judgment. As a result, evaluations by different parties may not always fully agree. This is best evidenced in chapter 5, where we compare the country risk evaluations of three different private rating services (see table 5.8). Consequently, it is not necessarily surprising that USDA did not agree with the conclusion in our draft report that all of the successor states were not creditworthy. After considering USDA’s comment, we decided to restate our conclusion as follows: Most, if not all, FSU successor states are not creditworthy, and all should be considered at least high risk from a creditworthiness perspective.
We believe our restated conclusion is well supported by the information and analyses presented in the report, especially by the material presented in chapters 4 and 5. The most recent summary information in support of our restated conclusion is found in tables 5.10 and 5.11. As table 5.10 shows, in March 1994 both Euromoney and Institutional Investor rated nearly all of the FSU successor states among the bottom one-third of all the countries they rated on creditworthiness, and most of the rated successor states were in the bottom quartile. As table 5.11 shows, Euromoney’s actual risk ratings for the 15 successor states for March 1994 imply risks of default ranging from about 66 percent (Latvia) to about 82 percent (Armenia). We believe it is reasonable to characterize countries that rank among the bottom one-third of all countries on country risk and that have an implied risk of default equal to or greater than 66 percent as being either uncreditworthy or at least highly risky from a creditworthiness perspective. As discussed in chapter 2, whether and to what extent GSM-102 exports lower domestic commodity support program costs depends importantly on the availability of alternative markets for the exports. This is referred to as the “additionality” issue. For example, if one assumes that in the absence of the GSM-102 credit guaranteed exports to the FSU and its successor states alternative export markets would not exist, this is characterized as 100-percent program additionality. If one assumes that 75 percent of the commodities could be exported to other countries, the program additionality would be only 25 percent. In chapter 2, we raised questions about USDA’s approach, which largely relied on an assumption of 100-percent additionality. We expressed the view that analyses should consider a range of additionality levels. In commenting on our draft report, USDA provided mixed views on this issue. On the one hand, USDA agreed that one should consider a range of additionality levels. In fact, USDA cited a third estimate, provided to the Secretary of Agriculture in February 1993, in which two levels of additionality were assumed for $2 billion in credits to the FSU—50-percent additionality and 100-percent additionality. According to USDA, the estimate indicated deficiency payment savings of $0.7 billion to $1.4 billion. However, USDA further assumed a loan concessionality of 60 percent, or $1.2 billion, to cover loan defaults, freight costs, Export Enhancement Program (EEP) bonus payments, and other unspecified factors. USDA estimated that the net budget costs of $2 billion in credits (after subtracting estimated deficiency payment savings from the loan concessionality cost) would range from a cost of $500 million to a savings of $200 million. Although USDA’s comments cited a third estimate that included a 50-percent additionality case, USDA went on to say that an assumption of 100-percent additionality with regard to the FSU and its successor states seemed reasonable. In support of the latter view, USDA said it is likely that without the GSM-102 coverage the FSU would not have been able to purchase substantial quantities of U.S. commodities. This is illustrated, USDA said, by the sharp decline in U.S. exports to the FSU after the FSU was suspended from the program. In addition, USDA said there were few alternative opportunities for the use of the credit guarantees in other countries.
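The arithmetic of USDA’s third estimate, cited above, can be reconstructed directly. In the Python sketch below, the deficiency payment savings are scaled linearly with the assumed additionality level; that proportionality is our inference from the reported $0.7 billion (50 percent) and $1.4 billion (100 percent) figures rather than a documented USDA formula.

    # Reconstruction of USDA's February 1993 estimate for $2 billion
    # in credits to the FSU (all figures in billions of dollars).
    CREDITS = 2.0
    CONCESSIONALITY = 0.60   # covers defaults, freight, EEP bonuses, etc.
    FULL_SAVINGS = 1.4       # deficiency payment savings at 100% additionality

    for additionality in (0.50, 1.00):
        savings = FULL_SAVINGS * additionality   # assumed proportional scaling
        net = CREDITS * CONCESSIONALITY - savings
        outcome = "cost" if net > 0 else "savings"
        print(f"{additionality:.0%} additionality: savings ${savings:.1f}B, "
              f"net {outcome} ${abs(net):.1f}B")
    # 50% additionality:  savings $0.7B, net cost $0.5B
    # 100% additionality: savings $1.4B, net savings $0.2B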
In chapter 2, we questioned USDA’s assumption that alternative export markets would not be available, on the grounds that the special features of the GSM-102 program made available to the FSU and its successor states should be attractive if offered to other importing nations. One special feature we noted was USDA coverage of 100 percent of the value of the commodities. However, in commenting on our draft report, USDA indicated that it does not like and would be unlikely to provide 100-percent coverage. In addition, USDA said, our analysis presumes there are creditworthy countries in the world marketplace that are interested in participating in a large-scale GSM-102 program. According to USDA, during fiscal years 1991 and 1992 principal markets not targeted for the GSM-102 program included China, Cuba, Iran, Libya, and Japan. With the exception of China and Japan, USDA said, there were few alternative markets that could exert the same amount of influence on U.S. domestic prices as that exerted by the FSU market, and both China and Japan purchased heavily from the United States during the time period without credit guarantees. We do not believe that 100-percent additionality is the most reasonable assumption. As discussed in chapter 2, we estimated that the combination of freight cost financing and EEP bonus payments alone meant that the additionality attributable to the GSM program for the FSU and its successor states in fiscal years 1991 and 1992 was at most about 77 percent. In addition, the issue of what assumed additionality level is most appropriate does not depend simply on whether the FSU could have purchased the U.S. commodities without the GSM-102 guarantees. If the United States had not provided the guarantees, other exporting countries might have provided credits or credit guarantees to assist the FSU. Doing so could have reduced those countries’ exports to third countries, enabling the United States to increase its exports to the latter. Even if the United States had not provided the guarantees and other countries had not provided additional guarantees, it is not obvious that the 100-percent additionality case should be applied. A decline in sales to the FSU would tend to lead to reduced prices on world markets, which, in turn, could result in increased demand. We have not advocated providing 100-percent loan guarantee coverage. However, we believe that if one wants to consider to what extent credit guarantees to the FSU and its successor states increased U.S. exports, a fair comparison should consider what would have happened if comparable terms had been offered to other countries. We are not aware of any single country with a market comparable to that of the FSU that would have been interested in GSM-102 credit guarantees. However, it is possible that a number of countries with smaller markets might have been interested in credit guarantees or additional guarantees if the terms were comparable to those extended to the FSU. Any guarantees or increase in guarantees provided to other countries would, of course, further detract from the realism of a 100-percent additionality assumption. USDA did not express any view regarding our suggestions in chapter 5 on how Congress could reduce future exposure of the GSM-102 portfolio to default. A USDA official told us that the department had been examining the issue but had not yet reached any conclusions. USDA approved of our suggestion that if Congress concludes the United States needs to ensure continued U.S. agricultural exports to Russia and/or other successor states but decides additional GSM-102 guarantees are not appropriate at this time, it may want to consider authorizing additional foreign aid money to finance export sales. USDA said that ongoing agricultural exports to the FSU are essential to the American farm community and to U.S. geopolitical interests and provide needed foodstuffs to a market of enormous potential. We agree that the American farm community may benefit from ongoing exports to the FSU. We also agree that broader U.S. interests may be served by U.S. agricultural exports to the successor states but do not believe that such exports automatically advance such interests. For example, the United States has favored economic reforms in the FSU that promote development of a free market economy. Yet, as USDA itself noted in commenting on our draft report, western assistance (credits and food aid) to the FSU has probably had a detrimental impact on FSU economic reforms—including increased debt and continued state control of agricultural marketing.

Pursuant to a congressional request, GAO reviewed the creditworthiness of the former Soviet Union (FSU) and its successor states in the context of the Department of Agriculture's (USDA) Office of the General Sales Manager (GSM)-102 Export Credit Guarantee Program, focusing on: (1) the countries' general economic and political environment; (2) the relationship between the Soviet debt crisis and Soviet economic reform and creditworthiness; (3) how assessments of creditworthiness and market considerations affect USDA decisions on providing credit guarantees; and (4) the GSM-102 portfolio's exposure to default by FSU and its successor states. GAO found that: (1) most of the FSU successor states are not creditworthy because of their heavy debt burdens and severe liquidity problems; (2) as a block, FSU and its successor states hold the largest portion of program credits; (3) USDA extended $5 billion in credit guarantees to FSU, Russia, and Ukraine despite their high risk because it believed the states could service the debt; (4) the poor creditworthiness of FSU countries heavily exposes the GSM-102 loan portfolio to default; (5) FSU and Russian loan defaults have already occurred and the U.S. government has expended over $1 billion to settle loan guarantee claims; (6) the countries' continued ability to import food due to credit extensions may have hampered their agricultural reforms and food production and prolonged the existence of state-owned processors; (7) FSU debt arrearages continue to increase despite efforts to defer and reschedule debt and foreign economic assistance; (8) the countries' debt burden has grown out of their increased reliance on imports and credit programs, particularly for food; (9) Russia's debt burden increased significantly when it accepted responsibility for all FSU debt; (10) much of the foreign assistance provided to Russia in 1992 was contingent on Russia's implementation of additional economic reforms; and (11) the successor states are expected to experience further economic decline despite some progress in market reforms.
Since 1988, DOD has relied on the BRAC process as an important means of reducing excess infrastructure and realigning bases to meet changing force structure needs. The 2005 BRAC round was the fifth round of base closures and realignments undertaken by DOD since 1988, and it was the biggest, most complex, and costliest BRAC round ever. The 2005 BRAC process generally followed the legislative framework of previous BRAC rounds, providing for an independent Defense Base Closure and Realignment Commission to review the Secretary of Defense’s closure and realignment recommendations, which were produced through the BRAC processes coordinated by the Deputy Under Secretary of Defense for Installations and Environment. The Commission assessed the Secretary’s recommendations, under its authority to approve, modify, reject, or add closure or realignment recommendations, before reporting its own recommendations to the President. The President then approved the Commission’s recommendations and forwarded them to Congress, and the recommendations became final in November 2005. Implementation of the recommendations was required to be complete by September 15, 2011. Figure 1 below is a timeline of the 2005 BRAC round. As specified in the BRAC statute, DOD has 6 years to complete all installation closures and realignments, although certain actions, such as the cleanup of environmentally contaminated property and the subsequent transfer of unneeded property to other users, may extend beyond the 6-year implementation period for each round. Once DOD officially closes an installation, the property is typically considered excess and offered to other federal agencies. As shown in figure 2, any property that is not taken by other federal agencies is then considered surplus and is disposed of through a variety of means to state and local governments, local redevelopment authorities, or private parties. The various methods used to convey unneeded property to nonfederal parties noted in figure 2 are targeted, in many cases, to a particular end use of the property. For example, under a public benefit conveyance, state and local governments and local redevelopment authorities acquire surplus DOD property for little or no cost for purposes such as schools, parks, and airports. Under an economic development conveyance, property is transferred for uses that promote economic recovery and job creation. Conservation conveyances provide for the transfer of property to a state, a political subdivision of a state, or a qualified not-for-profit group for natural resource and conservation purposes. Property can also be conveyed to nonfederal parties through other methods shown in figure 2 without regard, in many cases, to a particular end use. For example, property can be sold, or special congressional legislation can dictate transfer to a particular entity. In recent years, the growth of installations has occurred as a result of both the 2005 BRAC round and other DOD initiatives. Under the 2005 BRAC round, DOD implemented 182 recommendations, many of which resulted in significant personnel movement across installations and the subsequent growth of some of those installations. In addition, DOD has undertaken other actions outside of BRAC that have resulted in the growth of installations. For example, the Army has undergone a major force restructuring through its force modularity effort, which has been referred to as the largest Army reorganization in 50 years.
This effort created Stryker brigades, primarily located at Joint Base Lewis-McChord, Washington. Finally, DOD’s Grow the Force initiative increased the end strength of the Army and the Marine Corps, affecting bases across the country. Although DOD has recently announced plans to downsize the Army and the Marine Corps, installations that experienced significant growth under these initiatives, and their surrounding communities, are still dealing with the impact of additional personnel and their dependents. Within DOD, the Office of Economic Adjustment (OEA)—a field activity under the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics—assists communities by providing technical and financial assistance in planning and carrying out adjustment strategies in response to defense actions. OEA is the primary DOD office responsible for providing assistance to communities, regions, and states affected by significant defense actions, including base closures and realignments. Much of that assistance in the past was directed toward communities that lost military and civilian personnel because of the closure or major realignment of a base. However, because the 2005 BRAC round and other initiatives described above have created significant growth at many bases, OEA has also assisted affected communities with growth planning. We have reported on the impact of BRAC actions numerous times over the last several years. In particular, in our 2005 report on BRAC, we reported that DOD had closed 97 major installations since the first BRAC round in 1988. Specifically, DOD closed 16 bases in BRAC 1988, 26 bases in BRAC 1991, 28 bases in BRAC 1993, and 27 bases in BRAC 1995. (See appendix II for a list of the 97 bases and the year of closure for each.) In that report, we studied 62 affected communities and found that most of the surrounding communities were able to replace the jobs lost due to the installation closure with new jobs created by the reuse of the installation and that these communities generally had economic indicators that compared favorably with U.S. national averages. Specifically, as of July 31, 2004, almost 70 percent of the 62 affected communities studied in that report (43 out of 62) had unemployment rates at or below the national average. Our analyses in that report of annual real per capita income growth rates for the BRAC-affected communities showed mixed results. The latest available data at that time (1999-2001) showed that only 48 percent of the BRAC-affected communities (30 of the 62) had an estimated average real per capita income growth rate that was higher than the national average. This was a decline from our 2002 report, in which we found that 33 out of 62 communities (53 percent) matched or exceeded the national average real per capita income growth rate for 1996 to 1999. Communities surrounding the 23 major DOD installations closed in BRAC 2005 have used a variety of strategies to deal with the closures, and economic data on unemployment and real per capita income growth show that the rates for these communities are comparable to national averages. During the BRAC 2005 round, DOD closed 23 major installations in the United States, the majority of which were Army installations. Figures 3 and 4 show the number of major installations that were closed by military service and the locations of these installations.
Communities affected by these installation closures faced a number of different challenges and developed strategies to deal with them. In particular, community representatives we spoke with or surveyed said that some of their greatest challenges in dealing with the installation closures were developing a plan for the reuse of the property, dealing with facilities that were in poor shape or not suitable for reuse, and obtaining funding for infrastructure improvements. Other challenges identified by community representatives included navigating the legal intricacies of property transfer, dealing with environmental cleanup, and replacing lost jobs. Based on our analysis, we found that community representatives have used a variety of strategies to deal with these challenges. For example, several representatives that responded to our survey cited forming a local redevelopment authority as an effective strategy for dealing with installation closure. One community representative said his local redevelopment authority was composed of local and regional stakeholders to help build alliances and partnerships across the entire community spectrum. Another said his local redevelopment authority operates in a public forum where all meetings are open to the public and community input is solicited at every meeting. Another community representative echoed this point, saying it was important to use an open process involving public meetings and outreach to accept input and gain community support in developing a plan for the reuse of the property. Community representatives also cited working closely with DOD as an effective strategy for dealing with installation closure. For example, one community representative said the local redevelopment authority worked with the Army to ensure there was an effective and acceptable plan for dealing with environmental remediation prior to and after installation closure. Another local representative worked with DOD to have many of the facilities that were in poor condition demolished prior to finding new tenants, thus saving the community maintenance and operation dollars. A third representative said that without assistance from DOD, it would have been impossible for the local redevelopment authority to navigate the maze of federal regulations and available information to create redevelopment plans. In addition, another community representative said that the Army was extremely helpful in transferring knowledge from numerous years of operating the installation. He said this historical knowledge allowed the local redevelopment authority to use previously completed studies in lieu of committing taxpayer dollars to repeat those investigations and reports. Further, he noted that data the Army maintained on utilities and infrastructure would be used to manage repair and maintenance of the property in the future. Finally, another local redevelopment authority worked with local and headquarters Air Force staff to find follow-on jobs for DOD civilians who did not relocate or retire when the installation closed. Some community representatives cited leveraging funds from state and federal agencies as a successful strategy. For example, one community representative said the local redevelopment authority was able to secure state-issued bonds to make the facilities more business-ready.
Another local redevelopment authority secured federal funds to modify existing structures, and a third local redevelopment authority took advantage of a state tax benefit that allowed it to use state money to pay down improvement bonds. In addition, community representatives cited receiving grants from OEA as being helpful to their local redevelopment authority’s organizational, planning, and implementation efforts. For example, one representative said the funding OEA provided was used to hire personnel, maintain offices, and conduct planning. Another said OEA provided funding through a grant that permitted the local redevelopment authority to hire dedicated professional staff and contract with a consultant to prepare the redevelopment plan and assist with the property transfer application. Many other representatives told us that OEA provided resources to develop reuse plans or supporting studies or to hire specific consulting or legal services. Table 1 below displays the OEA grants provided to closure communities during calendar years 2005 through 2012. In addition, community representatives who responded to our survey or who spoke with us said taking early possession of the property and leasing some of its assets was an effective strategy for dealing with installation closure. For example, one respondent said that since funding is limited for BRAC reuse projects, the local redevelopment authority entered into a protection and maintenance contract with the Army and was therefore able to lease out some of the assets on the installation, allowing it to increase revenue. Another community representative said the local redevelopment authority was able to move from complete financial dependence on federal funding to utilizing alternative revenue sources such as leases, tax revenue, and fees for service. He noted that the local redevelopment authority was awarded the Army caretaker contract for the former installation, and by providing services for fees, it was able to generate revenue. Furthermore, another local redevelopment authority representative told us that the authority is taking over the closure property piecemeal. For example, it has already taken over all the historic homes at the installation and has leased out many of them. According to the community representative, this has provided the local redevelopment authority with some revenue to operate and maintain the facilities. The local redevelopment authority is also preparing to take over the rest of the property when the primary caretaker leaves. Community representatives also said hiring experts was an effective strategy for dealing with installation closure. For example, one community representative said the local redevelopment authority had to hire individuals who had experience with BRAC to get things done in a timely and professional way. In particular, she said the local redevelopment authority hired an attorney to draft documents associated with specific lease amendments. Likewise, a community representative said the local redevelopment authority hired an experienced BRAC attorney to advise it on the process for land transfer. Selected economic indicators for the 21 communities surrounding the 23 DOD installations closed in BRAC 2005 are comparable to national averages. In our analysis, we used annual unemployment and real per capita income growth rates compiled by the U.S. Bureau of Labor Statistics (BLS) and the U.S.
Bureau of Economic Analysis (BEA) as broad indicators of the economic health of those communities where installation closures occurred. Our analyses of BLS annual unemployment data for 2011, the most recent data available, showed that 11 (52 percent) of the 21 closure communities had unemployment rates at or below the national average of 8.9 percent for the period from January through December 2011. The other 10 communities had unemployment rates that were higher than the national average (see figure 5). Of the 21 closure communities, Portland, Maine (Naval Air Station Brunswick) had the lowest unemployment rate at 6.1 percent and Modesto, California (Riverbank Army Ammunition Plant) had the highest rate at 16.8 percent. We also analyzed BEA real per capita income growth rates between 2006 and 2011 and found that 13 (62 percent) of the 21 closure communities had real per capita income growth rates that were higher than the national average of 0.14 percent (see figure 6). The other 8 communities had rates that were below the national average. Of the 21 communities affected, Yukon-Koyukuk, Alaska (Galena Forward Operating Location) had the highest growth rate at 4.31 percent and Atlanta, Georgia (Fort Gillem, Fort McPherson, and Naval Air Station Atlanta) had the lowest rate at -1.58 percent. Since 2005, DOD has implemented several major initiatives, including BRAC realignment actions and Army Modularity, that have resulted in growth in military and civilian personnel at 23 installations, and the communities surrounding these installations, which also experienced growth, have used a variety of practices and strategies to accommodate this growth. As shown in table 2, these 23 installations had a combined net growth of about 191,000 military and civilian personnel from fiscal years 2006 through 2012, with their total population growing from about 526,000 to over 717,000, a 36.3 percent increase. While Fort Sill, Oklahoma, actually incurred a net loss during this time period, OEA treated it as a growth location because it was originally projected to gain more than 2,000 personnel, growth that was anticipated to affect the surrounding community in the areas of housing, schools, and transportation. In addition, Fort Knox, Kentucky, also experienced a net loss in population, but changes to the mission and the resulting changes in demographics, including the shift from the temporary students previously stationed at the installation to full-time military personnel, posed significant challenges for the surrounding community. The growth of each of the 21 installations that did grow during this time period ranged from about 12 percent to 117 percent. Of the 23 growth installations, 16 were Army, 3 were Navy, and 4 were Air Force. The seven installations that grew the most all had growth rates of more than 50 percent over fiscal years 2006 through 2012. These were five Army installations (Fort Belvoir, Virginia; Fort Bliss, Texas; Fort Carson, Colorado; Joint Base Lewis-McChord, Washington; and Fort Lee, Virginia); one Navy installation (Marine Corps Base Quantico, Virginia); and one Air Force installation (Joint Base San Antonio, Texas). Between 2006 and 2011, all of the surrounding communities also experienced growth. Table 3 shows the change in population of the communities surrounding the major growth installations from calendar years 2006 through 2011. The growth rates for the individual communities associated with the installations ranged from about 1 to 14 percent over this period.
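The growth percentages used throughout this section follow from a single calculation against the fiscal year 2006 baseline. The sketch below, in Python, reproduces the combined-installation figure from the rounded population counts cited above; it is illustrative only, and exact values would come from the service personnel data underlying table 2.

```python
# Illustrative growth-rate arithmetic for the figures cited above.
# The populations are the rounded counts from this report, not the
# exact service data underlying table 2.
def percent_growth(start: float, end: float) -> float:
    """Return the percentage change from start to end."""
    return (end - start) / start * 100

combined = percent_growth(526_000, 717_000)
print(f"Combined installation growth, FY 2006-2012: {combined:.1f}%")  # ~36.3%
```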
As with the installations, the majority of the communities experiencing the most growth surround Army installations. Specifically, communities surrounding Fort Bliss, Texas; Fort Stewart, Georgia; Redstone Arsenal, Alabama; and Fort Sill, Oklahoma, experienced growth rates of more than 12 percent from 2006 through 2011. Also, one Air Force installation, Joint Base San Antonio, Texas, experienced growth of more than 13 percent. The community with the sixth largest population gain was around the Marine Corps’ Camp Lejeune and New River Air Station, North Carolina, with a growth rate of about 12 percent. Further, we found that population growth in the communities surrounding the growth installations could differ based on factors other than installation growth. While some of the community growth can be attributed to additional servicemembers and their families living in the communities, growth could also stem from other factors. For example, at Joint Base Lewis-McChord, Washington, the two counties surrounding the installation experienced growth at the same time that the number of servicemembers at the installation increased. Installation officials told us that while the population of the installation increased primarily due to the creation of the Army’s Stryker Brigades, the surrounding communities also experienced an increase due to the growth in local industry, including the aerospace industry. We also found that in some cases, an installation experienced a large growth in personnel, but the surrounding community did not experience the same level of growth. For example, Fort Belvoir had the largest percentage of installation growth but ranked tenth in percentage of community growth. This occurred for a number of reasons. According to DOD officials, most personnel that transferred to this installation already lived in the region and thus were commuting from other local areas to Fort Belvoir rather than moving to the region. Therefore, while Fort Belvoir incurred significant growth, population growth in the local communities was not affected as directly as it might have been if transferred personnel were coming from other communities. Furthermore, the actual community growth for this area was almost 370,000 people—the largest overall increase of any community in our review—but because the metropolitan area was so large to start with, the increase did not change the percentage of growth as significantly as did smaller changes to smaller communities. Growth of an installation can cause a variety of challenges for a community, and we found that communities have used a variety of strategies to cope with these challenges, which include increased demand for transportation, education, and other public services. For example, based on our site visits, interviews with DOD officials, data we collected from surveys, and discussion groups, transportation was a key challenge facing communities around growth installations. During all our site visits, DOD officials cited transportation as a major challenge, as did several other installations we contacted. For instance, several installation officials expressed concerns about the impact of traffic in the community and on the installation when vehicles are entering or leaving the base. In September 2009, OEA conducted a project needs assessment for defense growth communities. This assessment was initiated to evaluate projects identified by communities as needed to support DOD growth actions.
As a result of this assessment, communities identified transportation improvements to mitigate growth impacts as the greatest need. In addition, based on data we collected from our surveys and interviews of growth community representatives, transportation was the most frequently cited challenge. Another challenge for communities, discussed during our site visits as well as in interviews with DOD officials and discussion groups and cited by survey respondents, involved overcrowding in local schools. DOD officials at installations we contacted expressed concerns about the capacity of local area schools to handle the growth. In addition, the 2009 OEA project needs assessment identified education as the second greatest funding need for defense communities. Other challenges noted by DOD officials and community representatives included the need for additional medical care and housing, lack of federal funding to deal with the growth, and inadequate utility systems. Communities have used a variety of strategies and practices to deal with these concerns. The most common successful strategy, cited by DOD officials, community representatives we interviewed, survey respondents, and discussion group participants, was to form a regional working group composed of representatives from all of the jurisdictions affected by the growth at the installation. Examples of some of the regional working groups are cited below:

In response to growth at the North Carolina Eastern Region, which includes Marine Corps Base Camp Lejeune and Marine Corps Air Stations Cherry Point and New River, North Carolina’s Eastern Region Military Growth Task Force was established. The task force included representatives from surrounding counties, and its mission was to analyze community impacts from the sudden and unanticipated growth of these installations and develop potential recommendations to address those impacts.

In response to mission growth at Fort Bragg, the Fort Bragg Regional Alliance was formed to evaluate economic, employment, infrastructure, and social impacts associated with this expansion and to identify actions required to address future growth needs in the area.

The community around Walter Reed National Military Medical Center established a stakeholders’ advisory board that brought together local business and community leaders and representatives from all levels of government and the Navy, who worked together to identify growth impacts and to propose solutions.

In addition, DOD officials, community representatives, survey respondents, and discussion group participants cited seeking grants as a successful strategy to cope with the challenges posed by installation growth. In some cases, communities were successful in obtaining funding to address the associated growth in their area. OEA provided growth communities with grants to cover administrative expenses, including hiring consultants to conduct growth management studies. As seen in table 4 below, from 2005 through 2012, OEA provided over $73 million in grants to growth communities. In addition to OEA funding, state and local governments also provided funding to address various issues. For example, at Redstone Arsenal, state and local governments provided funding to address the overcrowding of the local schools. Conducting studies to determine installation and community needs was also cited as a key practice in working effectively through the challenges that base growth creates.
Studies provide the necessary data to guide the individual bases and community representatives in taking action to find solutions to address challenges. Examples of studies are cited below:

At Joint Base Lewis-McChord, the South Sound Military and Communities Partnership, a group composed of communities surrounding the installation, conducted a survey of servicemembers and found that where servicemembers live in the community is influenced by how much they can afford with their Basic Allowance for Housing. Further, installation officials identified a lack of affordable housing as an issue and worked with community representatives to develop a rental property program in which landlords from the community voluntarily sign up to give discounts to servicemembers in exchange for receiving their rent as an allotment directly from the Army. This helped the community by decreasing vacancy rates and helped the base by finding servicemembers affordable housing.

At Fort Bragg, a study conducted for the Fort Bragg Regional Alliance, a group formed to deal with the growth at Fort Bragg, revealed that many people working in the Fort Bragg area were not prepared to compete for high-wage and high-skill jobs both on base and in the community. As a result, the base and community worked together to develop a career exploration platform and installed enhanced technology classrooms in 33 schools and 8 community colleges throughout the region, which provided training and resources to better prepare the community for the workforce.

At Marine Corps Base Camp Lejeune, a study conducted for North Carolina’s Eastern Region Military Growth Task Force identified traffic issues between the base and the surrounding neighborhoods. The study group proposed the implementation of an Intelligent Traffic System in the surrounding City of Jacksonville to offer instant relief by monitoring and controlling key choke points on area roadways that connect the base to the neighborhoods where employees live. Intelligent traffic lights were later installed and helped with the flow of traffic on base and in the surrounding communities.

Community representatives we surveyed and spoke with indicated that DOD provides good support to communities facing base closure through its OEA, but representatives from communities surrounding closed Army installations that took ownership of the facilities stated that in many instances the Army facilities were not maintained at a sufficient level to retain their value or facilitate reuse. The Navy and the Air Force have guidance that aligns with DOD guidance specifying levels of maintenance to be provided during the BRAC process, but the Army has not issued its own guidance. If Army officials and community representatives do not have a clear understanding as to the level of maintenance that should be carried out, local redevelopment authorities and the Army will continue to have differing expectations of the maintenance that should be provided to closed facilities, hindering the transfer and reuse process. DOD, primarily through OEA, provides assistance to communities surrounding closure installations. OEA assigns a project manager to each community who can provide assistance in a variety of ways. For example, project managers can provide funds for hiring consultants to assist in developing a reuse plan, information on federal grant money or other available resources, and information on best practices used by other closure communities.
Installation and community representatives that we spoke with and surveyed stated that they were pleased with the level of assistance that OEA provided. For example, one representative stated that the OEA project manager was a valuable resource in dealing with the base closure. Another representative stated that OEA’s assistance was helpful in addressing crucial issues with closing the installation. A third representative described OEA’s support as invaluable and stated that his community could not have planned for base reuse without OEA’s assistance. Further, OEA provides grant money to closure communities as described above. All of the respondents to our survey that requested best practice information from DOD, OEA, or the services stated that they received some or all of the information they requested. In addition, OEA project managers regularly connect communities so that they can share best practices, and OEA’s website provides reports containing lessons learned from other communities and information on other available resources. OEA is currently developing a community forum function on its website where community members can exchange ideas and learn from each other’s experiences. The Navy and the Air Force have issued guidance on the appropriate maintenance levels to be performed on closed facilities, but the Army has not. In both interviews and discussion groups, representatives from communities surrounding some closed Army installations stated that the Army did not provide adequate facility maintenance to buildings whose ownership it planned to transfer during the BRAC process. An official from the Army BRAC office stated, however, that the Army makes every effort to provide maintenance in accordance with the planned usage of the facilities and that communities have unrealistic expectations of the condition of the buildings. DOD guidance states that surplus facilities and equipment at installations that have been closed can be important to the eventual reuse of the installation and that each military department is responsible for protecting and maintaining such assets in order to preserve the value of the property. The guidance further states that the services should consult with the local redevelopment authority to determine the maintenance levels for the facilities. Finally, while the DOD guidance states that the services have developed specific maintenance levels, only the Navy and the Air Force have published service-specific guidance to clearly describe their maintenance levels consistent with factors outlined in DOD’s guidance. The Navy has developed the most comprehensive guidance on the maintenance of closure facilities. The Department of the Navy’s Base Realignment and Closure Implementation Guidance describes a process for establishing initial maintenance levels in consultation with the local redevelopment authority. Building maintenance levels are based on the intended reuse of the individual building. For example, a building that is designated for immediate reuse, meaning that the local redevelopment authority has already identified a tenant, will be given the highest level of maintenance. A building that does not have a reuse identified is given the lowest level of maintenance, where only conditions adversely affecting public health, the environment, and safety are to be corrected. This guidance further states that the Navy’s BRAC Program Management Office is responsible for overseeing this process and for serving as the Navy’s liaison to the community.
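As described above, the Navy guidance keys a building’s maintenance level to its intended reuse. A minimal sketch of that mapping is below; only the highest and lowest levels are drawn from this report, and the intermediate entry is an assumption for illustration, not the Navy’s actual terminology.

```python
# Illustrative mapping of reuse designation to maintenance level,
# following the logic of the Navy guidance described above. Only the
# first and last entries come from the report; the middle entry is a
# hypothetical intermediate level.
MAINTENANCE_LEVELS = {
    "immediate_reuse": "highest: full upkeep so the building stays tenant-ready",
    "reuse_planned_later": "intermediate (assumed): preserve structure and major systems",
    "no_identified_reuse": "lowest: correct only conditions adversely affecting "
                           "public health, the environment, and safety",
}

def maintenance_level(reuse_status: str) -> str:
    """Look up the maintenance level for a building's reuse designation."""
    return MAINTENANCE_LEVELS[reuse_status]

print(maintenance_level("no_identified_reuse"))
```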
Navy officials stated that the local redevelopment authorities often want all of the buildings maintained at a high level, or that they change their minds on the immediacy of reuse of the buildings after the Navy has begun maintenance at a different level. In such cases, the property may not be maintained to the level that the community would like because degradation of the property may have already occurred. Officials further stated that they are willing to work with communities to determine the appropriate maintenance levels and even to change the maintenance levels if possible, but the best way to ensure maintenance levels are in place to meet the needs of the community is for the local redevelopment authority to work closely with the Navy early in the closure process to determine the proper level of maintenance for individual facilities. Air Force guidance from 1991 states that as facilities are vacated, they shall be placed into one of six maintenance levels that are dependent on the planned reuse of the facilities. This guidance has not been updated to reflect the current command structure of the Air Force, including the Air Force Real Property Agency, which was formed in 2002 and would have responsibility for overseeing the implementation of these actions in a future BRAC round. However, officials stated that the current policy is still in effect and being followed, even though the Air Force has reorganized the components overseeing this process. The Army has not issued any guidance in this area. An Army official stated that the Army had draft guidance, but it was never finalized and that the Army therefore relies solely on the DOD guidance. However, the DOD guidance does not describe specific levels of maintenance. Community representatives surrounding some closed Army installations whom we spoke with in interviews and discussion groups stated that the Army did not always maintain the facilities to their expectations, resulting in facility deterioration. For example, at the former Kansas Army Ammunition Plant, community representatives stated that a storm damaged the roof of one building, which the Army never repaired, resulting in mold growing inside the building and requiring the building to be demolished at the expense of the local redevelopment authority (see figure 7). In interviews, discussion groups, and survey responses, other representatives from communities surrounding closed Army installations provided examples of poorly maintained property. One representative stated that the service did not provide any maintenance to a housing facility, resulting in significant mold growth inside the facility and rendering it unusable. The community had to demolish this facility at its own expense. Another representative told us that the service did not properly maintain the grass on the property, causing it to become overgrown and attract wildlife. Because of this increase in wildlife in the area, the service had to spend approximately three times the originally budgeted amount for environmental restoration of the property. Another community representative stated that her local redevelopment authority took over the maintenance of an Army installation prior to transfer through a maintenance contract with the Army. She stated that her organization did this to fix issues of deferred maintenance and to ensure that buildings were properly maintained so that they could be reused.
An official with the Army BRAC office told us that the Army makes an effort to maintain closed facilities in accordance with their planned usage. For example, he stated that in fiscal year 2013 the Army provided $49 million in caretaker funds for installations closed during BRAC 2005. He further stated that local redevelopment authorities would like the buildings to be in new condition, but that is not a realistic expectation. As a result, an expectations gap exists between the Army and communities regarding the levels of maintenance to be provided to facilities during the transition period. Without clear guidance on the expected levels of maintenance for closed facilities, the communities will not have a clear understanding of what maintenance the Army will provide, hindering the transfer and reuse process. Community representatives we surveyed or spoke with indicated that DOD provides good support to communities facing base growth through its OEA, but more data and long-term coordination could improve the communities’ and DOD’s ability to respond to future force structure changes. Without accurate and timely information, and a means to ensure continued effective communication throughout the growth process, communities will be hindered in their efforts to effectively plan for growth. As with communities facing installation closures, DOD, primarily through OEA, also provides support to communities facing base growth. OEA provides a project manager to growth communities to provide technical and financial assistance and to help with growth management planning. The project manager assists the communities in identifying available resources, including potential OEA grants as described above. In addition, as in the case of communities facing installation closures, the OEA project manager can link growth communities with other growth communities to facilitate collaboration and the sharing of best practices. Growth community representatives that we spoke with or that responded to our survey were pleased with the level of support that they received from OEA. For example, one growth community representative commented that his community’s OEA project manager was a great source of information and had a lot of experience. Another community representative commented that having OEA support was tremendously beneficial to the community. She further stated that with OEA’s support, her community was able to better plan for community needs. Several community representatives that we spoke with stated that their project manager visited their community regularly and participated in planning meetings. Community and installation representatives that we spoke with and that responded to our survey identified some areas where improvements could better position both DOD and the communities to respond to potential installation growth, particularly with regard to additional data for planning purposes and long-term coordination between the community and the installation. First, community representatives indicated that they need additional information to adequately plan for the growth in their community. DOD guidance states that maximum advance information and support should be provided to state and local governments to plan for military growth actions.
The services implemented DOD’s guidance by issuing service-specific guidance specifying certain information that shall be provided to communities, including military and civilian personnel changes; school-age children increases or decreases; and construction activity. However, some community representatives noted that they would like more specific information. For example, they told us that installations are unable to provide communities with aggregate data on where servicemembers and their families live while stationed at the local installation, because they do not have a system that tracks this type of information. Service officials confirmed that current personnel data systems contain the servicemembers’ home station of record rather than their current residence, and payroll systems may include only direct deposit information rather than a home address; therefore, this information is not currently available. In addition, although the housing office at the installation may have information on the number of servicemembers living off the installation, there is no requirement to maintain information on where those servicemembers live. Installation and service officials did note, however, that existing data systems could potentially be modified to provide this information. Installation officials noted that communities continually asked for this information so that they could plan for the impact of installation growth on transportation routes, local school districts, and the need for various social services. Installation and service officials expressed some concerns about privacy and force protection issues stemming from the release of this information, but acknowledged that it would be beneficial for communities to have some type of aggregate information on where servicemembers reside in the communities to help with community planning and traffic management, including where traffic will feed into the access control points on base. One community representative noted that as installation growth often happens incrementally over time, having updated information that captures where the additional servicemembers move during the ongoing growth period would enhance the communities’ ability to respond to this growth. In addition to the need for more data on where personnel live, community representatives and installation officials we interviewed stated that establishing a long-term civilian point of contact at the installation level is necessary to effectively plan for the long-term effects of growth on the base and local community. Both the Navy and the Marine Corps have a provision for a Community Planning and Liaison Officer at installations, whose role is to be the central information point with the community. Navy guidance states that to ensure continuity, the inclusion of a civilian planner in the community planning liaison team is strongly encouraged. The guidance further states that not every installation can support a full-time Community Planning and Liaison Officer position, and that this position can be a collateral duty. In the Marine Corps, the Community Planning and Liaison Officer is usually a senior civilian who has the responsibility to develop and maintain a network with state and local officials. Air Force officials told us that the Air Force has a draft instruction with information on this type of position, but it has not yet been approved, and an Army official confirmed that the Army does not currently have this type of position.
We have previously reported that career civilians possess institutional memory, which is particularly important in DOD because of the frequent rotation of military personnel. An official from the Army BRAC office agreed that a long-term point of contact at an installation is important to maintaining community relationships. In the past, the Army specifically stationed an additional officer at large growth installations to be the primary contact point for the installation during the growth period. This official further stated that he believes the liaison function can be performed by an active duty person; however, many Army installations have a civilian deputy garrison commander acting in this capacity. According to leaders we spoke with, base commanders do an excellent job with community outreach; however, because they typically serve in their positions for only 2 to 3 years before being transferred, community outreach has to start all over again once a new commander is appointed. In the Navy and the Marine Corps, the Community Planning and Liaison Officer does not replace the installation commander in community outreach; rather, the position provides an additional person to act as a day-to-day point of contact to work directly with local governments, community representatives, and nongovernmental organizations. Officials at installations that we visited felt that maintaining such a position at the installation level would be beneficial in establishing long-term working relationships with the community. Accurate and timely information on such things as personnel residence areas and expected changes in demand for public services could better facilitate communities’ efforts to accommodate installation growth. Further, effective communication throughout the growth process enables community and installation leaders to collaborate on solutions to the problems raised by installation growth. DOD plays a significant role in communities across the country, and actions taken by the department and the military services to change force structure, composition, size, or distribution can have direct impacts on the communities where such actions are implemented. A decision to change the size or population of an installation or to close it entirely affects the economy of the surrounding community. DOD has taken effective steps to aid both growth and closure communities during the BRAC process; however, further efforts could prove useful. Base closure presents many challenges for a surrounding community. The condition of the buildings that DOD no longer needs and plans to either sell or transfer to the communities has a direct effect on the development of a successful reuse plan or arrangement between the parties; if properties deteriorate during the closure transition time, they will have less value for the communities that acquire them. In the case of the Army, both installation officials and communities would benefit from clear Army guidance governing the maintenance of facilities prior to their transfer to the community. Installation growth can affect surrounding communities, creating new demands for transportation, education, and other social services. To meet these needs, community representatives stated that additional information could be helpful to facilitate planning efforts. Finally, community leaders pointed out that having an established long-term point of contact with the military community on base to help see growth projects through to completion would also be helpful.
Community representatives generally expressed satisfaction with the quality of DOD’s support and regular interaction with community planners. However, the support and regular interaction that is so important to maintaining productive and efficient working relationships between communities and installations can be enhanced by taking steps to lessen the impact of changes in personnel, such as those that occur with changes of station for military personnel. With DOD hoping to pursue more BRAC rounds in the future, actions that would further ease the transitions that communities face would be worthwhile. To improve the ability of the Army and local communities to manage future base closures, we recommend that the Secretary of Defense direct the Secretary of the Army to issue, consistent with DOD guidance, guidance on specific levels of maintenance to be followed in the event of a base closure based on the probable reuse of the facilities. To improve the ability of DOD and the local communities to respond to future growth actions, we recommend that the Secretary of Defense direct the Secretaries of the Army, the Navy, and the Air Force to consider developing a procedure for collecting servicemembers’ physical addresses while stationed at an installation, annually updating this information, and sharing aggregate information with community representatives relevant for local planning decisions, such as additional population per zip code, consistent with privacy and force protection concerns. Furthermore, we recommend that the Secretary of Defense direct the Secretaries of the Army and the Air Force to consider creating or designating a civilian position at the installation level to be the focal point and provide continuity for community interaction for future growth installations and to consider expanding this position to all installations. This position may be a collateral duty. In written comments on a draft of this report, DOD concurred with one recommendation and partially concurred with two recommendations. DOD concurred with our first recommendation to direct the Secretary of the Army to issue guidance on specific levels of maintenance to be followed in the event of a base closure, based on the probable reuse of the facilities. DOD stated that the Army agrees to publish property maintenance guidance prior to closing installations in the event of future base closures. DOD partially concurred with our second recommendation to develop a procedure for collecting servicemembers’ physical addresses while stationed at an installation, annually updating this information, and sharing aggregate information with community representatives. DOD stated that it agrees that information pertaining to the physical location of installation personnel helps affected communities plan for housing, schools, transportation, and other off-post requirements and that existing policy requires the military departments to share planning information, including base personnel, with states and communities. DOD also stated that in the event of future basing decisions affecting local communities, it will work with the military departments to assess and determine the best means to obtain, aggregate, and distribute this information to help ensure that adequate planning information is made available. We are pleased that DOD recognizes the importance of this information to community planners and plans to address this in the future.
However, we believe that proactively determining the best means to provide such information, rather than assessing the problem should it arise due to future basing decisions, would reduce the challenges the department and affected communities face. DOD partially concurred with our third recommendation to direct the Army and the Air Force to create or designate a civilian position at the installation level to be the focal point for community interaction for future growth installations, and to consider expanding this position to all installations. DOD stated that it agrees with the need for a designated position at the installation level and will ensure that each military department is meeting this need through current practices. DOD also stated that many growth installation officials often already serve as “ex officio members” of the community’s growth management organizations, and as we noted in our report, community officials agree that this has been quite valuable for both the department and affected growth communities. However, it is not clear from DOD’s comments whether the department specifically agrees that installations should maintain a civilian rather than a military position to fulfill the role of community liaison, and thus we reiterate our belief that creating or designating a civilian position would provide greater continuity over time than would assigning liaison responsibilities to a military servicemember. DOD’s comments are printed in their entirety in appendix V. We are sending copies of this report to appropriate congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Director of the Office of Economic Adjustment. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7968 or mctiguej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. To identify communities experiencing installation closures and to compare their current economic indicators to national averages, we focused our review on the 23 major DOD installations closed in the BRAC 2005 round and their surrounding communities. For BRAC 2005, DOD defined major installation closures as those that had a plant replacement value exceeding $100 million. We identified the major closure installations using DOD’s information provided for our previously issued report on BRAC. We defined the “community” surrounding each installation as the economic area identified in DOD’s Base Closure and Realignment Report, which linked a metropolitan statistical area, a metropolitan division, or a micropolitan statistical area to each installation. Because DOD’s BRAC report did not identify the census area for the Galena Forward Operating Location in Alaska or the Naval Weapons Station Seal Beach Detachment in Concord, California, we identified the town of Galena as within the “Yukon-Koyukuk Census Area” and the city of Concord in the Oakland-Fremont-Hayward, CA Metropolitan Division, and our analyses used the population and economic data for these areas. To compare the economic indicator data of the communities surrounding the 23 major DOD installations closed in the BRAC 2005 round to U.S. national averages, we collected and analyzed calendar year 2011 unemployment data from the U.S.
Bureau of Labor Statistics (BLS) and calendar year 2006 through 2011 per capita income growth data, along with data on inflation, from the U.S. Bureau of Economic Analysis (BEA), which we used to calculate real per capita income growth. The most current calendar year for which local area data was available from these databases was 2011. We assessed the reliability of these data by reviewing BLS and BEA documentation regarding the methods used by each agency in producing their data and found the data to be sufficiently reliable for our purposes. We used unemployment and real per capita income as key performance indicators because (1) DOD used these measures in its community economic impact analysis during the BRAC location selection process and (2) economists commonly use these measures in assessing the economic health of an area over time. While our assessment provides an overall picture of how these communities compare with the national averages, it does not necessarily isolate the condition, or the changes in that condition, that may be attributed to a specific BRAC action. To identify the installations that have experienced significant population increases since 2005 (“growth installations”) and their surrounding communities, we collected and analyzed available military service data regarding the personnel growth at 23 growth installations within the United States. We identified the major growth installations by using DOD’s information provided for our previously issued report on BRAC and OEA grant data. We defined the “community” surrounding each installation as the community identified in DOD’s Base Closure and Realignment Report, which linked a metropolitan statistical area to each installation, supplemented with U.S. Census Bureau data as needed. To describe the populations of the 23 DOD growth installations and their surrounding communities, we collected and analyzed population data from the military services and the U.S. Census Bureau, respectively. In order to present information regarding expected growth at each military installation, we analyzed Army and Air Force headquarters-level data and Navy and Marine Corps installation-level population data. We obtained and analyzed the installation population data for fiscal years 2006 and 2012, the most recent data available, for military and civilian personnel, excluding dependents and nonmission-related contractors. We contacted cognizant Army, Navy, Marine Corps, and Air Force officials to gather these data and obtain explanations of them. We found these data to be sufficiently reliable for our purposes. We analyzed community population data for calendar years 2006 through 2011, the most recent data available. For the 2006 populations, we used the latest estimates available, which were from 2009. We assessed the reliability of the Census Bureau data by reviewing documentation regarding the methods used to produce the data. We assessed the reliability of the military service data by asking service officials to answer a set of standard questions about the accuracy and completeness of the data, including relevant data collection, storage, maintenance, and error-checking procedures. In addition, we conducted logic and computational checks on the data that the Army provided. We found all these data sources to be sufficiently reliable for our purposes.
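The report does not spell out the deflation formula behind the real per capita income growth rates, so the sketch below shows one common approach: deflate nominal per capita income by a price index at each endpoint year, then take the compound annual growth rate. The income and price figures here are hypothetical placeholders, not BEA data from this report.

```python
# One common way to compute real per capita income growth from the
# nominal income and inflation data described above. All numbers are
# hypothetical placeholders, not data from this report.
def real_growth_rate(nominal_start, nominal_end,
                     price_start, price_end, years):
    """Average annual (compound) growth of inflation-adjusted income."""
    real_start = nominal_start / price_start
    real_end = nominal_end / price_end
    return ((real_end / real_start) ** (1 / years) - 1) * 100

# Hypothetical community: $38,000 in 2006 and $42,500 in 2011, with
# the price level rising about 12 percent over the 5-year period.
rate = real_growth_rate(38_000, 42_500, 1.00, 1.12, years=5)
print(f"Real per capita income growth: {rate:.2f}% per year")
```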
While our assessment provides an overall picture of how these installations and communities grew during this timeframe, it does not necessarily isolate the condition, or the changes in that condition, that may be attributed to a specific BRAC action. To gain initial insight into the practices and strategies communities used to address installation closures and growth, we held two discussion group meetings with closure community representatives and two discussion group meetings with growth community representatives at the Association of Defense Communities Conference in August 2012 in Monterey, California. Officials from six closure communities and six growth communities participated in these groups. A discussion group protocol was developed to help the moderator gather information from these officials about the experiences of their communities. The protocol contained questions about challenges the communities had experienced, strategies that the communities have found successful in preparing for or dealing with installation closure or growth, and the type and quality of assistance they had received from multiple sources. Notes were taken by at least one, but usually multiple, GAO note-takers. Sessions were audio-recorded, but recordings were used only as a backup to written notes. The results of these discussion groups cannot be generalized to all closure or growth communities, but common responses across groups and similar findings from the survey provide converging validation. We also conducted a survey of closure and growth communities, which is described in detail below. We also spoke with several respondents by phone to clarify their answers to the survey and ask additional follow-up questions. Further, we collected data from OEA about the grants that it provided to closure and growth communities. We assessed the reliability of the OEA data by asking OEA officials to answer a set of standard questions about the accuracy and completeness of the data, including relevant data collection, storage, maintenance, and error-checking procedures. We found these data to be sufficiently reliable for our purposes. To gain additional insight into the practices and strategies communities used to address installation growth, we visited four locations representing each of the military departments: Marine Corps Base Camp Lejeune, North Carolina; Eglin Air Force Base, Florida; Fort Belvoir, Virginia; and Joint Base Lewis-McChord, Washington. At each location we interviewed installation and local community officials regarding the communities’ growth challenges and strategies. To determine the extent to which DOD has provided support to communities to address base closure or growth, we discussed this issue with closure and growth community representatives individually and in the discussion groups described earlier, and we interviewed DOD and service officials. We also surveyed closure and growth community representatives on their experiences and any areas where they felt they needed additional support or areas they considered adequate to support their needs. The survey is described in detail below. We also reviewed DOD and service guidance on DOD and the services’ roles and responsibilities in the event of a base closure or growth.
To inform multiple objectives, we sent a survey to representatives of all 23 growth communities and 22 of the 23 closure communities to gather detailed information on the greatest challenges each community had experienced, successful strategies they had used to deal with change, and assistance and information they had received from federal sources. We did not send a survey to the community surrounding the Mississippi Army Ammunition Plant because the property was transferred from the U.S. Army to the National Aeronautics and Space Administration, which did not require disposal through a local redevelopment authority. The survey was implemented as a self-administered Microsoft Word form e-mailed to respondents. We sent e-mail notifications to community representatives beginning on November 5, 2012. We then sent the questionnaire and a cover e-mail to representatives on November 7, 2012, and asked them to fill in the questionnaire form and e-mail it back to us within two weeks. To encourage respondents to complete the questionnaire, we sent e-mail reminders and a replacement questionnaire to each non-respondent approximately one week and three weeks after the initial questionnaire was sent. We also made follow-up phone calls to non-respondents from December 11, 2012, to February 4, 2013. We closed the survey on February 19, 2013. In total, we received 37 completed questionnaires, for an overall response rate of 82.2 percent. Of those, 21 were from growth communities and 16 were from closure communities, for response rates of 91.3 percent and 72.7 percent, respectively. To minimize errors that might occur from respondents interpreting our questions differently than we intended, we pretested our questionnaire with four community officials (two from growth communities, two from closure communities) who were in positions similar to those of the respondents who would complete our actual survey. During these pretests, we asked the officials to complete the questionnaire as we observed the process and noted potential problems (two sessions were conducted in person and two by phone). We then discussed the questions and instructions with the officials to check whether (1) the questions and instructions were clear and unambiguous, (2) the terms used were accurate, (3) the questionnaire was unbiased, and (4) the questionnaire did not place an undue burden on the officials completing it, as well as to identify potential solutions to any problems identified. We also submitted the questionnaire for review by an independent GAO survey specialist and two external reviewers who were experts on the topic of the survey (selected based on their experience with military installation closure and/or growth issues). We modified the questionnaire based on feedback from the pretests and reviews, as appropriate. Because we attempted to collect data from every community rather than a sample of communities, there was no sampling error. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as non-sampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, how the responses were processed and analyzed, or the types of people who do not respond can influence the accuracy of the survey results. We took steps in the development of the survey, the data collection, and the data analysis to minimize these non-sampling errors and help ensure the accuracy of the answers that were obtained.
For example, a social science survey specialist designed the questionnaire, in collaboration with GAO staff with subject matter expertise. Then, as noted earlier, the draft questionnaire was pretested to ensure that questions were relevant, clearly stated, and easy to comprehend. The questionnaire was also reviewed by external experts and an additional GAO survey specialist. Data were electronically extracted from the Word questionnaires into a comma-delimited file, which was then imported into a statistical program for analyses. No manual data entry was performed, thereby removing an additional potential source of error. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error, and addressed such issues as necessary. Quantitative data analyses and the compilation of open-ended responses were conducted by the first GAO survey specialist using statistical software and working directly with GAO staff with subject matter expertise. An independent GAO data analyst checked the statistical computer programs for accuracy. The verbatim wording of key survey questions whose results are discussed in this report is below:

What have been your community's three greatest challenges in dealing with base closure or growth in your community? Please list one challenge per box below. You can list up to three challenges in any order. (Response options provided: Three text boxes.)

What successful strategies, if any, has your community used in dealing with the first challenge? The box will expand as you type. (Response option provided: One text box.)

Has your community received any financial, technical, or other assistance (e.g., networking assistance) from the Office of Economic Adjustment, the military services, or any other office within the Department of Defense (DOD) to deal with the first challenge you listed above? Please do not include any assistance you received from the Association of Defense Communities or any other agency or organization other than DOD. (Response options provided: Checkboxes labeled “Yes,” “No,” and “Don't know.”)

If Yes, what assistance has your community received from the Office of Economic Adjustment, the military services, or any other office within DOD to deal with the first challenge you listed above? (Response option provided: One text box.)

In your opinion, did the Office of Economic Adjustment provide adequate assistance to address the first challenge you listed above? (Response options provided: Checkboxes labeled “Yes,” “No,” and “Don't know.”)

Responses to closed-ended (e.g., Yes/No) questions were summarized as standard descriptive statistics. Responses to open-ended questions were analyzed through content analysis. In conducting the content analysis, one GAO analyst reviewed each open-ended response from each community representative to identify recurring themes. Using the identified themes, the analyst then developed categories for coding the responses. A second GAO analyst reviewed each response from each community representative and reviewed the first analyst's themes and categories to reach concurrence on the themes and categories. Each of the two GAO analysts then independently reviewed the answers to each open-ended question and placed them into one or more of the categories. The analysts then compared their coding to identify any disagreements and reached agreement on all items through discussion.
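The data processing and analysis steps described above can be illustrated with a short script. The sketch below is written in Python against a hypothetical extract file and hypothetical column names (the actual data file and variable names are not reproduced in this report); it shows the kinds of computations involved: per-group and overall response rates, a tally for one closed-ended item, and a simple percent-agreement check between the two coders prior to reconciliation.

import csv
from collections import Counter

# Load the comma-delimited extract of the Word questionnaires.
# "survey_extract.csv" and the column names below are illustrative only.
with open("survey_extract.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Response rates: questionnaires went to 23 growth and 22 closure communities.
sent = {"growth": 23, "closure": 22}
returned = Counter(row["community_type"] for row in rows)
for group, n_sent in sent.items():
    print(f"{group}: {returned[group]} of {n_sent} = {returned[group] / n_sent:.1%}")
total_returned, total_sent = sum(returned.values()), sum(sent.values())
print(f"overall: {total_returned} of {total_sent} = {total_returned / total_sent:.1%}")

# Descriptive statistics for a closed-ended (Yes/No/Don't know) item.
print(Counter(row["oea_adequate"] for row in rows))

# Percent agreement between the two coders' category assignments for one
# open-ended item, computed before disagreements were resolved by discussion.
pairs = [(row["coder1_category"], row["coder2_category"]) for row in rows]
agree = sum(a == b for a, b in pairs)
print(f"intercoder agreement: {agree} of {len(pairs)} = {agree / len(pairs):.1%}")

With the counts reported above, the response-rate lines would print 21 of 23 = 91.3%, 16 of 22 = 72.7%, and 37 of 45 = 82.2%.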
We conducted this performance audit from June 2012 to May 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. James R. McTigue, Jr., (202) 512-7968 or mctiguej@gao.gov. In addition to the contact named above, Laura Durland, Assistant Director; Bonita Anderson; Leslie Bharadwaja; Timothy Carr; Mary Jo LaCasse; Gregory Marchand; Michael Silver; and Erik Wilkins-McKee made key contributions to this report.
| Through BRAC and other growth initiatives, DOD has made significant changes to its force structure, affecting communities around DOD installations. To help transition toward a smaller, more agile force, DOD has requested new BRAC authority. House Report 112-479, accompanying the fiscal year 2013 National Defense Authorization Act, directed GAO to study the practices and strategies that communities have used to cope with installation closure or growth. This report (1) describes the practices and strategies communities have used in dealing with base closures and growth since 2005 and economic and population data in those communities and (2) presents information on communities' needs in adjusting to installation closure and growth. GAO interviewed DOD, service, and installation officials; interviewed and surveyed community representatives; reviewed relevant guidance; and visited select installations. The 21 communities surrounding the 23 Department of Defense (DOD) installations closed in the 2005 Base Realignment and Closure (BRAC) round have used strategies such as forming a local redevelopment authority and seeking federal grants to deal with the closures. Some economic data for these communities are comparable to national averages, with some variation. For instance, GAO found that 52 percent (11 of 21) of communities had unemployment rates lower than the national average of 8.9 percent, although the rates ranged from a low of 6.1 percent to a high of 16.8 percent. Sixty-two percent (13 of 21) of the closure communities had real per capita income growth rates higher than the national average of 0.14 percent for the period from 2006 through 2011. Since 2005, 23 other installations have experienced population increases that have resulted in net growth of about 191,000 military and civilian personnel (a 36 percent increase), and their corresponding communities have used several strategies to accommodate this growth, including forming a regional working group composed of representatives from affected jurisdictions. Community representatives stated that DOD's Office of Economic Adjustment (OEA) provides good support to communities facing base closure, but some representatives from communities surrounding closed Army installations stated that facilities were not maintained at a high enough level for reuse. An Army official told GAO that the Army makes an effort to maintain closed facilities in accordance with their planned usage and that local redevelopment authorities have unrealistic expectations of maintenance levels. DOD guidance states that the services have developed specific maintenance levels for facilities during the transition process. The Air Force and the Navy have published this specific guidance, but the Army has not and instead relies upon DOD's guidance, which does not describe specific levels of maintenance. Without clear guidance on the expected levels of maintenance for closed facilities, the communities may not have a clear understanding of what maintenance the Army will provide. Community representatives indicated that OEA provides good support to communities facing base growth, but that additional data and a civilian point of contact at the installation could improve their ability to respond to future growth. DOD has issued guidance that states communities should be provided maximum advance information to plan, and service guidance states that services will give communities information including military and personnel changes. 
However, community representatives told GAO that they would like additional aggregate information on where servicemembers live while stationed at the installation to facilitate planning for the impact of installation growth. Installations currently do not provide communities with this information because they do not have a system to track it, but officials noted that existing systems could potentially be modified to provide it. Installation officials and community representatives also stated that establishing a long-term civilian point of contact at the installation would help the community effectively plan for growth. Accurate and timely information on personnel residence areas and a civilian point of contact at the installation could better facilitate communities' efforts to accommodate installation growth. DOD concurred with GAO's recommendation that the Army issue guidance on maintenance levels to be provided during the base closure process. DOD partially concurred that it should establish procedures for sharing additional information with growth communities and designate a civilian point of contact at growth installations. GAO believes action by DOD prior to future installation growth will help forestall future challenges. |
The Forest Service and Interior collectively manage about 700 million acres of federal land, much of which is considered to be at high risk of fire. Federal researchers estimate that from 90 million to 200 million acres of federal lands in the contiguous United States are at an elevated risk of fire because of abnormally dense accumulations of vegetation, and that these conditions also exist on many nonfederal lands. Addressing this fire risk has become a priority for the federal government, which in recent years has significantly increased funding for fuels reduction. Fuels reduction is generally done through prescribed burning, in which fires are deliberately lit in order to burn excess vegetation, and mechanical treatments, in which mechanical equipment is used to cut vegetation. Although prescribed burning is generally less expensive on a per-acre basis than mechanical treatment, prescribed fire may not always be the most appropriate method for accomplishing land management objectives—and in many locations it is not an option, because of concerns about smoke pollution, for example, or because vegetation is so dense that agency officials fear a prescribed fire could escape and burn out of control. In such situations, mechanical treatments are required, generating large amounts of wood—particularly small-diameter trees, limbs, brush, and other material that serve as fuel for wildland fires. Woody biomass can be used in many ways. Small logs can be peeled and used as fence posts, or can be joined together with specialized hardware to construct pole-frame buildings. Trees also can be milled into structural lumber or made into other wood products, such as furniture, flooring, and paneling. Woody biomass also can be chipped for use in paper pulp production and for other uses—for example, a New Mexico company combines juniper chips with plastic to create a composite material used to make road signs—and can be converted into other products such as ethanol and adhesives. Finally, woody biomass can be chipped or ground for energy production in power plants and other applications. Citing biomass’s potential to serve as a source of electricity, fuel, chemicals, and other materials, the President and the Congress have encouraged federal activities regarding biomass utilization—but until recently, woody biomass received relatively little emphasis. Major congressional direction includes the Biomass Research and Development Act of 2000, the Farm Security and Rural Investment Act of 2002, the Healthy Forests Restoration Act of 2003, and the American Jobs Creation Act of 2004. Utilization of woody biomass also is emphasized in the federal government’s National Fire Plan, a strategy for planning and implementing agency activities related to wildland fire management. For example, a National Fire Plan strategy document cites biomass utilization as one of its guiding principles, recommending that the agencies “employ all appropriate means to stimulate industries that will utilize small-diameter woody material resulting from hazardous fuel reduction activities.” Federal agencies also are carrying out research concerning the utilization of small-diameter wood products as part of the Healthy Forests Initiative, the administration’s initiative for wildland fire prevention. Most of the federal government’s woody biomass utilization efforts are being undertaken by USDA, DOE, and Interior. 
While some activities are performed jointly, each department also conducts its own activities, which generally involve grants for small-scale woody biomass projects; research on woody biomass uses; and education, outreach, and technical assistance aimed at woody biomass users. USDA, DOE, and Interior have undertaken a number of joint efforts related to woody biomass. In June 2003, the three departments signed a memorandum of understanding on woody biomass utilization, and the departments sponsored a 3-day conference on woody biomass in January 2004. The departments also have established an interagency Woody Biomass Utilization Group, which meets quarterly to discuss relevant developments and to coordinate departmental efforts. Another interdepartmental collaboration effort is the Joint Biomass Research and Development Initiative, a grant program conducted by USDA and DOE and authorized under the Biomass Research and Development Act of 2000. The program provides funds for research on biobased products. DOE also has collaborated with both USDA and BLM on assessment of biomass availability, while USDA and Interior have entered into a cooperative agreement with the National Association of Conservation Districts to promote woody biomass utilization. USDA, DOE, and Interior also participate in joint activities at the field level. For example, DOE's National Renewable Energy Laboratory (NREL) and the Forest Service have collaborated in developing and demonstrating small power generators that use woody biomass for fuel. The Forest Service also collaborates with Interior in funding and awarding grants under the Fuels Utilization and Marketing program, which targets woody biomass utilization efforts in the Pacific Northwest. The agencies also collaborate with state and local governments to promote the use of woody biomass—for example, the Forest Service, NREL, and BLM entered into a memorandum of understanding with Jefferson County, Colorado, to study the feasibility of developing an electricity-generating facility that would use woody biomass. Most of USDA's woody biomass utilization activities are undertaken by the Forest Service and involve grants, research and development, and education, outreach, and technical assistance. The Forest Service provides grants through its Economic Action Programs, created to help rural communities and businesses dependent on natural resources become sustainable and self-sufficient. The Forest Service also has created a grant program in response to a provision in the Consolidated Appropriations Act for Fiscal Year 2005, which authorized up to $5 million for grants to create incentives for increased use of biomass from national forest lands. Two other USDA agencies—the Cooperative State Research, Education, and Extension Service (CSREES) and USDA Rural Development—maintain programs that could include woody biomass utilization activities. CSREES oversees the Biobased Products and Bioenergy Production Research grant program and the McIntire-Stennis grant program, which provides grants to states for research into forestry issues under the McIntire-Stennis Act of 1962. Within USDA Rural Development, the Rural Business-Cooperative Service oversees a grant program emphasizing renewable energy systems and energy efficiency among rural small businesses, farmers, and ranchers, and the Rural Utilities Service maintains a loan program for renewable energy projects. Forest Service researchers are conducting research into a variety of woody biomass issues.
Researchers have conducted assessments of the woody biomass potentially available through land management projects and have developed models of the costs and revenues associated with thinning projects. Researchers also are studying the economics of woody biomass use in other ways; one researcher, for example, is beginning an assessment of the economic, environmental, and energy-related impacts of using woody biomass for power generation. The Forest Service also conducts extensive research, primarily at its Forest Products Laboratory, into uses for woody biomass, including wood-plastic composites and water filtration systems that use woody biomass fibers, as well as less expensive ways of converting woody biomass to liquid fuels. In addition, the Forest Service conducts extensive education, outreach, and technical assistance activities. Much of this activity is conducted by the Technology Marketing Unit (TMU) at the Forest Products Laboratory, which provides woody biomass users with technical assistance and expertise in wood products utilization and marketing. Forest Service field office staff also provide education, outreach, and technical assistance, and each Forest Service region has an Economic Action Program coordinator who is involved in woody biomass issues. For example, one such coordinator organized a "Sawmill Improvement Short Course" designed to provide information to small-sawmill owners regarding how to better handle and use small-diameter material. The Forest Service also has partnerships with state and regional entities that provide a link between scientific and institutional knowledge and local users. Most of DOE's woody biomass activities are overseen by its Office of the Biomass Program and focus primarily on research and development, although the department does have some grant and technical assistance activities. DOE's research and development activities generally address the conversion of biomass, including woody biomass, to liquid fuels, power, chemicals, or heat. Much of this work is carried out by NREL, where DOE recently opened the Biomass Surface Characterization Laboratory. DOE also supports research into woody biomass through partnerships with industry and academia. Program management activities for these partnerships are conducted by DOE headquarters, with project management provided by DOE field offices. In addition to its research activities, DOE provides information and guidance to industry, stakeholder groups, and users through presentations, lectures, and DOE's Web site, according to DOE officials. DOE also provides outreach and technical assistance through its State and Regional Partnership, Federal Energy Management Program (FEMP), and Tribal Energy Program. FEMP provides assistance to federal agencies seeking to implement renewable energy and energy efficiency projects, while the Tribal Energy Program provides technical assistance to tribes, including strategic planning and energy options analysis. DOE's grant programs include (1) the National Biomass State and Regional Partnership, which provides grants to states for biomass-related activities through five regional partners; and (2) the State Energy Program, which provides grants to states to design and carry out their own renewable energy and energy efficiency programs. In addition, DOE's Tribal Energy Program provides funds to promote energy sufficiency, economic development, and employment on tribal lands through renewable energy and energy efficiency technologies.
Interior’s activities include providing education and outreach and conducting grant programs, but they do not include research into woody biomass utilization issues. Four Interior agencies—BLM, the Bureau of Indian Affairs (BIA), Fish and Wildlife Service (FWS), and National Park Service (NPS)—conduct activities related to woody biomass. These agencies conduct education, outreach, and technical assistance, but not to the same degree as the Forest Service. For example, BIA provides technical assistance to tribes seeking to implement renewable energy projects, and while FWS and NPS conduct relatively few woody biomass utilization activities, in some cases the agencies will work to find a woody biomass user nearby if a market exists for the material. Interior plans to expand its outreach efforts by using the National Association of Conservation Districts, with which it signed a cooperative agreement, to conduct outreach activities related to woody biomass. And while Interior’s grant programs generally do not target woody biomass, BIA has provided some grants to Indian tribes, including a 2004 grant to the Confederated Tribes of the Warm Springs Reservation in Oregon to conduct a feasibility study for updating and expanding a woody biomass-fueled power plant. Several other federal agencies are engaged in limited woody biomass activities through their advisory or research activities. The Environmental Protection Agency provides technical assistance, through its Combined Heat and Power Partnership, to power plants that generate combined heat and power from various sources, including woody biomass. Three other agencies—the National Science Foundation, Office of Science and Technology Development, and Office of the Federal Environmental Executive—also are involved in woody biomass activities through their membership on the Biomass Research and Development Board, which is responsible for coordinating federal activities for the purpose of promoting the use of biobased industrial products. Two groups serve as formal vehicles for coordinating federal agency activities related to woody biomass utilization. One, the Woody Biomass Utilization Group, is a multiagency group that meets quarterly on woody biomass utilization issues and is open to all national, regional, and field- level staff across numerous agencies. The other, the Biomass Research and Development Board, is responsible for coordinating federal activities to promote the use of biobased industrial products. The board consists of representatives from USDA, DOE, and Interior, as well as EPA, the National Science Foundation, Office of the Federal Environmental Executive, and Office of Science and Technology Policy. When discussing coordination among agencies, however, agency officials more frequently cited using informal mechanisms for coordination—through telephone discussions, e-mails, participation in conferences, and other means— rather than the formal groups described above. Several officials told us that informal communication among networks of individuals was essential to coordination among agencies. Officials also described other forms of coordination, including joint review teams for interagency grant programs and multiagency working groups examining woody biomass at the regional or state level. 
The Forest Service—the USDA agency with the most woody biomass activities—developed a woody biomass policy in January 2005, and, in March 2005, in response to a recommendation in our draft report, the agency assigned responsibility for overseeing and coordinating its woody biomass activities to an official within the Forest Service's Forest Management branch. In addition, the agency has created the Biomass Utilization Steering Committee, consisting of the staff directors of various Forest Service branches, to provide direction and support for agency biomass utilization. DOE coordinates its woody biomass utilization activities through its Office of Energy Efficiency and Renewable Energy. Within this office, the Office of the Biomass Program directs biomass research at DOE national laboratories and contract research organizations, while the Federal Energy Management Program and the Tribal Energy Program conduct a small number of other woody biomass activities. Interior has appointed a single official to oversee its woody biomass activities and is operating under a woody biomass policy that adopts the principles of the June 2003 memorandum of understanding among USDA, DOE, and Interior. Interior also has appointed a Renewable Energy Ombudsman to coordinate all of the department's renewable energy activities, including those related to woody biomass, and has worked with its land management agencies to develop woody biomass policies allowing service and timber contractors to remove woody biomass where ecologically appropriate. Similarly, BLM has appointed a single official to oversee woody biomass efforts and has developed a woody biomass utilization strategy, containing overall goals related to increasing the utilization of biomass from treatments on BLM lands, to guide its activities. Agency officials cited two principal obstacles to increasing the use of woody biomass: the difficulty in using woody biomass cost-effectively and the lack of a reliable supply of the material. Agency activities are generally targeted toward the obstacles identified by agency officials, but some officials told us that their agencies are limited in their ability to fully address these obstacles and that additional steps that are beyond the agencies' authority to implement are needed. However, not all agree that such steps are appropriate. The obstacle most commonly cited by officials we spoke with is the difficulty of using woody biomass cost-effectively. Officials told us the products that can be created from woody biomass—whether wood products, liquid fuels, or energy—often do not generate sufficient income to overcome the costs of acquiring and processing the raw material. One factor contributing to the difficulty in using woody biomass cost-effectively is the cost incurred in harvesting and transporting woody biomass. Numerous officials told us that even if cost-effective means of using woody biomass were found, the lack of a reliable supply of woody biomass from federal lands presents an obstacle because business owners or investors will not establish businesses without assurances of a dependable supply of material. Officials identified several factors contributing to the lack of a reliable supply, including the lack of widely available long-term contracts for forest products, environmental groups' opposition to federal projects, and the shortage of agency staff to conduct activities.
A few officials cited internal barriers that hamper agency effectiveness in promoting woody biomass utilization, including limited agency expertise related to woody biomass and limited agency commitment to the issue. A variety of other obstacles were noted as well, including the lack of a local infrastructure for handling woody biomass, consisting of loggers, mills, and equipment capable of treating small-diameter material. Agency activities related to woody biomass were generally aimed at overcoming the obstacles agency officials identified, including many aimed at overcoming economic obstacles. For example, Forest Service staff have worked with potential users of woody biomass to develop products whose value is sufficient to overcome the costs of harvesting and transporting the material; Economic Action Program coordinators have worked with potential woody biomass users to overcome economic obstacles; and Forest Products Laboratory researchers are working with NREL to make wood-to-ethanol conversion more cost-effective. Despite ongoing agency activities, however, numerous officials believe that additional steps beyond the agencies' authority are needed to fully address obstacles to woody biomass utilization. Among these steps are subsidies and tax credits, which officials told us are necessary to develop a market for woody biomass but which are beyond the agencies' authority. According to several officials, the obstacles to using woody biomass cost-effectively are simply too great to overcome by using the tools—grants, outreach and education, and so forth—currently at the agencies' disposal. One official stated that "in many areas, the economic return from smaller-diameter trees is less than production costs. Without some form of market intervention, such as tax incentives or other forms of subsidy, there is little short-term opportunity to increase utilization of such material." Some officials stated that subsidies have the potential to create an important benefit—reduced fire risk through hazardous fuels reduction—if they promote additional thinning activities by stimulating the woody biomass market. Rather than incentives or subsidies, some officials noted the potential for increased use of woody biomass through state requirements—known as renewable portfolio standards—that utilities procure or generate a portion of their electricity by using renewable resources, which could include woody biomass. But not all officials believe these additional steps are efficient or appropriate. One official told us that, although he supports these activities, tax incentives and subsidies would create enormous administrative and monitoring requirements. Another official stated that although increased subsidies could address obstacles to woody biomass utilization, he does not believe they should be implemented, preferring instead to allow research and development efforts and market forces to establish the extent of woody biomass utilization. Further, not all agree that the market for woody biomass should be expanded. One agency official told us he is concerned that developing a market for woody biomass could result in overuse of mechanical treatment (rather than prescribed burning) as the market begins to drive the preferred treatment, and representatives of one national environmental group told us that relying on woody biomass as a renewable energy source will lead to overthinning, as demand exceeds the supply that is generated through responsible thinning.
The amount of woody biomass resulting from increased thinning activities could be substantial, adding importance to the search for ways to use the material cost-effectively rather than simply disposing of it. However, the use of woody biomass will become commonplace only when doing so becomes economically advantageous for users—whether small forest businesses or large utilities. Federal agencies are targeting their activities toward overcoming economic and other obstacles, but some agency officials believe that these efforts alone will not be sufficient to stimulate a market that can accommodate the vast quantities of material expected—and that additional action may be necessary at the federal and state levels. Nevertheless, we believe the agencies will continue to play an important role in stimulating woody biomass use. The Forest Service took a significant step recently by designating an agency lead for woody biomass activities, responding to a need we had identified in our draft report and enhancing the agency's ability to ensure that its multiple activities contribute to its overall objectives. Given the magnitude of the woody biomass issue and the finite nature of agency budgets, it is essential that federal agencies appropriately coordinate their woody biomass activities—both within and across agencies—to maximize their potential for addressing the issue. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-3841 or at nazzaror@gao.gov. David P. Bixler, James Espinoza, Steve Gaty, Richard Johnson, and Judy Pagano made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | In an effort to reduce the risk of wildland fires, many federal land managers--including the Forest Service and the Bureau of Land Management--are placing greater emphasis on thinning forests and rangelands to help reduce the buildup of potentially hazardous fuels. These thinning efforts generate considerable quantities of woody material, including many smaller trees, limbs, and brush--referred to as woody biomass--that currently have little or no commercial value. GAO was asked to determine (1) which federal agencies are involved in efforts to promote the use of woody biomass, and the actions they are undertaking; (2) how these agencies coordinate their activities; and (3) what the agencies see as obstacles to increasing the use of woody biomass, and the extent to which they are addressing the obstacles. This testimony is based on GAO's report Natural Resources: Federal Agencies Are Engaged in Various Efforts to Promote the Utilization of Woody Biomass, but Significant Obstacles to Its Use Remain (GAO-05-373), being released today. Most woody biomass utilization activities are implemented by the Departments of Agriculture (USDA), Energy (DOE), and the Interior and include awarding grants to businesses, schools, Indian tribes, and others; conducting research; and providing education.
Most of USDA's woody biomass utilization activities are undertaken by the Forest Service and include grants for woody biomass utilization, research into the use of woody biomass in wood products, and education on potential uses for woody biomass. DOE's woody biomass activities focus on research into using the material for renewable energy, while Interior's efforts consist primarily of education and outreach. Other agencies also provide technical assistance or fund research activities. Federal agencies coordinate their woody biomass activities through formal and informal mechanisms. Although the agencies have established two interagency groups to coordinate their activities, most officials we spoke with emphasized informal communication--through e-mails, participation in conferences, and other means--as the primary vehicle for interagency coordination. Internally, DOE coordinates its woody biomass activities through its Office of Energy Efficiency and Renewable Energy, while Interior and the Forest Service--the USDA agency with the most woody biomass activities--have appointed officials to oversee, and have issued guidance on, their woody biomass activities. The obstacles to using woody biomass cited most often by agency officials were the difficulty of using woody biomass cost-effectively and the lack of a reliable supply of the material; agency activities generally are targeted toward addressing these obstacles. Some officials told us their agencies are limited in their ability to address these obstacles and that incentives--such as subsidies and tax credits--beyond the agencies' authority are needed. However, others disagreed with this approach for a variety of reasons, including the concern that expanding the market for woody biomass could lead to adverse ecological consequences if the demand for woody biomass leads to excessive thinning. |
VA’s mission is to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation by ensuring that they receive medical care, benefits, social support, and memorials. VA is one of the largest federal departments, with more than $150 billion in obligations and a workforce of approximately 313,000 employees for fiscal year 2013. VA is responsible for administering health care and other benefits that directly affect the lives of about 22 million veterans and eligible members of their families. The department is to provide these services through the Veterans Health Administration, Veterans Benefits Administration, and the National Cemetery Administration. VA serves over 6 million patients at 151 medical centers, provides compensation and benefits for about 4 million veterans and beneficiaries, and maintains about 3 million gravesites at 131 properties. In carrying out its mission, VA collects and maintains sensitive medical records and personally identifiable information (PII) of veterans through the use of medical, administrative, and financial computer applications. For example, the department stores veterans’ admission, diagnosis, surgical procedure, and discharge information for each stay at a VA medical center, nursing home, or domiciliary, as well as PII such as Social Security numbers. Each of the medical centers, which are located around the country, uses local computer systems to run these standard applications. In addition, in providing oversight for disability assistance and economic opportunity to veterans, VA maintains information on services such as compensation, pension, insurance, and benefits assistance, as well as educational, loan, and vocational rehabilitation and employment services. In providing health care and other benefits to veterans and their dependents, VA relies on a vast array of information technology systems and networks, which support its operations and store sensitive information, including medical records and PII. Without proper safeguards, these computer systems are vulnerable to significant risks, including loss or theft of resources; inappropriate access to and disclosure, modification, or destruction of sensitive information; use of computer resources for unauthorized purposes or to launch attacks on other computer systems; and embarrassing security incidents that erode the public’s confidence in the agency’s ability to accomplish its mission. Cyber-based threats are evolving and growing and arise from a wide array of sources. These threats can be unintentional or intentional. Unintentional threats can be caused by software upgrades or defective equipment that inadvertently disrupt systems, as well as user error. Intentional threats can come from sources both internal and external to the organization. Internal threats include fraudulent or malevolent acts by employees or contractors. External threats include the ever-growing number of cyber-based attacks that can come from hackers, criminals, foreign nations, and other sources. These threat sources can exploit vulnerabilities such as those resulting from flaws in software code that could cause a program to malfunction. Reports of incidents affecting VA’s systems and information highlight the serious impact that inadequate information security can have on, among other things, the confidentiality, integrity, and availability of veterans’ personal information.
For example:

In January 2014, a software defect in VA’s eBenefits system—a web application used by over 2.8 million veterans to access information and services—improperly allowed users to view the personal information of other veterans. According to an official from VA’s Office of Information and Technology, this defect potentially allowed 5,399 users to view data of 1,301 veterans or their dependents.

In June 2013, VA’s former Chief Information Security Officer testified that in 2010 VA’s network had been compromised by uninvited visitors—nation-state-sponsored attackers—and that attacks had continued. He stated that these attackers were taking advantage of weak technical controls within VA, including those for web applications that contained common exploitable vulnerabilities. He further stated that these resulted in unchallenged and unfettered access to and exploitation of VA systems and information by this specific group of attackers.

The Federal Information Security Management Act of 2002 (FISMA) sets forth a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. FISMA requires each agency to, among other things, develop, document, and implement an agency-wide information security program, using a risk-based approach to information security management. Such a program includes planning, implementing, evaluating, and documenting remedial actions to address information security deficiencies. The act also assigned the National Institute of Standards and Technology (NIST) responsibility for developing standards and guidelines that include minimum information security requirements. For example, NIST specifies requirements for testing vulnerabilities, remediating them, and developing plans of action and milestones for information systems. At VA, the Assistant Secretary for Information and Technology, who serves as the agency’s Chief Information Officer (CIO), is responsible for ensuring that information systems operate at an acceptable level of risk. The CIO reports annually to the head of VA on the overall effectiveness of VA’s information security program, including the progress of remedial actions. The CIO designated a Chief Information Security Officer (CISO) who, among other things, manages the development and maintenance of information security policies, procedures, and control techniques to address applicable requirements. The CISO also heads the department’s Office of Information Security, which is responsible for the department’s system authorization process, including ensuring plans of action and milestones are maintained. Also within the Office of Information Security, the Network and Security Operations Center (NSOC) is responsible for performing vulnerability and compliance scans. Among other things, NSOC may detect incidents such as network intrusions, test web applications for security vulnerabilities, and scan VA’s network to test devices connected to the network for known vulnerabilities. In addition, under the direction of the CIO, the Deputy CIO for Service Delivery and Engineering and system owners are responsible for the overall procurement, development, integration, modification, daily operation, maintenance, and disposal of VA information and information systems.
These responsibilities include ensuring (1) documentation and approval of the secure baseline configuration for each system by the authorizing official prior to implementation; (2) compliance with federal security requirements and VA security policies; and (3) remediation, updating of plans of action and milestones, and completion of other reviews. As we recently testified, VA has faced long-standing challenges in effectively implementing its information security program. Specifically, from fiscal year 2007 through 2013, VA consistently had weaknesses in key information security control areas. In addition, in fiscal year 2013, the department’s independent auditor reported, for the 12th year in a row, that weaknesses in information system controls over financial systems constituted a material weakness. Further, the department’s inspector general has identified development of an effective information security program and system security controls as a major management challenge for VA. These findings are consistent with challenges GAO has identified in VA’s implementation of its security program going back to the late 1990s. While VA has taken actions to mitigate previously identified security vulnerabilities, these actions were insufficient to ensure that the weaknesses were fully addressed. Specifically, VA took steps to contain and eradicate an incident involving intrusion of its network, but these activities were not fully effective. In addition, VA took insufficient actions to address vulnerabilities in two key web applications. Finally, weaknesses identified on VA’s workstations (e.g., laptop computers) had not been corrected in a timely manner. Collectively, these weaknesses increase the risk that sensitive data—including veterans’ personal information—could be compromised. Upon detection of an incident, NIST requires that agencies document actions taken in analyzing, containing, eradicating, and recovering from the incident. Specifically, agencies should create follow-up reports for each incident and keep them for a period of time as specified in record retention policies. Organizations should establish a policy for how long evidence from an incident should be retained, taking into account factors such as providing evidence for law enforcement, data retention policies, and cost. Moreover, NIST directs agencies to follow National Archives and Records Administration (NARA) guidance, which states that agency records related to computer security incident handling should be maintained for 3 years. NIST guidance also notes the importance of agencies having tools in place to aid in incident response. VA took actions to contain and eradicate an incident detected in 2012 involving an attack by malicious outsiders. VA’s NSOC had analyzed the scope of the incident and documented actions taken in response. For example, center staff identified hosts that they believed were affected by the event and took actions to eradicate the effects from these hosts. They documented the actions taken to address the incident to the point where they believed the incident had been successfully remediated. However, VA could not provide sufficient documentation to demonstrate that these actions were effective. This is consistent with our findings from a recent government-wide review, in which we estimated that agencies were not able to effectively demonstrate actions taken in response to detected incidents in about 65 percent of cases.
For this particular incident at VA, staff could not locate the associated forensic analysis report or other key materials. Officials explained that digital evidence was maintained for only 30 days due to storage space constraints. As a result, we could not determine the effectiveness of actions taken to address this incident. Subsequent to this incident, VA established a standard operating procedure that requires the forensic analysis report and related documentation to be maintained for 6 years but allows digital evidence collected during a forensic analysis to be purged 1 month after the completion of the associated forensic analysis report. However, purging such evidence after 1 month is not consistent with NIST-recommended NARA guidance, which calls for records related to computer security incident handling to be maintained for at least 3 years. Without maintaining evidence of incidents, VA cannot demonstrate the effectiveness of its incident response activities and will be unable to use these records to assist in handling future incidents or aiding law enforcement authorities in investigating and prosecuting crimes. In addition, VA has not yet addressed an underlying vulnerability that contributed to the intrusion. Specifically, VA had planned to implement a solution in February 2014 that would have corrected the weakness, but at the time of our review, the solution had not been implemented. VA did take other actions to mitigate the weakness—specifically, limiting the use of the affected system. However, this is insufficient to prevent recurrence of a similar incident. Until this weakness is fully addressed, or additional mitigating controls are applied, unnecessary risk exists that an incident of this type could recur. More broadly, NSOC did not have sufficient visibility into VA’s computer networks. NIST Special Publication 800-61 states that incident response policies should identify the roles, responsibilities, and levels of authority for those implementing incident response activities. However, VA’s policies did not define the authority for NSOC’s access to logs of activity on VA’s network that are collected at VA’s data centers. As a result, the NSOC cannot be assured that the incident was effectively contained and eradicated from VA’s network. As we reported in April 2014, VA’s incident response policies defined roles and responsibilities but did not include authorities for the incident response team. Accordingly, we recommended, among other things, that VA revise its policies for incident response by including requirements for defining the incident response team’s level of authority. VA concurred with this recommendation. Implementing this recommendation should include providing the NSOC with appropriate authority to review logs of activity on VA’s network. NSOC has initiatives under way to further improve incident response capabilities. For example, it is performing an analysis to determine how best to further restrict access to the VA network and is planning to purchase new incident response tools. However, it has not established a time frame for completing these actions. As noted in our prior work, elements such as specific actions, priorities, and milestones are desirable for evaluating progress, achieving results within specific time frames, and ensuring effective oversight and accountability. Until VA’s NSOC establishes such elements, it remains to be seen whether the initiatives will improve its incident response capabilities.
Without assurance that incidents have been effectively contained and eradicated, or the underlying weaknesses effectively mitigated, VA is at increased risk that veterans’ PII and other sensitive data may be illicitly modified, disclosed, or lost. NIST guidance and VA policy both require applications to be tested prior to authorization in order to detect security weaknesses or vulnerabilities. NIST also recommends that organizations develop plans of action and milestones to address these weaknesses. Such plans provide a prioritized approach to risk mitigation and can be used by officials to monitor progress in correcting identified weaknesses. NSOC tests VA’s web applications as part of VA’s system authorization process and also conducts tests to validate that corrective actions have been taken to remediate identified vulnerabilities. For two high-impact web applications we reviewed, NSOC had identified four vulnerabilities that it considered high risk for each of the applications. For one of the applications, it also identified a critical vulnerability affecting the protection of PII. As of June 2014, VA had corrected six of the nine identified vulnerabilities, including the critical PII vulnerability, which it had corrected within 1 week of discovery. However, correction of one of the vulnerabilities had not yet been validated by NSOC for one of the web applications—and had been outstanding for over a year—and two had not yet been validated for the other application. Table 1 shows the status of the nine identified critical and high-risk vulnerabilities. VA did not provide evidence that it had developed plans of action and milestones for the identified vulnerabilities for which mitigation activities had not been completed. Without plans of action and milestones for correcting high-risk vulnerabilities, VA has less assurance that these weaknesses will be corrected in a timely and effective manner. This, in turn, could lead to unnecessary exposure of veterans’ sensitive data that are maintained by these applications. Various tools, such as “static analysis” tools, can scan software source code, identify root causes of software security vulnerabilities, and correlate and prioritize results. NIST states that vulnerability analyses for custom software applications may require additional approaches, such as static analysis. This type of analysis can help developers identify and reduce or eliminate potential flaws. However, VA did not conduct such analyses for both of the web applications we reviewed. According to VA officials from the Office of Cybersecurity, the department began conducting source code reviews using a static analysis tool in January 2013. Although developers for both of the applications had received the scanning tool, only developers for one of the applications had begun performing source code scans at the time of our review. According to VA officials, they have drafted a policy requiring the use of static analysis tools and it is in the executive approval process. Until VA ensures that its key web applications undergo source code scanning, it risks not detecting critical security vulnerabilities. NIST guidance and VA policy both require periodic vulnerability scanning, including scanning for patch levels; assessment of risk and risk-based decisions; and tracking and verifying remedial actions, such as applying patches to identified vulnerabilities. In addition, a 2012 VA memo requires that critical patches be applied within 30 days. 
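A fixed remediation window of this kind lends itself to a mechanical check against scan output. The sketch below, in Python with hypothetical patch identifiers, dates, and host counts (not VA's actual scan data), flags missing critical patches that have been available longer than the 30-day window:

from datetime import date, timedelta

PATCH_WINDOW = timedelta(days=30)  # VA's 2012 requirement for critical patches
SCAN_DATE = date(2014, 5, 1)       # as-of date of the scan summary being checked

# Hypothetical scan findings: (patch identifier, vendor release date,
# number of hosts still missing the patch).
missing_patches = [
    ("PATCH-A", date(2013, 10, 1), 286700),
    ("PATCH-B", date(2014, 1, 15), 9200),
    ("PATCH-C", date(2014, 4, 20), 12400),
]

for patch_id, released, hosts in missing_patches:
    overdue = (SCAN_DATE - released) - PATCH_WINDOW
    if overdue > timedelta(0):
        print(f"{patch_id}: missing on {hosts:,} hosts, "
              f"{overdue.days} days past the 30-day window")

In practice such a check would be driven by the scanner's exported results rather than a hard-coded list, but the comparison logic is the same.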
VA reiterated the 30-day patching requirement in a February 2014 memorandum on patch management and elaborated on its policy. Specifically, the 2014 memorandum states, among other things, that in cases where patches cannot be applied or impact availability, features, or functionality, the department will work with system personnel to develop short-term compensating controls and longer-term plans to migrate to newer platforms, hardware, and/or technologies where security patches can be applied and new security features enabled.

VA periodically scans its network devices, predominantly workstations (e.g., laptop computers), to detect vulnerabilities that software vendors have identified. The department's NSOC scans workstations across VA's network at least monthly and develops executive summaries that show, among other things, the most critical vulnerabilities, such as those requiring patches to remediate them. However, VA has not always addressed these vulnerabilities in a timely fashion consistent with department policy. As of May 2014, the 10 most prevalent critical vulnerabilities identified by VA's scans were software patches that had not been applied. These missing patches had been available for periods ranging from 4 to 31 months; each of the 10 missing patches occurred multiple times across the network, with counts ranging from approximately 9,200 to 286,700; and each patch was intended to mitigate multiple known vulnerabilities, ranging from 5 to 51, with an average of about 30 and a total of 301 vulnerabilities.

One reason that some of these vulnerabilities continued to exist is that VA decided not to apply patches for the top three vulnerabilities until further testing could determine the effect the patches would have on various applications. However, this decision was not timely. The decision memorandum was dated April 2014, even though the patches covered by the decision had been available for 3 to 10 months, exceeding the 30-day period for critical patches. In this decision memo, the department did not describe whether it had developed compensating controls to address instances where patches were not applied, nor did it discuss longer-term plans to migrate to newer platforms, hardware, and/or technologies where security patches can be applied and new security features enabled, as called for by its 2014 patch management memorandum. For the other patches, VA did not provide any documentation of decisions not to apply them. At the end of our audit, VA officials told us they had implemented compensating controls, but did not provide sufficient detail for us to evaluate their effectiveness. Without applying patches or developing compensating controls, VA increases the risk that known vulnerabilities could be exploited, potentially exposing veterans' information to unauthorized modification, disclosure, or loss.

Our findings are consistent with those of VA's Office of Inspector General (OIG), which identified patch management as an issue in its fiscal year 2013 FISMA report. Specifically, the report identified significant deficiencies in configuration management controls intended to ensure that VA's critical systems have appropriate security baselines and up-to-date vulnerability patches. The OIG found that VA had unsecure web application servers, excessive permissions on database platforms, a significant number of outdated and vulnerable third-party applications and operating system software, and a lack of common platform security standards across the department.
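The timeliness finding above amounts to a date comparison against the 30-day policy, and the vulnerability counts reduce to simple arithmetic. The sketch below is hypothetical: the patch identifiers, release dates, and counts are illustrative values, not VA's actual scan data.

```python
from datetime import date

POLICY_DAYS = 30  # VA's 2012 memo: critical patches applied within 30 days

# Hypothetical scan records shaped like the summaries described above:
# (patch identifier, date the vendor released it, vulnerabilities mitigated)
missing_patches = [
    ("patch-A", date(2013, 6, 1), 51),
    ("patch-B", date(2014, 1, 15), 5),
]

def overdue(patches, as_of):
    """Flag patches outstanding longer than the 30-day policy window."""
    return [(pid, (as_of - released).days)
            for pid, released, _ in patches
            if (as_of - released).days > POLICY_DAYS]

print(overdue(missing_patches, date(2014, 5, 1)))
# Both sample patches are overdue: 334 and 106 days outstanding.

# The report's totals also imply the stated average of about 30
# vulnerabilities per missing patch: 301 vulnerabilities across 10 patches.
print(301 / 10)  # 30.1
```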
To address these issues, the OIG recommended that VA implement a patch and vulnerability management program. In its response to the report, VA stated that in February 2013 it had implemented vulnerability scanning, that it continued to build on and improve its patch and vulnerability management program, and that the OIG's recommendation should therefore be closed. However, as our findings suggest, the department has not yet effectively implemented a program to manage vulnerabilities and apply associated patches. Until it does so, it will remain at increased risk that known vulnerabilities could be exploited.

In addition, the scanning procedures that VA used may not detect certain vulnerabilities. Specifically, for Windows systems, VA scanned in "authenticated" mode, but for other systems, such as Linux, its scans were performed in "unauthenticated" mode. The vendor of the scanning tool used by VA recommends scanning in authenticated mode. The unauthenticated scans cannot check for certain patches, potentially allowing multiple vulnerabilities on these systems to go undetected. This increases the risk that VA would not detect vulnerabilities and take steps to mitigate them, which could allow users to escalate privileges, crash the system, gain administrator access, or manipulate network traffic.

VA also has an initiative under way to facilitate the remediation of known vulnerabilities. In May 2013, it established an organization tasked with the following responsibilities:
overseeing the Service Delivery and Engineering group's process for identifying, prioritizing, and remediating vulnerabilities on VA information systems;
ensuring baseline configurations and security standards are updated as new vulnerabilities are discovered and remediated;
ensuring software standards are continually reviewed and updated and that installed software versions comply with these standards;
identifying, collecting, analyzing, and reporting performance metrics to measure the effectiveness of the patch and vulnerability management, baseline configuration maintenance, and software standards maintenance processes; and
proposing changes to improve these processes.
This organization has taken initial steps to carry out its responsibilities. For example, it plans to create a database to track remediation and patch implementation. However, VA has yet to identify specific actions, priorities, and milestones for accomplishing these tasks. As noted previously, elements such as specific actions, priorities, and milestones are desirable for evaluating progress, achieving results within specific time frames, and ensuring effective oversight and accountability. Until VA establishes these elements for the new organization, it does not have assurance that these efforts will be effective.

Ensuring effective security over its information and systems continues to be a challenge for VA. While the department has taken steps to respond to incidents and identify and mitigate vulnerabilities, more can be done to fully address these issues. Specifically, by not keeping sufficient records of its incident response activities, VA lacks assurance that incidents have been effectively addressed and may be less able to effectively respond to future incidents. In addition, without fully addressing an underlying vulnerability that allowed a serious intrusion to occur, increased risk exists that such an incident could recur.
While VA has efforts under way to improve its incident response capabilities, until it identifies specific actions, priorities, and milestones for completing these efforts, it will be difficult to gauge its progress. Further, limitations in VA's approach to identifying and addressing vulnerabilities in key web applications, such as not developing plans of action and milestones to address identified vulnerabilities and not scanning all application source code for defects, put veterans' sensitive information at greater risk of compromise. Moreover, VA has yet to fully implement an effective program for identifying and mitigating vulnerabilities in workstations and other network devices, including applying security patches, performing an appropriate level of scanning, and identifying compensating controls and mitigation plans. These shortcomings leave its networks and devices susceptible to exploitation of known security vulnerabilities. While the department has established an organization intended to improve remediation efforts, without identifying specific actions, priorities, and milestones for accomplishing these tasks, this organization's effectiveness will be limited.

To address previously identified security vulnerabilities, we are recommending that the Secretary of Veterans Affairs take the following eight actions:
1. Update the department's standard operating procedure to require evidence associated with security incidents to be maintained for at least 3 years, consistent with NARA guidance.
2. Fully implement the solution to address the weaknesses that led to the 2012 intrusion incident.
3. Establish time frames for completing planned initiatives to improve incident response capabilities.
4. Develop plans of action and milestones for critical and high-risk vulnerabilities affecting two key web applications.
5. Finalize and implement the policy requiring developers to conduct source code scans on key web applications.
6. Apply missing critical security patches within established time frames or, in cases where security patches cannot be applied, document compensating controls or, as appropriate, longer-term plans to migrate to newer platforms, hardware, and/or technologies where security patches can be applied and new security features enabled.
7. Scan non-Windows network devices in authenticated mode.
8. Identify specific actions, priorities, and milestones for accomplishing tasks to facilitate vulnerability remediation.

We provided a draft of this report to VA for review and comment. In its written comments (reprinted in appendix II), VA stated that it generally agreed with our conclusions and concurred with our recommendations. VA also stated that it has already taken actions to address six of our eight recommendations and has plans in place to address the remaining two. Although we have not yet validated the actions described or determined whether they effectively address the issues raised in this report, we are concerned that the actions VA described as completed for at least two of the six recommendations may not comprehensively address the weaknesses we identified. Specifically, for our recommendations related to applying critical security patches and establishing milestones and priorities for facilitating vulnerability remediation, VA's comments focus on its monthly scans, among other things, but do not address the application of patches or the identification of milestones and priorities.
In this report, we recognize the importance of the monthly scans conducted by the department in accordance with NIST guidance and VA policy. While we acknowledge that VA has efforts underway to address previously identified weaknesses, until it comprehensively and effectively addresses the weaknesses, sensitive personal information entrusted to the department will be at increased risk of unauthorized access, modification, disclosure, or loss. We believe that our recommendations, if effectively implemented, should help the department improve its security posture. We intend to monitor VA’s implementation of our recommendations. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 4 days from the report’s date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Gregory C. Wilshusen at (202) 512-6244 or Dr. Nabajyoti Barkakati at (202) 512-4499. We can also be reached by e-mail at wilshuseng@gao.gov and barkakatin@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objective was to determine the extent to which selected, previously identified vulnerabilities continue to exist on Department of Veterans Affairs (VA) computer systems. To address this objective, we reviewed actions taken to address vulnerabilities that had been identified by VA’s Network and Security Operations Center (NSOC). Specifically, we reviewed the details of a critical incident that NSOC had detected in which VA’s network had been compromised and the department’s efforts to respond to it. We selected this incident because it was highlighted in a June 2013 testimony by VA’s former Chief Information Security Officer. We reviewed a detailed investigation report prepared by NSOC and interviewed center officials regarding actions taken to detect, analyze, contain, eradicate, and recover from this incident. We also reviewed an internal memorandum related to an underlying vulnerability that contributed to this incident. We compared VA’s efforts to address this incident to National Institute of Standards and Technology (NIST) guidance on security controls and incident handling. We also reviewed VA’s standard operating procedure for forensics analysis and compared it to guidance issued by the National Archives and Records Administration. We also reviewed a prior GAO report on agencies’ (including VA’s) incident response practices. Further, we interviewed NSOC officials to determine what initiatives the department has planned or under way to further improve incident response capabilities. We also reviewed vulnerabilities NSOC had identified in two key VA web applications. We selected these applications based on their processing of veterans’ sensitive personally identifiable information. For these web applications, we reviewed the results of NSOC testing, particularly findings that the testers had categorized as critical or high risk, and compared the dates the vulnerabilities were identified and the dates corrective actions were validated. 
We also met with VA information security officials and web application developers to determine (1) if plans of action and milestones had been developed for uncorrected vulnerabilities and (2) the extent to which the department was using tools to conduct software code reviews in order to identify root causes of software vulnerabilities. We evaluated VA's actions in accordance with NIST guidance on security testing, developing plans of action and milestones, and vulnerability analysis (NIST Special Publication 800-53, rev. 4) and VA's policy on testing applications prior to authorization.

We conducted this performance audit from February 2014 to November 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings based on our audit objective.

In addition to the contacts named above, Jeffrey Knott, Lon Chin, Harold Lewis, and Chris Warweg (assistant directors); Jennifer R. Franks; Lee McCracken; and Tyler Mountjoy made key contributions to this report. | In carrying out its mission to ensure the health, welfare, and dignity of the nation's veterans, VA relies extensively on information technology systems that collect, process, and store veterans' sensitive information. Without adequate safeguards, these systems and information are vulnerable to a wide array of cyber-based threats. Moreover, VA has faced long-standing challenges in adequately securing its systems and information, and reports of recent incidents have highlighted the serious impact of inadequate information security on the confidentiality, integrity, and availability of veterans' personal information. GAO was asked to review VA's efforts to address information security vulnerabilities. The objective for this work was to determine the extent to which selected, previously identified vulnerabilities continued to exist on VA computer systems. To do this, GAO reviewed VA actions taken to address previously identified vulnerabilities, including a significant network intrusion, vulnerabilities in two key web-based applications, and security weaknesses on devices connected to VA's network. GAO also reviewed the results of VA security testing; interviewed relevant officials and staff; and reviewed policies, procedures, and other documentation. While the Department of Veterans Affairs (VA) has taken actions to mitigate previously identified vulnerabilities, it has not fully addressed these weaknesses. For example, VA took actions to contain and eradicate a significant incident detected in 2012 involving a network intrusion, but these actions were not fully effective: The department's Network and Security Operations Center (NSOC) analyzed the incident and documented actions taken in response. However, VA could not produce a report of its forensic analysis of the incident or the digital evidence collected during this analysis to show that the response had been effective. VA's procedures do not require all evidence related to security incidents to be kept for at least 3 years, as called for by federal guidance. As a result, VA cannot demonstrate the effectiveness of its incident response and may be hindered in assisting in related law enforcement activities. VA has not addressed an underlying vulnerability that allowed the incident to occur.
Specifically, the department has taken some steps to limit access to the affected system, but, at the time of GAO's review, VA had not fully implemented a solution for correcting the associated weakness. Without fully addressing the weakness or applying compensating controls, increased risk exists that such an incident could recur. Further, VA's policies did not provide the NSOC with sufficient authority to access activity logs on VA's networks, hindering its ability to determine if incidents have been adequately addressed. In an April 2014 report, GAO recommended that VA revise its incident response policies to ensure the incident response team had adequate authority, and VA concurred. Further, VA's actions to address vulnerabilities identified in two key web applications were insufficient. The NSOC identified vulnerabilities in these applications through testing conducted as part of the system authorization process, but VA did not develop plans of action and milestones for correcting the vulnerabilities, resulting in less assurance that these weaknesses would be corrected in a timely and effective manner. Finally, vulnerabilities identified in VA's workstations (e.g., laptop computers) had not been corrected. Specifically, 10 critical software patches had been available for periods ranging from 4 to 31 months without being applied to workstations, even though VA policy requires critical patches to be applied within 30 days. There were multiple occurrences of each missing patch, ranging from about 9,200 to 286,700, and each patch was to address an average of 30 security vulnerabilities. VA decided not to apply 3 of the 10 patches until it could test their impact on its applications; however, it did not document compensating controls or plans to migrate to systems that support up-to-date security features. While the department has established an organization to improve its vulnerability remediation, it has yet to identify specific actions and milestones for carrying out related responsibilities. Until VA fully addresses previously identified security weaknesses, its information is at heightened risk of unauthorized access, modification, and disclosure and its systems at risk of disruption. GAO is making eight recommendations to VA to address identified weaknesses in incident response, web applications, and patch management. In commenting on a draft of this report, VA stated that it concurred with GAO's recommendations. |
The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA) replaced the individual entitlement to benefits under the 61-year-old Aid to Families with Dependent Children (AFDC) program with TANF block grants to states and emphasized the transitional nature of assistance and the importance of reducing welfare dependence through employment. Administered by HHS, TANF provides states with $16.5 billion each year, and in fiscal year 2002, the total TANF caseload consisted of 5 million recipients. PRWORA provides states with the flexibility to set a wide range of TANF program rules, including the types of programs and services available and the eligibility criteria for them. States may choose to administer TANF directly, devolve responsibility to the county or local TANF offices, or contract with nonprofit or for-profit providers to administer TANF. Some states have also adopted "work first" programs, in which recipients typically are provided orientation and assistance in searching for a job; they may also receive some readiness training. Only those unable to find a job after several weeks of job search are then assessed for placement in other activities, such as remedial education or vocational training.

While states have great flexibility to design programs that meet their own goals and needs, they must also meet several federal requirements designed to emphasize the importance of work and the temporary nature of TANF aid. For example, TANF established stronger work requirements for those receiving cash benefits than existed under AFDC. Furthermore, to avoid financial penalties, states must ensure that a specified minimum percentage of adult recipients participate in work or work-related activities each year, and this minimum rises steadily over time. To count toward the state's minimum participation rate, adult TANF recipients in families must participate in work or work-related activities for a minimum number of hours a week, including subsidized or unsubsidized employment, work experience, community service, job search, providing child care for other TANF recipients, and (under certain circumstances) education and training. If recipients refuse to participate in work activities as required, states must impose a financial sanction on the family by reducing the benefits, or they may opt to terminate the benefits entirely. States must also enforce a 60-month limit (or less at state option) on the length of time a family may receive federal TANF assistance, although the law allows states to provide assistance beyond 60 months using state funds.

The TANF caseload includes, as did AFDC, low-income individuals with physical or mental impairments considered severe enough to make them eligible for the federal SSI program. Administered by SSA, SSI is a means-tested income assistance program that provides essentially permanent cash benefits for individuals with a medically determinable physical or mental impairment that has lasted or is expected to last at least 1 year or to result in death and prevents the individual from engaging in substantial gainful activity. To qualify for SSI, an applicant's impairment must be of such severity that the person is not only unable to do previous work but is also unable to do any other kind of substantial gainful work that exists in the national economy. Work is generally considered substantial and gainful if the individual's earnings exceed a particular level established by statute and regulations.
SSA also administers the Disability Insurance program (DI), which uses the same definition of disability, but is not means-tested and requires an individual to have a sufficient work history. For both DI and SSI, SSA uses the Disability Determination Service (DDS) offices to make the initial eligibility determinations. If the individual is not satisfied with this determination, he or she may request a reconsideration of the decision with the same DDS. Another DDS team will review the documentation in the case file, as well as any new evidence, and determine whether the individual meets SSA's definition of disability. If the individual is not satisfied with the reconsideration, he or she may request a hearing before an Administrative Law Judge (ALJ). The ALJ conducts a new review and may hear testimony from the individual, medical experts, and vocational experts. If the individual is not satisfied with the ALJ decision, he or she may request a review by SSA's Appeals Council, which is the final administrative appeal within SSA. Despite recent improvements, completing the entire process, including all administrative appeals, can take over 2 years on average. In most states, SSI eligibility also entitles individuals to Medicaid benefits. TANF recipients may apply for Medicaid benefits and are likely to qualify, but receipt of TANF benefits does not automatically qualify a recipient for Medicaid.

While SSA has recently expanded policies and initiated demonstration projects aimed at helping DI and SSI beneficiaries enter or return to the workforce and achieve or at least increase self-sufficiency, its disability programs remain grounded in an approach that equates impairment with inability to work. This approach exists despite medical advances and economic and social changes that have redefined the relationship between impairment and the ability to work. The disconnect between SSA's program design and the current state of science, medicine, technology, and labor market conditions, along with similar challenges in other programs, led GAO in 2003 to designate modernizing federal disability programs, including DI and SSI, as a high-risk area urgently needing attention and transformation.

The Ticket to Work and Work Incentives Improvement Act of 1999 amended the Social Security Act to create the Ticket to Work and Self-Sufficiency Program (Ticket Program). This program provides most DI and SSI beneficiaries with a voucher, or "ticket," which they can use to obtain vocational rehabilitation, employment, or other return-to-work services from an approved provider of their choice. The program, while voluntary, is only available to beneficiaries after the lengthy eligibility determination process. Once an individual receives the ticket, he or she is free to choose whether or not to use it, as well as when to use it. Generally, disability beneficiaries age 18 through 64 are eligible to receive tickets. The Ticket Program has been implemented in phases and is to be fully implemented in 2004.

The Social Security Advisory Board (Advisory Board) has questioned whether Social Security's definition of disability is appropriately aligned with national disability policy. The definition of disability requires that individuals with impairments be unable to work, yet once found eligible for benefits, individuals receive positive incentives to work.
Yet the disability management literature has emphasized that the longer an individual with an impairment remains out of the workforce, the more likely the individual is to develop a mindset of not being able to work and the less likely the individual is ever to return to work. Having to wait for return-to-work services until determined eligible for benefits may be inconsistent with the desire of some individuals with impairments who want to work but still need financial and medical assistance. The Advisory Board, in recognizing that these inconsistencies need to be addressed, has suggested some alternative approaches. One option the Board discussed in a recent report is to develop a temporary program that would be available while individuals with impairments are waiting for eligibility determinations for the current program. This temporary program might have easier eligibility rules and different cash benefit levels but stronger and more individualized medical and other services needed to support a return to work.

SSA has also realized that one approach may not work for all beneficiaries, and in recent years it has begun to develop different approaches for providing assistance to individuals with disabilities. One example of these efforts is the proposed Temporary Allowance Demonstration, which would provide immediate cash and medical benefits for a specified period to individuals who meet SSA's definition of disability and who are highly likely to benefit from aggressive medical care. SSA is also in the process of developing the Early Intervention Demonstration. This demonstration project will test alternative ways to provide employment-related services to disability applicants. Although both of these demonstration projects cover only the DI program, SSA also has the authority to conduct other demonstration projects with SSI applicants and recipients.

Estimates from our nationwide survey of county TANF offices indicated that almost all offices reported that they refer at least some recipients with impairments to apply for SSI. But the level of encouragement these individuals receive from their local TANF office to apply for SSI varies, with many offices telling the individual to apply for SSI and some offices helping the recipient complete the application. Because TANF offices are referring individuals to SSI, these referrals will have some effect on the SSI caseload. However, findings regarding the impact that these SSI referrals from TANF have on SSI caseload growth are inconclusive, due to data limitations.

Based on estimates from our survey, 97 percent of all counties refer at least some of their adult TANF recipients with impairments to SSA to apply for SSI. As table 1 shows, 33 percent of county TANF offices said that it is their policy to refer to SSI only those adults whose impairments are identified as limiting or preventing their ability to work. However, another 32 percent of county TANF offices said that it is their policy to refer all TANF recipients identified with impairments to SSI for eligibility determinations. TANF offices reported that they rely on several methods to identify an individual's impairment and assess whether the individual could work or should be referred to SSI. Estimates from our survey indicated that all county offices rely on the applicant to disclose his or her impairment. In addition, 96 percent of all counties rely on caseworker observation, about 57 percent use a screening tool, and about 60 percent use an intensive assessment.
Once recipients are identified as having impairments, TANF offices need to decide which individuals to refer to SSI. As table 2 shows, many counties rely on multiple forms of documentation or other information to make this decision, rather than referring all individuals with impairments. Specifically, 94 percent of all counties reported that they use documentation from a recipient's physician, and 95 percent reported that they use self-reported information from the recipient.

While nearly all county TANF offices reported that they refer at least some individuals with impairments to SSI, the level of encouragement such individuals receive from their local TANF office appears to vary. About 98 percent of county TANF offices reported that they tell these recipients to call or go to SSA to apply for SSI. About 61 percent reported that they will also assist a recipient in completing the SSI application, and about 74 percent reported that they follow up to ensure the application process is complete. Some of the variation in the level of encouragement may be explained by the fact that some states are work first states. Officials we interviewed in four states acknowledged that they try to get all TANF recipients to work, including recipients with impairments. Therefore, while they make referrals to SSI, officials in these work first states told us that they try to encourage work more than the SSI application process. However, officials in all five of the states we visited stated that if they feel an individual has a severe impairment, they would have the individual apply for SSI.

Since county TANF offices refer individuals with impairments to SSI, these referrals will have some effect on the SSI caseload. To determine the magnitude of the effect that these TANF referrals have had on SSI caseload growth, SSA would need to know who among its applicants are TANF recipients. However, SSA headquarters officials told us that the agency does not know who is referred or how people are referred because it does not collect those data. Although the SSI application specifically asks whether the applicant is receiving TANF, this information is combined with other income assistance based on need in SSA's database. Therefore, while the working-age (18-64) SSI caseload has increased 33 percent over the last decade, SSA does not have an easy way to accurately determine the magnitude of the effect that the TANF referrals have had on the growth of the SSI rolls.

Also, in a study funded by SSA and conducted by The Lewin Group, researchers found little, if any, evidence that TANF had increased referrals to SSI. Officials in only one of the five states the researchers visited noted a perceptible increase in transitions to SSI. The authors noted that the likely reason for not finding a significant increase in referrals due to welfare reform is the fact that referrals to SSI had already been occurring under AFDC, and that the full impact of the welfare reform changes would not be known until the time limit for benefit receipt had elapsed. However, to date, no studies have examined this issue. In addition to SSA not knowing the magnitude of the effect that TANF referrals have had on SSI caseload growth, TANF officials we interviewed stated that they generally do not have historical data on SSI referrals, approvals, and denials.
But officials in most states that we visited said they are in the process of improving their data collection in this respect, including tracking methods to determine the status of an SSI application, which should provide them with better data in the future.

TANF offices vary in whether they make work requirements mandatory for their adult recipients with impairments awaiting SSI eligibility determinations. Even though estimates from our survey showed that 83 percent of county TANF offices reported offering noncash services to TANF recipients with impairments who are awaiting SSI eligibility determinations, these services may not be available or are not fully utilized. Reasons for this low service utilization may include exemptions from the work requirements and an insufficient number of job training or related services.

Estimates from our survey showed that about 86 percent of county TANF offices have policies that always or sometimes exempt adult TANF recipients with impairments who are referred to SSI for eligibility determinations from the work requirements. Also, about 31 percent of county TANF offices consider the number of times a recipient is denied and appeals an SSI decision as a factor when deciding to exempt recipients from the work requirements. Our survey further found that 82 percent of counties reported exempting recipients, in part, on the basis of the degree to which the impairment limits the recipient's ability to work. In addition, about 69 percent of county TANF offices reported that the severity of the impairment was a major factor in their decisions to exempt people with impairments who are awaiting SSI determinations from work requirements. One TANF official we interviewed told us that some recipients' impairments were too great for them to participate in work activities.

However, some of the state and county TANF officials we interviewed explained that they have developed alternative practices to help recipients with impairments participate in work activities. TANF officials from two of the states we visited told us that they have developed a modified work requirement for adult TANF recipients with impairments. A TANF official from one of these states said that the modified work requirements encourage individuals with impairments to work, but they do not expect that these individuals will be able to work in a full-time capacity. One county TANF official we interviewed explained that the work requirements and services provided for their recipients with impairments are very individualized, based on recommendations of the doctors who meet with the recipients. However, in all of the states and counties we visited, TANF officials said that individualized services can be costly. One state official said that his state's program does not have the funds to pay for the training needed by people with learning disabilities. The official added that when people with impairments need substantial help, there are limits to what can be funded in a work first state.

Even though about 51 percent of county TANF offices do not require adult TANF recipients awaiting SSI determinations to participate in any type of job services, education services, work experience programs, or other employment services, 83 percent of county TANF offices reported that they are still willing to provide work-related or support services to this population. One state official we interviewed reported that the services provided are the same for persons with or without impairments.
Officials in this state explained that these services include transportation, child care, medical assistance, tuition assistance, vocational rehabilitation, and assistance with obtaining SSI benefits. Even though county TANF offices may be willing to offer noncash services to their recipients, utilization of these services tended to be low among those counties that could provide us with information on service utilization.

While the low utilization of services may be due to exemptions from the work requirements, service availability may also be an issue. Estimates from our survey showed that 40 percent of county TANF offices reported that one reason adult TANF recipients with impairments who are awaiting SSI eligibility determinations do not participate in work activities is an insufficient number of job training or related services available for them to use. In addition, some TANF officials that we interviewed cited not only limited funding, but also their offices' own TANF policies as factors that might explain why services may not be available to recipients with impairments. For example, a state TANF official we interviewed said that state budget cuts have resulted in trimming of support services made available to recipients. Another state official explained that adult recipients with impairments who are placed in an exempted status are allowed access to medical services but not work-related support services, such as transportation, clothing, or vehicle repairs. The official further explained that those services are limited to those individuals who are in work activities.

In addition, estimates from our survey showed that 50 percent of county TANF offices reported that recipients' motivation to apply for SSI was one of the conditions that might challenge or hinder their offices in providing employment services. Some state and county TANF officials we interviewed also believe that one of the main reasons for the low utilization of services is recipients' fear of jeopardizing their SSI applications. While participation in a work activity does not necessarily preclude an individual from obtaining disability benefits from SSA, estimates from our survey showed that 41 percent of county TANF offices reported that their recipients with impairments who are awaiting SSI eligibility determinations are unsure whether the demonstration of any work ability would hinder or disqualify their chances for SSI eligibility. State and county TANF officials we interviewed explained that recipients applying for SSI or awaiting an SSI decision fear participating in work activities. Some of the county TANF officials we interviewed explained that this population does not want to participate in work-related services for fear of jeopardizing their applications. These officials noted that compounding recipients' fears are attorneys who may be attempting to protect their clients' interests by sending TANF offices notices saying that any work activity could jeopardize their clients' SSI applications. These fears have made it difficult for TANF workers to get their recipients with impairments to explore work options while they are applying for SSI. One state TANF official we interviewed pointed out that conversations with their recipients about work activities have generally occurred because the recipients want to volunteer for such activities.
A county TANF official explained that there is a challenge in providing work services to this population, as the recipients are so focused on getting on SSI that it is difficult to get them to focus on anything else. Yet another reason for the low use of noncash services is that some of the county TANF officials we interviewed expressed uncertainty about how best to serve their adult TANF recipients with impairments, explaining that they are sending mixed signals when it comes to encouraging work. One county TANF official we interviewed said that on one hand, recipients are being told about using TANF services to obtain employment, and then, on the other hand, recipients are being told to apply for SSI benefits, which require an applicant to focus on his or her inability to work.

Some TANF offices also allow TANF recipients with impairments to count applying for SSI as a work activity. Estimates from our survey showed that about 30 percent of county TANF offices reported that they consider the SSI application process an activity that satisfies the work requirement. Also, another county official we interviewed stated that if a client goes into an exempted status, the client must participate in at least one activity a week, but not necessarily a work activity. It can be any service the TANF office has to offer, including physical therapy or assistance in completing the SSI application.

Some county TANF offices have developed interactions with SSA offices, but such interactions have been of a limited nature and have focused on the SSI application process. Estimates from our survey indicated that some TANF offices have some form of interaction with SSA; two frequently reported forms of interaction were having a contact at SSA with whom to discuss cases and following up with SSA regarding applications for SSI. In describing his office's interactions with SSA, one state TANF official we interviewed said that his office, SSA, and DDS have a good working relationship, which includes cross training between the agencies and discussions concerning the SSI application process. However, estimates from our survey showed about 95 percent of county TANF offices reported that they would like to develop a relationship, or improve their relationship, with their local SSA field office with regard to adult TANF recipients applying for SSI. One state TANF official that we interviewed said that his office does not have much of a relationship with SSA. He noted that he had no contacts within SSA but would like to develop a formal relationship with DDS so that they could make faster determinations for the deferred TANF caseload. A county TANF official we interviewed said that her office's communication with SSA is largely one-sided. This TANF official explained that even though her office sends documentation that supports a recipient's SSI application, SSA does not inform the office of any eligibility decisions it makes with TANF applicants. As a result, TANF staff must rely on their recipients telling them about decisions or on a computer system that indicates if an individual is receiving benefits. Finally, in all of the states we visited, TANF officials told us that they interact with SSA to help their TANF recipients with impairments get onto SSI.
Estimates from our survey also showed that 64 percent of counties reported that their interactions were TANF officials following up with SSA regarding a recipient's SSI application, and 53 percent reported having a contact at SSA to discuss cases. TANF offices identified a number of ways they would like to improve interactions with SSA, but most of these focused on making the SSI application process more efficient and not on working together to assist TANF recipients with impairments toward employment and self-sufficiency. Estimates from our survey showed about 57 percent of the county TANF offices said that they would like to receive training from SSA regarding the SSI application process and eligibility requirements, 50 percent said they would like to have a contact at SSA with whom to discuss cases, and 41 percent said they would like to have regular meetings or working groups with SSA regarding interactions and other issues related to serving low-income individuals with impairments. In addition, one TANF official we interviewed said he would like interactions with SSA to improve and believed they could if he knew what DDS looks for in the application process, such as what evidence it requires. In contrast, only 6 percent of county TANF offices reported that they would like to improve interactions with SSA specifically related to providing SSA with information on employment-related services received while on TANF.

Although TANF offices reported an interest in developing a close working relationship with SSA, some state and county TANF officials believed, based on their interactions with SSA, that they had to take the lead in developing these relationships. For example, one TANF official we interviewed explained that he had attempted to make contact with SSA to discuss a potential partnership and address some of the county's issues with the SSI application process but received no response. The county official then wrote a letter to a top SSA regional official asking about partnering opportunities. In response, the regional official instructed the SSA area director, along with the local SSA and state DDS office, to meet with county officials.

One SSA headquarters official we interviewed told us there is no SSA policy that directs or encourages its field offices to interact with TANF offices. The official also told us that SSA would consider such a partnership with TANF offices but would want assurances of what the benefits would be for SSA. In addition, the official said that the agency does not want to start up a partnership that would overly tax its already high workloads. The official further said that if it were to develop a relationship with TANF offices, SSA would have to develop a training program and then administer it to all operations personnel. The official noted that developing and administering such a training program would not be a small task. SSA officials did state that if a TANF office makes a request for training sessions, SSA would be willing to provide training on the application process. However, about 27 percent of county TANF offices reported that they were discouraged in their attempts to establish a relationship with SSA because the local SSA field office told the TANF office that SSA did not have the time or the interest. While officials at SSA headquarters stated that they are largely unaware of any partnerships or interactions between TANF offices and local SSA field offices, some local SSA officials have found such relationships beneficial.
In particular, one SSA official has found his office's relationship with the local TANF office to be a form of outreach for SSA by helping his office identify people who would qualify for SSI. He explained that his local SSA office does not always have the time or staff to conduct outreach. He further explained that TANF case managers can explain the benefits and provide assistance to the TANF recipient applying for SSI. Thus, when a letter comes from the DDS that initially denies the claim, the individual is less likely to throw it away, as he or she is more aware of the process. This could save SSA time and money, as the applicant knows that he or she must appeal within a certain amount of time, thereby reducing the need to start over because of missed deadlines.

While 34 percent of those county TANF offices that provide services to recipients awaiting SSI eligibility determinations reported interacting with SSA in some manner to serve adult TANF recipients with impairments, a much higher proportion reported receiving assistance from other agencies or programs. For example, as table 3 shows, 91 percent of county TANF offices reported that at least some of their recipients awaiting SSI determinations received assistance from the state vocational rehabilitation agencies, and 86 percent of all offices reported that at least some of their recipients received assistance from the state or local mental health agency. Further, in all of the states we visited, TANF offices reported working with other agencies, such as the Department of Education and the Department of Labor, to help TANF recipients with impairments find work.

With the new emphasis on work and self-sufficiency adopted by TANF and SSI, and the overlap in the populations served by both programs, opportunities exist to improve the way these two programs interact in order to help individuals with impairments become more self-sufficient. While some interactions between TANF offices and SSA do exist, they are often limited to how best to help a TANF recipient with impairments become eligible for essentially permanent cash benefits under SSI. Moreover, the practice by most TANF offices of exempting individuals from work requirements while awaiting SSI eligibility determination, as well as SSA's policy of offering return-to-work services and incentives only after a lengthy eligibility process, undermines both programs' stated goals of promoting self-sufficiency. In addition, this practice runs counter to the disability management literature, which has emphasized that the longer an individual with an impairment remains out of the workforce, the less likely the individual is ever to return to work. In recognition of this, SSA is planning demonstration projects that will test alternative ways to provide benefits and employment supports to DI applicants. However, TANF recipients with impairments, because of their low income and assets, are more likely to apply for and qualify for SSI. Moreover, TANF recipients with impairments often receive assessments of their conditions and capacity to work while on TANF. Since SSA cannot easily identify who among its applicants are TANF recipients, SSA is also unable to systematically identify the types of services that the SSI applicant may have received through TANF or know whether the SSI applicant has been assessed as having the capacity to work or not.
Being able to identify the receipt of TANF benefits, as well as the noncash services received through TANF, may help SSA accomplish its mission of promoting the employment of beneficiaries with impairments. By sharing information and establishing better working relationships with TANF agencies, SSA could identify, among its applicants who are or were TANF recipients, those individuals capable of working and could then target them for employment-related services and help them achieve self-sufficiency or at least reduce their dependency on cash benefits. Although the disconnect in work requirements between TANF and SSA's disability programs and the timing of when employment-related services are provided to SSI recipients could be barriers to establishing a continuity of services, the earlier provision of employment-related services, as part of a demonstration project, could mitigate these potential barriers. While some county TANF officials we interviewed have developed working relationships with their local SSA office, other counties have not or may be unaware of the possibilities for interactions with SSA and how to go about establishing these relationships. Sharing best practices about how TANF agencies can distinguish, among the recipients they have referred to SSI, those individuals without the capacity to work from those with the capacity to work who could benefit from employment-related services could help ensure that individuals with work capacity are given the assistance they need to obtain employment. Moreover, sharing best practices for establishing useful interactions with SSA could help ensure that employment-related services could continue after the person becomes eligible for SSI.

To help individuals with impairments become more self-sufficient and to address the gap in continuous work services between the TANF and SSI programs, we are recommending that SSA, as part of a new demonstration project, work with TANF offices to develop screening tools, assessments, or other data that would identify those TANF recipients with impairments who, while potentially eligible for SSI, may also be capable of working. Once these recipients have been identified, the TANF offices and SSA could work together to coordinate aggressive medical care and employment-related services that would help the individual obtain employment and achieve or at least increase self-sufficiency.

To facilitate and encourage the sharing of information among TANF offices regarding the development of interactions with SSA that might increase self-sufficiency of recipients with impairments, we are recommending that HHS provide space on its Web site to serve as a clearinghouse for information regarding best practices and opportunities for TANF agencies to interact with SSA. This would allow state and county TANF officials to share information on what they are doing, what works, and how to go about establishing relationships with SSA. It would also provide states and counties with access to the research of federal agencies, state and county offices, and other researchers that they may need in order to develop a strong functional relationship with SSA and help TANF recipients with impairments move toward economic independence. HHS should be able to minimize its work and expense by using its Web site to share this information.

We provided a draft of this report to HHS and SSA for comment.
Both agencies generally agreed with our recommendations and indicated that they look forward to working together to help low-income individuals with impairments become more self-sufficient. Specifically, SSA stated that it would be pleased to work with HHS on the planning and design of a demonstration project. Likewise, HHS stated that it would be pleased to have its staff work with SSA to develop a process or criteria for identifying individuals who could benefit from employment services. In addition, in response to the findings of our report, SSA said it would take immediate measures to ensure that it responds to all requests from TANF offices for training on SSA's programs. Also in its comments, SSA suggested that we include in our report the fact that states may exempt up to 20 percent of their caseload from the time limits and that many states waive work requirements for persons applying for SSI. In both the draft we sent to SSA and the final version, we included a footnote explaining the time limit exemptions, and in the body of the report we discussed the issue of work requirement exemptions for persons applying for SSI. HHS's comments appear in appendix II, and SSA's comments appear in appendix III. In addition, both HHS and SSA provided technical comments, which we have incorporated as appropriate.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this letter. At that time, we will send copies to the Secretary of HHS, the Commissioner of Social Security, appropriate congressional committees, and other interested parties. The report is also available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions about this report, please contact me or Carol Dawn Petersen at (202) 512-7215. Other staff who made key contributions are listed in appendix IV.

To determine the extent that Temporary Assistance for Needy Families (TANF) recipients with impairments are encouraged to apply for Supplemental Security Income (SSI), whether work requirements are imposed, the range of services provided during the period of SSI eligibility determination, and the extent that interactions exist between the SSI and TANF programs, we conducted a nationally representative survey of 600 county TANF administrators from October 14, 2003, through February 20, 2004. For the most part, TANF services are provided at the county level, so we selected a random probability sample of counties for our survey. We derived a nationwide listing of counties from the U.S. Bureau of the Census's county-level file with 2000 census data and yearly population estimates for 2001 and 2002. We selected a total sample of 600 counties out of 3,141 counties. To select this sample, we stratified the counties into two groups. The first group consisted of the 100 counties in the United States with the largest populations, using the 2002 estimates. The second group consisted of the remaining counties in the United States. We included all of the 100 counties with the largest populations in our sample to ensure that areas likely to have large concentrations of TANF recipients were represented. From the second group, consisting of all the remaining counties, we selected a random sample of 500 counties. After selecting the sample of counties, we used the American Public Human Services Association's Public Human Services Directory (2002-2003) to determine the name and address of the TANF administrator for each county.
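The two-stratum design just described implies a standard base-weighting scheme, sketched below. This is a minimal illustration of the usual approach, assuming equal-probability selection within each stratum; GAO's actual weights and nonresponse adjustments are not published in this report, and the per-stratum response counts used here are hypothetical, chosen only so they sum to the 527 responses noted in the discussion that follows.

```python
def weight(population, responding):
    # With equal-probability selection within a stratum, weighting each
    # responding county by population / responding both inverts the
    # selection probability and adjusts for nonresponse:
    # (population / sampled) * (sampled / responding) = population / responding
    return population / responding

# Certainty stratum: all 100 largest counties sampled (88 responses is hypothetical).
print(weight(100, 88))     # ~1.14

# Remaining stratum: 500 of the other 3,041 counties sampled (439 responses is hypothetical).
print(weight(3041, 439))   # ~6.93

# 88 + 439 = 527, the overall response count reported in the text.
```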
In states with regional TANF programs, we asked the regional director to fill out a questionnaire for each county in the region. We obtained responses from 527 of 600 counties, for an overall response rate of about 88 percent. The responses are weighted to generalize our findings to all county TANF offices nationwide. Sample weights reflect the sampling procedure, as well as adjustments for nonresponse. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval of plus or minus 5 percentage points. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. In other words, we are 95 percent confident that the confidence interval will include the true value for the study population. In addition to the reported sampling errors, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages to mitigate such nonsampling errors. In addition to those named above, David J. Forgosh, Cady Summers, Megan Matselboba, Christopher Moriarity, and Luann Moy made key contributions to this report.

The nation's social welfare system has been transformed into a system emphasizing work and personal responsibility, primarily through the creation of the Temporary Assistance for Needy Families (TANF) block grant. The Supplemental Security Income (SSI) program has expanded policies to help recipients improve self-sufficiency. Given that SSA data indicate an overlap in the populations served by TANF and SSI, and the changes in both programs, this report examines (1) the extent to which TANF recipients with impairments are encouraged to apply for SSI and what is known about how SSI caseload growth has been affected by such TANF cases, (2) the extent to which work requirements are imposed on TANF recipients applying for SSI, and the range of services provided to such recipients, and (3) the extent to which interactions exist between the SSI and TANF programs to assist individuals capable of working to obtain employment. In our nationwide survey of county TANF offices, we found that nearly all offices reported that they refer recipients with impairments to SSI, but the level of encouragement to apply for SSI varies. While almost all of the county TANF offices stated that they advise such recipients with impairments to apply for SSI, 74 percent also follow up to ensure the application process is complete, and 61 percent assist recipients in completing the application. Because TANF offices are referring individuals with impairments to SSI, these referrals will have some effect on the SSI caseload. However, due to data limitations, the magnitude of the effect these referrals have on SSI caseload growth is uncertain. While SSA can identify whether SSI recipients have income from other sources, it cannot easily determine whether this income comes from TANF or some other assistance based on need.
In addition, past research has not found conclusive evidence regarding the impact that TANF referrals have on SSI caseload growth. Our survey estimates show that although some TANF offices impose work requirements on individuals with impairments, about 86 percent of all offices reported that they either sometimes or always exempt adult TANF recipients awaiting SSI determinations from the work requirements. One key reason for not imposing work requirements on these recipients is the existence of state and county TANF policies and practices that allow such exemptions. Nevertheless, county TANF offices, for the most part, are willing to offer noncash services, such as transportation and job training, to adult recipients with impairments who have applied for SSI. However, many recipients do not use these services. This low utilization may be related to exempting individuals from the work requirement, but it may also be due to the recipients' fear of jeopardizing their SSI applications. Another reason for the low utilization of services is that many services are not necessarily available; budgetary constraints have limited the services that some TANF offices are able to offer recipients with impairments. Many county TANF offices' interactions with SSA include either having a contact at SSA to discuss cases or following up with SSA regarding applications for SSI. Interactions that help individuals with impairments increase their self-sufficiency are even more limited. In all the states we visited, we found that such interactions generally existed between TANF agencies and other agencies (such as the Departments of Labor or Education). In addition, 95 percent of county TANF offices reported that their interactions with SSA could be improved. State and county TANF officials feel they have to take the lead in developing and maintaining the interaction with SSA. One SSA headquarters official stated that SSA has no formal policy regarding outreach to TANF offices but would consider a partnership provided there is some benefit for SSA. Still, about 27 percent of county TANF offices reported that they were discouraged in their attempts to establish a relationship with SSA because staff at the local SSA field office told them that they did not have the time or the interest.
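A note on the survey precision figures above: the plus or minus 5 percentage point margin can be approximated with a standard confidence interval for an estimated proportion. The following Python sketch is purely illustrative; the function name, the simplified finite-population correction, and the worst-case assumption of a 50 percent proportion are our own simplifications, not GAO's actual estimation procedure, which also reflects the stratified design and nonresponse-adjusted weights.

    import math

    def proportion_ci(p_hat, n, z=1.96, population=3141, sampled=600):
        # Normal-approximation confidence interval for a proportion, with a
        # simple finite-population correction (fpc). Illustrative only; it
        # ignores stratification and weighting effects.
        fpc = math.sqrt((population - sampled) / (population - 1))
        se = math.sqrt(p_hat * (1 - p_hat) / n) * fpc
        margin = z * se
        return p_hat - margin, p_hat + margin

    # Precision is worst when the estimated proportion is 50 percent. With
    # 527 responses, this crude margin is roughly plus or minus 4 percentage
    # points; design effects from stratification and weighting push it
    # toward the 5 points reported above.
    low, high = proportion_ci(0.5, 527)
    print(f"95% CI: {low:.3f} to {high:.3f}")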
UAS represent one of many DOD airborne ISR assets available to support ongoing combat operations. Unmanned aircraft are deployed and controlled at different levels of command and can be categorized into three main classes: man-portable, tactical, and theater. Table 1 illustrates examples of UAS in each category. Man-portable UAS are small, self-contained, and portable and are generally used to support small ground combat teams in the field. Tactical UAS are larger systems that are generally used to support operational units at tactical levels of command, such as the battalion or brigade. Tactical UAS are locally operated and controlled by the units. Theater UAS are operated and controlled by the Joint Forces Air Component Commander (JFACC) and are generally used to support combatant commander ISR priorities, although in certain circumstances they can be assigned to support tactical operations, such as when troops are being fired on. Theater UAS traditionally have been more capable than tactical or man-portable systems; for example, they typically have a more robust communications architecture and more capable payloads that allow for the production of more diverse intelligence data products. However, some tactical systems, such as the Army's Warrior UAS, are being developed that are capable of performing theater-level requirements and, as currently envisioned, will be embedded in and controlled at the tactical level by units. DOD uses an annual process for allocating, or distributing, available DOD theater-level airborne ISR assets, including UAS, to the combatant commanders. The allocation process is managed by U.S. Strategic Command's Joint Functional Component Command for Intelligence, Surveillance and Reconnaissance (JFCC-ISR). In 2003, DOD altered its unified command plan to give U.S. Strategic Command responsibility for planning, integrating, and coordinating ISR in support of strategic and global operations. To execute this responsibility, U.S. Strategic Command established the JFCC-ISR in March 2005. The JFCC-ISR is charged with recommending to the Secretary of Defense how DOD's theater-level ISR assets should be allocated, or distributed, among combatant commanders and with integrating and synchronizing DOD, national, and allied ISR capabilities and collection efforts. Once DOD's ISR assets are allocated to the combatant commanders, they are available to be assigned, or tasked, based on combatant commander priorities against specific missions in support of ongoing operations. Authority for tasking ISR assets, including UAS, is generally determined by the level of the objective the asset is deployed to support and the command level of the unit that controls the asset. Therefore, most theater-level UAS assets that are controlled and tasked by the JFACC are generally used to support theater-level objectives and priorities, as established by the combatant commander. Most tactical UAS assets controlled by the services or the U.S. Special Operations Command are used to support tactical objectives and priorities, which may differ from theater-level priorities. For example, authority to task the Army's Hunter resides with the commander of the unit in which it is embedded, whereas authority for tasking the Air Force's Predator resides with the JFACC.
In August 2005 DOD issued its current UAS Roadmap, which was developed to assist DOD in developing a long-range strategy for UAS development, acquisition, and other planning efforts, as well as to guide industry in developing UAS-related technology. According to DOD officials, DOD is in the process of developing an update to this Roadmap and expects to issue the updated version in late summer 2007. The UAS Roadmap is intended to guide UAS planning; however, it addresses operational aspects only in a limited way, such as operational issues or challenges that have emerged as a result of operating UAS in support of ongoing operations. For example, the Roadmap acknowledges that the limited number of bandwidth frequencies constrains DOD's ability to operate multiple unmanned aircraft simultaneously. DOD components have developed guidance, such as a Multi-Service Tactics, Techniques, and Procedures for the Tactical Employment of Unmanned Aircraft Systems and a Joint Concept of Operations for UAS, to facilitate UAS integration. However, DOD continues to face UAS integration challenges, such as the lack of interoperability and limited communications bandwidth. These challenges may be exacerbated because DOD has not established DOD-wide advance coordination procedures for integrating UAS into combat operations. Until DOD takes steps to address the need for DOD-wide advance coordination, it may continue to face challenges in successfully integrating UAS into combat operations and may exacerbate existing integration challenges. DOD components have developed guidance to facilitate the integration of UAS into combat operations. For example, in August 2006 DOD issued its Multi-Service Tactics, Techniques, and Procedures for the Tactical Employment of Unmanned Aircraft Systems. This document was designed to serve as a planning, coordination, and reference guide for the services and provides a framework for warfighters employing UAS. Furthermore, in March 2007 DOD issued its Joint Concept of Operations for Unmanned Aircraft Systems, which provides overarching principles, a discussion of UAS capabilities, operational views, and a discussion of UAS use in various operational scenarios. Each of the above documents represents an important first step for the use of UAS in combat operations, and DOD officials acknowledge these documents will continue to evolve as DOD learns more about the capabilities of UAS and their application in combat operations. DOD continues to face challenges, such as the lack of interoperability and limited communications bandwidth, in integrating UAS into combat operations. In December 2005 we reported that challenges such as the lack of interoperability and limited communications bandwidth had emerged to hamper recent joint operations or prevent timely UAS employment. Specifically, some UAS cannot easily exchange data, sometimes even within a single service, because they were not designed with interoperable communications standards. Additionally, as we previously reported, U.S. forces are unable to interchangeably use some payloads from one type of UAS on another, a capability known as "payload commonality." Furthermore, electromagnetic spectrum frequencies, often referred to as bandwidth, are congested by a large number of UAS and other weapons or communications systems using the same frequency simultaneously. While some UAS can change to different, less congested frequency bands, most UAS were built without the ability to change frequency bands.
Thus, commanders have had to delay certain missions until frequency congestion cleared. DOD is taking steps to address these challenges, such as equipping UAS with the Tactical Common Data Link, and, according to DOD officials, it is developing common ground control stations to improve the interoperability of its UAS. Existing UAS integration challenges may be exacerbated because DOD has not established DOD-wide advance coordination procedures for integrating UAS and other ISR assets into combat operations. Specifically, DOD officials indicate that assets arriving in theater without advance coordination may exacerbate UAS integration challenges, such as further taxing the limited available bandwidth. As additional ISR assets are rapidly acquired and fielded to meet the increasing demand for ISR support in ongoing operations, CENTCOM has recognized that advance coordination is a critical factor in integrating UAS into combat operations because it enables efficient deployment of assets and their effective utilization once they are in theater. Furthermore, advance knowledge of system requirements is crucial to allow the combatant commander sufficient time to adequately plan to support incoming assets. DOD officials acknowledge that having to incorporate assets quickly into the theater infrastructure creates additional challenges and further emphasizes the need for advance coordination. In response to this issue, CENTCOM has developed procedures to ensure the services coordinate their plans prior to deploying UAS to CENTCOM's theater of operations. In May 2005 CENTCOM established the Concept of Operations for Employment of Full Motion Video Assets, which states that when a full-motion video-capable asset or weapons system is scheduled for deployment to CENTCOM's theater of operations, the controlling unit will notify CENTCOM of the deployment no later than 30 days prior to arrival of the asset in theater. It also states that the controlling unit will provide a system and platform concept of operations to CENTCOM no later than 15 days prior to the asset's arrival. According to CENTCOM officials, they distributed these procedures to each of CENTCOM's service components, such as Central Command Air Forces and U.S. Naval Forces Central Command. However, they were unaware whether the procedures were distributed further to the services, and service officials we interviewed, including those at the service headquarters as well as those stationed within units returning from ongoing operations, indicated they were not aware of the requirement. CENTCOM officials indicate that the procedures have not always been followed. The Warrior Alpha, which was fielded by the Joint Improvised Explosive Device Defeat Organization and operated by the Army to aid in the identification and elimination of improvised explosive devices, illustrates why this advance coordination is so critical. As a result of coordinating with CENTCOM, the Army was made aware of limitations such as bandwidth and limited ramp space and decided to deploy the Warrior Alpha to an alternate location. While CENTCOM and Army officials disagree on whether the coordination was completed in a timely manner, all agree it was ultimately completed. While this example is limited to CENTCOM's area of operations, the potential exists for DOD to have to quickly establish operations in other areas of the world, which makes the need for advance coordination even more critical.
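The 30-day and 15-day notification windows in CENTCOM's concept of operations lend themselves to a simple compliance check. The sketch below is a hypothetical illustration of such a check; the function, its parameters, and the example dates are our own assumptions, not an actual CENTCOM or DOD tool.

    from datetime import date

    # Notification windows from CENTCOM's May 2005 Concept of Operations
    # for Employment of Full Motion Video Assets (days before arrival).
    DEPLOYMENT_NOTICE_DAYS = 30
    CONOPS_NOTICE_DAYS = 15

    def check_advance_coordination(arrival, notified, conops_provided):
        # Return a list of coordination lapses for one incoming asset.
        lapses = []
        if (arrival - notified).days < DEPLOYMENT_NOTICE_DAYS:
            lapses.append("deployment notice given fewer than 30 days out")
        if (arrival - conops_provided).days < CONOPS_NOTICE_DAYS:
            lapses.append("concept of operations provided fewer than 15 days out")
        return lapses

    # Hypothetical example: an asset arriving June 1 whose controlling unit
    # notified CENTCOM on May 10 (22 days out) and provided its concept of
    # operations on May 25 (7 days out) would be flagged on both counts.
    print(check_advance_coordination(date(2007, 6, 1),
                                     date(2007, 5, 10),
                                     date(2007, 5, 25)))

The check reports the two windows independently, since the concept-of-operations deadline can be missed even when the deployment notice itself was timely.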
CENTCOM officials acknowledge the need for advance coordination for all ISR assets entering CENTCOM's theater of operations, not just those assets that are capable of full-motion video. To address this need, in November 2006 CENTCOM developed an ISR Systems Concept of Operations Standardization Memo. CENTCOM officials stated that the ISR memo is intended to provide CENTCOM with awareness of what assets are coming into theater and to allow CENTCOM to ensure the asset is able to be incorporated into the existing infrastructure, given operational challenges such as limited communications bandwidth. This memo requires the inclusion of certain elements in all ISR system concepts of operations, including how the asset will be tasked; how intelligence will be processed, exploited, and disseminated; and system bandwidth requirements that must be coordinated with CENTCOM prior to deployment of ISR assets. This ISR memo, however, applies only to CENTCOM's theater of operations and does not constitute DOD-wide guidance. As noted above, the potential exists for DOD to need to establish operations in other areas of the world very quickly, and a DOD-wide procedure for advance coordination would be critical for quickly supporting UAS and other ISR assets once deployed. Until DOD takes steps to address the need for DOD-wide advance coordination, it may be unable to successfully integrate UAS and other ISR assets into combat operations, and existing integration challenges may be exacerbated. DOD's current approach to allocating and tasking its ISR assets, including UAS, does not consider the capabilities of all ISR assets because DOD lacks awareness of, or visibility over, all ISR capabilities available to support the combatant commanders and over how DOD ISR assets are being used; this hinders DOD's ability to optimize the use of its assets. Although DOD has established a process for allocating available DOD ISR assets, including UAS, to the combatant commanders to meet their needs, it does not have an awareness of all ISR assets, which impairs its ability to distribute or allocate DOD assets while considering the capabilities of all ISR assets. Additionally, DOD's process for tasking its ISR assets does not currently provide visibility at all levels into how DOD's ISR assets are being used on a daily basis, which hinders its ability to leverage other assets operating in an area and to avoid unnecessary duplicative taskings. Without an approach to its allocation and tasking processes that considers all ISR capabilities, DOD is not in a sound position to fully leverage all the capabilities of available ISR assets and to optimize the use of those assets, and therefore cannot be assured that it is addressing warfighter needs in the most efficient and effective manner. DOD recognizes the opportunity to better plan for and control its ISR assets and has initiated a study to examine the issue. Although DOD has established a process for allocating available DOD ISR assets to the combatant commanders to meet the warfighters' needs, it does not have awareness of, or visibility over, the total number and types of ISR assets available to support combatant commanders or the capabilities represented by those assets. DOD uses an annual process for allocating or distributing its available ISR assets, including UAS, to the combatant commanders to meet theater-level needs. That process is managed by U.S. Strategic Command's JFCC-ISR.
JFCC-ISR is tasked with making recommendations to the Secretary of Defense on how best to allocate DOD ISR resources for theater use across the combatant commands and with ensuring the integration and synchronization of DOD, national, and allied ISR capabilities and collection efforts. DOD officials indicate that annual allocation levels are constrained by the number of ISR assets in DOD's inventory and believe that JFCC-ISR is, therefore, not able to allocate ISR assets to the combatant commanders in sufficient numbers to meet all requests for ISR support. However, our work suggests that additional information is needed to assess the true demand for ISR assets and the best way to meet this demand. Specifically, JFCC-ISR's ability to fulfill its mission of integrating DOD, national, and allied partner ISR capabilities and making recommendations on how best to allocate ISR assets to support the warfighter depends, in part, on the extent to which it has awareness and visibility over all ISR assets, including DOD, national, and allied ISR assets. JFCC-ISR does not have complete visibility into all assets that could be used to support combatant commanders' needs, which hinders its ability to optimally distribute or allocate DOD ISR assets. JFCC-ISR officials estimate that the command has 80–90 percent visibility into DOD ISR assets but does not have the same level of visibility into other national and allied ISR assets available to support theater-level requirements, such as assets that are owned and controlled by U.S. national intelligence agencies, like the National Security Agency, or by our allies supporting ongoing operations. According to JFCC-ISR officials, although they are working to gain better visibility over all ISR assets, they currently do not have this level of visibility because DOD does not currently have a mechanism for obtaining information on all ISR assets, including all DOD, national, and allied assets, operating in each combatant commander's area of operations. Absent such a mechanism, JFCC-ISR has been trying to learn more about the capabilities of non-DOD ISR assets by building relationships with other national and allied intelligence agencies and by addressing limitations related to intelligence agency system access. Without an approach to its allocation process that considers all available ISR capabilities, JFCC-ISR does not have all the information it needs to leverage the capabilities of all available ISR assets and to optimize the allocation of DOD's ISR assets. DOD's process for tasking its airborne ISR assets, including UAS, does not provide for visibility at all levels into how DOD airborne ISR assets are being used on a daily basis. Once DOD ISR assets have been allocated, those assets are available to the combatant commanders to be assigned, or tasked, against specific requests for ISR support in ongoing operations. The JFACC is responsible for planning, coordinating, and monitoring joint air operations to focus the effect of air capabilities and for assuring their effective and efficient use in achieving the combatant commanders' objectives. However, while the JFACC has visibility into how all theater-level ISR assets, like the Air Force's Predator, are being used, he or she does not have visibility into how tactical ISR assets, such as the Army's Hunter, are being used on a daily basis or what missions they are supporting.
The JFACC generally tasks assets that support theater-level objectives, while assets that support tactical-level objectives are tasked and controlled by the services or by the U.S. Special Operations Command. Tactical units utilize their embedded, or tactical, assets first to satisfy unit intelligence needs. However, when tactical assets are not available or capable of satisfying a unit's need for ISR support, the unit requests theater-level ISR support. Requests for most theater-level assets are entered into a central DOD database, but there is no similar database that captures requests for tactical-level assets. While there are procedures, such as the Air Tasking Order and Airspace Control Order, for tracking where theater- and tactical-level assets are operating for airspace control and deconfliction purposes, no comparable mechanism exists for tracking the missions these assets are supporting or how they are being used on a daily basis. For example, the Air Tasking Order would track the time, date, and location where a UAS was operating, but there is no mechanism that would track what intelligence the UAS was supposed to gather on a mission or why the UAS was being used on a mission. Without a database or similar mechanism providing visibility into how tactical-level assets are being tasked, the JFACC is limited in his or her awareness of how those assets are being used on a daily basis, which hinders the JFACC's ability to optimize the use of those assets. This lack of visibility limits the JFACC's ability to leverage those assets using techniques such as cross-cueing, which is the collaborative use of capabilities offered by multiple ISR platforms to fulfill a mission. By using techniques such as cross-cueing, the JFACC has been able to use the different types of capabilities brought by different theater-level manned and unmanned ISR assets to maximize the intelligence collected. For example, a manned Joint Surveillance Target Attack Radar System was tasked to monitor an area; when this system sensed movement in the area, a Predator was then tasked to collect imagery to confirm suspected activity. Without visibility into how tactical assets are being utilized, the JFACC is limited in his or her ability to optimize the use of all available DOD ISR assets and to focus the effect of these assets to ensure their efficient and effective use. Such visibility will become even more important given that services such as the Army are acquiring, and planning to embed in units, ISR assets capable of satisfying theater-level requirements, such as the Extended Range/Multi-Purpose or Warrior UAS, which could otherwise be leveraged to support JFACC requirements. Duplicative taskings are often driven by a lack of visibility into where ISR assets at all levels are operating and what they are tasked to do. For example, a DOD official shared with us an example of unnecessary duplication in which an Army unit requested a full-motion video-capable asset to support a high-priority requirement. When the asset, a Predator UAS, arrived to support the requirement, its operator realized the Army unit had also tasked one of its tactical assets, a Hunter UAS, against the requirement. As a result of the lack of visibility over all assets, the potential exists for multiple ISR aircraft to be tasked to operate in the same area and against the same requirement. However, some level of duplication may be necessary when driven by mission requirements and system capabilities.
Certain missions, such as special operations, often need some amount of duplication in order to achieve the desired result. For example, a mission intended to track activity of suspected terrorists may require multiple systems to follow identified individuals who flee the scene in different directions. Furthermore, assets such as the Predator UAS have system limitations when equipped with a full-motion video sensor in that they are able to provide surveillance of only a narrow, or "soda straw," view. Some duplication of UAS may therefore be necessary to obtain a complete view of the area under surveillance. Greater visibility at the tactical level could provide units with a greater awareness of where other ISR assets, including both theater-level assets and those assets embedded in other units, are operating and what they are being used to do. A mechanism that provides this visibility would allow tactical units, when appropriate, to leverage other assets operating in their area to optimize the information captured and avoid unnecessary duplicative taskings. DOD recognizes the opportunity to better plan for and control its ISR assets and has initiated a Persistent ISR Capabilities Based Assessment Study. The study, sponsored by the Battlespace Awareness Functional Capabilities Board, focuses on how other actions, such as better planning, direction, command and control, and better fusion and exploitation of information, can provide the warfighter with more persistent surveillance capability. The study is expected to be completed in the August–September 2007 time frame. DOD is unable to fully evaluate the performance of its ISR assets because it lacks a complete set of metrics and does not consistently receive feedback from operators and intelligence personnel to ensure the warfighter's needs are met. Specifically, although JFCC-ISR is tasked with developing metrics and standards of performance to measure the success of DOD ISR missions, existing metrics are limited and no DOD-wide milestones have been established. Furthermore, DOD officials acknowledged that they do not consistently receive feedback from operators and intelligence analysts to ensure the warfighter's needs are met. Without feedback and a complete set of metrics for evaluating its ISR assets, DOD may not be in the best position to validate how well warfighter needs are being met and the true demand for ISR assets, to determine whether it is optimizing the use of existing assets, or to acquire new systems that best support warfighting needs. DOD is working to develop additional quantitative ISR metrics as well as qualitative metrics to measure the success of its ISR assets, but existing quantitative metrics are limited and no milestones have been established. The JFCC-ISR is tasked with developing metrics and standards of performance to assess DOD ISR mission accomplishment. Moreover, we recommended in a December 2005 report that DOD ensure its performance measurement systems measure how effectively UAS perform their missions, identify performance indicator information that needs to be collected, and systematically collect identified performance information. We continue to believe this recommendation has merit, and DOD officials agree that metrics are needed not only for UAS, but for all ISR missions. However, DOD currently assesses its ISR missions with limited quantitative metrics, such as the number of targets planned versus the number collected against.
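To make the planned-versus-collected figure concrete, here is a minimal sketch of how such a metric might be tallied. The field names and example counts are hypothetical; this is not an actual DOD reporting system.

    def collection_rate(missions):
        # Share of planned targets actually collected against, the kind of
        # limited quantitative metric described above. Each mission record
        # carries hypothetical 'planned' and 'collected' counts.
        planned = sum(m["planned"] for m in missions)
        collected = sum(m["collected"] for m in missions)
        return collected / planned if planned else 0.0

    # Three notional ISR missions.
    missions = [
        {"planned": 12, "collected": 9},
        {"planned": 8, "collected": 8},
        {"planned": 10, "collected": 6},
    ]
    print(f"{collection_rate(missions):.0%} of planned targets collected against")

As the next paragraph notes, a single ratio like this says nothing about the quality or cumulative value of what was collected.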
While these metrics are a good start, DOD officials acknowledge that the current metrics do not take into account all of the qualitative considerations associated with measuring ISR asset effectiveness, such as the cumulative knowledge provided by numerous ISR missions, whether the ISR asset did what it was intended to do, whether it had the intended effect, and whether the intelligence captured contributed toward accomplishment of the mission. The JFCC-ISR is working with the combatant commands to develop additional quantitative ISR metrics as well as qualitative metrics to assess the effectiveness of ISR assets, although DOD officials acknowledge that progress in developing metrics has been limited. In developing these metrics, the JFCC-ISR is leveraging national intelligence attributes, which include characteristics such as whether the intelligence is comprehensive enough to perform all missions anywhere, at any time, and in any weather; credible enough to allow users to make sound decisions and take appropriate action; persistent enough to collect often and long enough to get the job done; and timely enough to meet user needs. Furthermore, the JFCC-ISR has not made any progress in establishing DOD-wide milestones for the development of these metrics. Milestones are the required steps, and the planned dates for completing those steps, leading up to metrics development. DOD officials indicate that determining the success of ISR missions is difficult given the nature of intelligence collection. Specifically, hundreds of hours of ISR missions and target tracking could culminate in the capture of a high-value target; however, it may be difficult to measure the effectiveness of each individual ISR mission that led to the ultimate capture and mission success. This cumulative knowledge provided by ISR assets is difficult to quantify. An official from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics also acknowledged that it may be more difficult to evaluate the success of ongoing operations due to the dynamic and subjective nature of requirements. The official noted, however, that DOD is better equipped to measure the success of its more mature and traditional ISR missions, such as sensitive reconnaissance operations, because the objectives are better defined, allowing more direct determination of success. In addition to metrics, DOD also relies on feedback for evaluating how successful its ISR assets are in meeting the warfighter's needs. However, DOD lacks consistent feedback on whether ISR assets meet the needs of the warfighters. Joint Publication 2-01 calls for intelligence personnel and consumers to evaluate and provide immediate feedback on how well intelligence operations perform to meet commanders' intelligence requirements. This information could be used to inform DOD's acquisition, allocation, and tasking of ISR assets. While DOD officials indicate they occasionally receive feedback on ISR asset performance, they acknowledge that feedback specific to how ISR assets performed in individual ISR missions is not consistently provided. While there is real-time communication among unmanned aircraft system operators, requesters, and intelligence personnel during an operation, and agency officials indicate this communication is beneficial in providing real-time feedback, there is little to no feedback after the operation to determine whether the warfighters' needs were met.
Officials indicate that the fast pace of operations in theater affects the ability of end users to provide feedback on every ISR mission. For example, according to Marine Corps officials, there is a mechanism for Marine Corps units to provide feedback, but the feedback is not consistently provided because there is no systematic process in place to ensure that it is captured. Without developing metrics and systematically gathering feedback that enable it to assess the extent to which ISR assets are successful in supporting warfighter needs, DOD is not in a position to validate the true demand for ISR assets, determine whether it is allocating and tasking its ISR assets in the most effective manner, or acquire new systems that best support warfighting needs. DOD has achieved operational success with UAS in ongoing operations, but it continues to face operational challenges that limit its ability to fully optimize the use of these assets. These operational challenges have been exacerbated by the lack of advance coordination when new assets are deployed to theater. While operations in Iraq and Afghanistan have been ongoing for some time, the potential exists for DOD to need to establish operations in other areas of the world very quickly. A DOD-wide procedure for advance coordination is critical to enable DOD to quickly support ISR assets once deployed to ongoing operations. Until DOD takes steps to address the need for DOD-wide advance coordination, it may be limited in its ability to efficiently deploy and utilize UAS assets and may not allow the combatant commander sufficient time to plan to support incoming assets. With the operational successes that have been realized with UAS, commanders are requesting them in greater numbers. In spite of a dramatic increase in UAS funding, DOD officials indicate that annual allocation levels are constrained by the number of ISR assets in the inventory and that JFCC-ISR is, therefore, not able to allocate DOD ISR assets to the combatant commanders in sufficient numbers to meet all requests for ISR support. However, our work indicates that DOD's approach to UAS may not leverage all of the DOD ISR assets currently available and that DOD may not be in the best position to determine if perceived demand is well-founded. Given the substantial investment DOD is making in UAS and the increasing demand for them, it is critical that DOD's approach to managing its ISR assets, including UAS, allow it to optimize the use of these assets. Without an approach to its allocation and tasking processes that considers all ISR capabilities, DOD may not be in a position to leverage all available ISR assets and to optimize the use of those assets. Moreover, DOD lacks visibility over the true demand for and use of ISR assets, which could hinder its ability to make informed decisions about the need to purchase additional UAS assets and what quantities should be purchased. Furthermore, without developing metrics and systematically gathering feedback that enable DOD to assess the extent to which ISR missions are successful in supporting warfighter needs, decision makers may not be in a position to determine which UAS systems would best support the warfighters' needs.
To mitigate challenges in integrating UAS, and other ISR assets, into combat operations, we recommend that the Secretary of Defense, in conjunction with the service secretaries and combatant commanders, take the following three actions: establish DOD-wide requirements for coordinating with the combatant commanders in advance of bringing UAS into the theater of operations; develop a plan for communicating those requirements throughout DOD; and establish a mechanism to ensure the services comply with these requirements. To ensure DOD has the information needed to consider all ISR assets when allocating and tasking these assets, we recommend that the Secretary of Defense develop a mechanism for obtaining information on all ISR assets, including all DOD, national, and allied assets, operating in each combatant commander's area of operations, and for allowing users at all levels within DOD to gain real-time situational awareness of where DOD ISR assets are operating and, where not prohibited by the mission, what they are being used to do. To improve DOD's ability to evaluate the performance of its ISR missions, we recommend that the Secretary of Defense establish DOD-wide milestones for the development of qualitative and quantitative metrics; develop a process for systematically capturing feedback from the intelligence and operations communities to assess how effective ISR assets are in meeting warfighters' requirements; and create a mechanism to ensure this information is used to inform DOD's acquisition, allocation, and tasking of its ISR assets. In written comments on a draft of this report, DOD generally concurred with all of our recommendations. DOD generally agreed with our recommendation that the Secretary of Defense, in conjunction with the service secretaries and combatant commanders, establish DOD-wide requirements for coordinating with the combatant commanders in advance of bringing UAS into the theater of operations; develop a plan for communicating those requirements throughout DOD; and establish a mechanism to ensure the services comply with these requirements. DOD noted that it currently has a well-defined process to coordinate with the combatant commanders on the introduction of UAS into theater and cited several examples, including the annual process for allocating theater-level UAS and actions between stateside units and units in theater to plan for deployment of ISR capabilities. DOD, however, acknowledged that a more standardized method could improve the efficiency of the coordination process and stated that the Joint Chiefs of Staff would be tasked to look at standardizing the coordination process and to evaluate and provide direction for an improved coordination process. Further, DOD noted that, based on this evaluation, if direction is required, it will be issued via a Chairman's directive, which is mandatory and therefore establishes the mechanism that ensures compliance. We recognize that DOD has various processes related to UAS but note that none, including the examples cited by DOD, represents a standardized, DOD-wide approach that the services and combatant commanders can follow in coordinating the specific details of deploying UAS assets, regardless of geographic area. Furthermore, we believe that a directive requiring coordination, by itself, does not ensure compliance, and we would encourage DOD to include provisions detailing how implementation of the directive will be monitored.
DOD also generally concurred with our recommendation that the Secretary of Defense develop a mechanism for obtaining information on all ISR assets, including all DOD, national, and allied assets, operating in each combatant commander's area of operations, and for allowing users at all levels within DOD to gain real-time situational awareness of where DOD ISR assets are operating and, where not prohibited by the mission, what they are being used to do. Specifically, DOD agrees that a mechanism for obtaining information on all ISR assets is needed and commented that work is underway within the JFCC-ISR to develop such a mechanism. DOD commented that it is not currently practical to provide situational awareness on some UAS, such as the small, hand-launched UAS at the lowest operational level, because these systems do not have the capacity or capability to communicate their position to a common point. DOD noted that it will determine the UAS operational levels that will provide widespread situational awareness, including operational details and timelines of data reporting. We recognize that situational awareness may not currently be practical for some UAS but would encourage the department to seek to maximize coverage in exploring options for improved situational awareness. DOD concurred with our recommendation that the Secretary of Defense establish DOD-wide milestones for the development of qualitative and quantitative metrics and stated that JFCC-ISR is standing up an Assessments Division that will be responsible for the development of metrics. We recognize that the Assessments Division has been tasked with development of ISR metrics and reemphasize the need to develop milestones for metrics development. DOD partially concurred with our recommendations that it develop a process for systematically capturing feedback from the intelligence and operations communities to assess how effective ISR assets are in meeting warfighters' requirements and create a mechanism to ensure this information is used to inform DOD's acquisition, allocation, and tasking of its ISR assets. DOD agreed that an improved and standardized process for collection and reporting of feedback would enhance visibility and provide more effective warfighter support, but pointed out that organizations within the department collect feedback or conduct lessons learned studies. We acknowledge that DOD has organizations, such as the Army's Center for Lessons Learned, that are responsible for capturing feedback and developing lessons learned based on that feedback. However, these organizations are charged with capturing lessons learned on a number of issues and are not focused on ISR effectiveness. Furthermore, our recommendation pertains to DOD's guidance, which states that it is imperative that intelligence personnel and consumers evaluate and provide immediate feedback on how well individual intelligence operations perform to meet commanders' intelligence requirements. While the feedback that may be captured by those lessons learned organizations is noteworthy, it is often neither immediate nor specific to individual missions. As we noted in our report, DOD officials acknowledged that feedback specific to how ISR assets performed in individual ISR missions is not consistently occurring.
DOD further commented that it has mechanisms in place to inform its decision-making processes on the acquisition, allocation, and tasking of its ISR assets, such as the Joint Capabilities Integration and Development System, which assesses, among other things, capability gaps and solutions. We agree that the mechanisms mentioned in DOD's response exist; however, DOD currently does not have the qualitative and quantitative metrics needed to collect data on UAS performance, nor does it have a means for incorporating such data into the processes currently used to make decisions on ISR assets. The full text of DOD's written comments is reprinted in appendix II. DOD also provided technical comments separately, and we have made adjustments where appropriate. In particular, the Army provided additional information on the coordination of the Warrior Alpha UAS in its technical comments, including a timeline for introduction of the asset into theater. We are sending copies of this report to the Secretary of Defense. We will make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-9619 or pickups@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report are listed in appendix III.

To assess the extent to which the Department of Defense (DOD) has taken steps to facilitate the integration of unmanned aircraft systems (UAS) into combat operations, we examined DOD and military service publications and documentation on UAS, such as the 2005–2030 UAS Roadmap, the Multi-Service Tactics, Techniques, and Procedures for the Tactical Employment of Unmanned Aircraft Systems, the Joint Concept of Operations for Unmanned Aircraft Systems, the Concept of Operations for Employment of Full Motion Video Assets, and the ISR Systems Concept of Operations Standardization Memo. Additionally, we met with key DOD and service officials, including those from the Joint UAS Center of Excellence, the Unmanned Aircraft Systems Planning Task Force within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, and the Air Land Sea Application Center. We also met with officials from U.S. Central Command and the services, including units that had returned from deployment to the theater or that were currently supporting ongoing operations, to discuss the integration of UAS into U.S. Central Command's area of responsibility and to better understand integration challenges. To determine the extent to which DOD's approach to allocating and tasking its intelligence, surveillance, and reconnaissance (ISR) assets, including UAS, considers all available ISR assets to optimize their capabilities, we met with key DOD and service officials, including those from U.S. Central Command and associated Army and Air Force component commands, the Combined Air Operations Center at Al Udeid Air Base in Qatar, the Joint Functional Component Command for Intelligence, Surveillance, and Reconnaissance, and other organizations. To better understand the allocation process, we interviewed officials of the Joint Functional Component Command for Intelligence, Surveillance, and Reconnaissance and obtained documentation, including the fiscal year 2007 ISR allocation briefing.
We also reviewed documentation, such as joint publications and briefings, that explains the process for tasking ISR assets, and we interviewed officials at U.S. Central Command, Central Command Air Forces, and the Combined Air Operations Center in Qatar to better understand how ISR assets are assigned to specific missions. To understand how requests for ISR support are generated and satisfied at the tactical level, we spoke with units that recently returned from, or are currently supporting, ongoing operations in Iraq, as well as units within the services, such as the Marine Corps' Tactical Fusion Center, that are involved in determining whether tactical assets are available to satisfy those requests or whether the requests need to be forwarded for theater-level support. To understand how manned and unmanned assets are being leveraged to optimize the intelligence captured, we met with manned and unmanned units stationed at Al Dhafra Air Base in the United Arab Emirates. To understand DOD's ongoing efforts to study its process for tasking ISR assets, we reviewed documentation and interviewed an official from the Battlespace Awareness Functional Capabilities Board. To assess whether DOD evaluates the performance of its ISR assets, including UAS, to ensure that warfighters' needs are met, we interviewed DOD and service officials to discuss the metrics for evaluating the performance of ISR assets. We discussed with the Joint Functional Component Command for Intelligence, Surveillance, and Reconnaissance its efforts to establish metrics for evaluating ISR asset performance. We reviewed metrics routinely captured to assess the success of DOD's ISR missions. We also met with service officials and service units recently returned from Iraq to determine the extent to which feedback is received on how effective ISR support is in meeting the warfighters' needs. We performed our work from June 2006 to June 2007 in accordance with generally accepted government auditing standards. In addition to the individual named above, Patty Lentini, Assistant Director; Renee Brown; Jamie Khanna; Kate Lenane; LaShawnda Lindsey; Elisha Matvay; and Susan Tindall made key contributions to this report.

Defense Acquisitions: Greater Synergies Possible for DOD's Intelligence, Surveillance, and Reconnaissance Systems. GAO-07-578. Washington, D.C.: May 17, 2007.
Intelligence, Surveillance, and Reconnaissance: Preliminary Observations on DOD's Approach to Managing Requirements for New Systems, Existing Assets, and Systems Development. GAO-07-596T. Washington, D.C.: April 19, 2007.
Unmanned Aircraft Systems: Improved Planning and Acquisition Strategies Can Help Address Operational Challenges. GAO-06-610T. Washington, D.C.: April 6, 2006.
Unmanned Aircraft Systems: DOD Needs to More Effectively Promote Interoperability and Improve Performance Assessments. GAO-06-49. Washington, D.C.: December 13, 2005.
Unmanned Aerial Vehicles: Improved Strategic and Acquisition Planning Can Help Address Emerging Challenges. GAO-05-395T. Washington, D.C.: March 9, 2005.

Combatant commanders carrying out ongoing operations rank the need for intelligence, surveillance, and reconnaissance (ISR) capabilities as high on their priority lists. The Department of Defense (DOD) is investing in many ISR systems, including unmanned aircraft systems (UAS), to meet the growing demand for ISR assets to support the warfighter. GAO was asked to evaluate DOD's efforts to integrate UAS into ongoing operations while optimizing the use of all DOD ISR assets.
Specifically, this report addresses the extent to which (1) DOD has taken steps to facilitate the integration of UAS into combat operations and (2) DOD's approach to allocating and tasking its ISR assets considers all available ISR capabilities, including those provided by UAS. GAO also reviewed the extent to which DOD evaluates the performance of its ISR assets, including UAS, in meeting warfighters' needs. To perform this work, GAO analyzed data and guidance on the use of ISR assets and interviewed DOD officials, including those supporting ongoing operations in Iraq and Afghanistan. DOD components have developed guidance to facilitate the integration of UAS into combat operations; however, further steps are needed to coordinate the deployment of these assets. For example, DOD developed guidance for the tactical employment of UAS and a Joint UAS Concept of Operations. This guidance is an important first step but does not address coordinating UAS and other ISR assets prior to deploying them to ongoing operations, which U.S. Central Command recognized is a critical factor in integrating UAS into combat operations. Until DOD addresses the need for DOD-wide advance coordination, it may continue to face challenges in successfully integrating UAS and other ISR assets into combat operations and may exacerbate integration challenges such as limited bandwidth. DOD's approach to allocating and tasking its ISR assets, including UAS, hinders its ability to optimize the use of these assets because it does not consider the capabilities of all available ISR assets. The command charged with recommending how theater-level DOD ISR assets should be allocated to support operational requirements does not have awareness of all available ISR assets because DOD does not have a mechanism for obtaining this information. Similarly, the commander responsible for coordinating ongoing joint air operations does not have information on how assets controlled by tactical units are being used or what missions they have been tasked to support. Nor do tactical units have information on how theater-level assets and ISR assets embedded in other units are being tasked, which results in problems such as duplicative taskings. This lack of visibility occurs because DOD does not have a mechanism for tracking the missions both theater- and tactical-level ISR assets are supporting or how they are being used. Without an approach to allocation and tasking that includes a mechanism for considering all ISR capabilities, DOD may be unable to fully leverage all available ISR assets and optimize their use. DOD is unable to fully evaluate the performance of its ISR assets because it lacks a complete set of metrics and does not consistently receive feedback to ensure the warfighter's needs are met. Although the Joint Functional Component Command for ISR has been tasked with developing ISR metrics, DOD currently assesses its ISR missions with limited quantitative metrics, such as the number of targets planned versus the number collected against. While these metrics are a good start, DOD officials acknowledge that the current metrics do not capture all of the qualitative considerations associated with measuring ISR asset effectiveness, such as the cumulative knowledge provided by numerous ISR missions. There is an ongoing effort within DOD to develop additional quantitative as well as qualitative ISR metrics, but no DOD-wide milestones have been established.
Furthermore, DOD guidance calls for an evaluation of the results of joint operations; however, DOD officials acknowledge that this feedback is not consistently occurring due to the fast pace of operations in theater. Without metrics and feedback, DOD may not be able to validate how well the warfighters' needs are being met, whether it is optimizing the use of existing assets, or which new systems would best support warfighting needs.
In our review of the 240 visa revocations, we found examples where information on visa revocations did not flow between the State Department and appropriate units overseas and within INS and the FBI. State Department officials from the Visa Office told us that when they revoke a visa in Washington, they are supposed to take the following steps: (1) notify consular officers at all overseas posts that the individual is a suspected terrorist by entering a lookout on the person into State's watch list, the Consular Lookout and Support System, known as CLASS; (2) notify the INS Lookout Unit via a faxed copy of the revocation certificate so that the unit can enter the individual into its watch list and notify officials at ports of entry; and (3) notify the issuing post via cable so that the post can attempt to contact the individual to physically cancel his visa. Information-only copies of these cables are also sent to INS's and FBI's main communications centers. State officials told us they rely on INS and FBI internal distribution mechanisms to ensure that these cables are routed to appropriate units within the agencies. Figure 1 illustrates the gaps that we identified in the flow of information from State to INS and the FBI, and within these agencies, as well as the resulting inconsistencies in the posting of lookouts to the agencies' respective watch lists. The top arrow in the diagram shows the extent of communication on visa revocations between the State Department's Bureau of Consular Affairs and State's overseas consular posts. We found that State had not consistently followed its informal policy of entering a lookout into its CLASS lookout system at the time of the revocation. State officials said that they post lookouts on individuals with revoked visas in CLASS so that, if the individual attempts to get a new visa, consular officers at overseas posts will know that the applicant has had a previous visa revoked and that a security advisory opinion on the individual is required before issuing a new visa. Without a lookout, it is possible that a new visa could be issued without additional security screening. We reviewed CLASS records on all 240 individuals whose visas were revoked and found that the State Department did not post lookouts within 2 weeks of the revocation on 64 of these individuals. The second arrow depicts the information flow on revocations between State and the INS Lookout Unit, which is the inspections unit that posts lookouts on INS's watch list to prevent terrorists (and other inadmissible aliens) from entering the United States. Officials from the INS Lookout Unit told us they had not received any notice of the revocations from State in 43 of the 240 cases. In another 47 cases, the INS Lookout Unit received the revocation notice only via a cable; these cables took, on average, 12 days to reach the Lookout Unit, and in one case took 29 days. An official from the INS communications center told us that, because State's cables were marked "information only," they were routed through the Inspections division first, which was then supposed to forward them to the Lookout Unit. He told us that if the cables had been marked as "action" or "urgent," they would have been sent immediately to the Lookout Unit. In cases where the INS Lookout Unit could document that it received a notification, it generally posted information on these revocations in its lookout database within one day of receiving the notice.
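The kind of record matching behind the two-week lookout finding above can be expressed in a few lines. The sketch below is a hypothetical reconstruction with invented field names and dates; it is not the actual method used to review the CLASS records.

    from datetime import date, timedelta

    LOOKOUT_WINDOW = timedelta(days=14)  # the 2-week standard discussed above

    def lookouts_missing_or_late(cases):
        # Flag cases with no lookout at all, or a lookout posted more than
        # two weeks after the revocation. Each case pairs a revocation date
        # with the lookout-posting date (None if never posted).
        return [c for c in cases
                if c["lookout_posted"] is None
                or c["lookout_posted"] - c["revoked"] > LOOKOUT_WINDOW]

    # Hypothetical records: a prompt posting, a late posting, and no posting.
    cases = [
        {"name": "A", "revoked": date(2002, 3, 1), "lookout_posted": date(2002, 3, 5)},
        {"name": "B", "revoked": date(2002, 3, 1), "lookout_posted": date(2002, 4, 2)},
        {"name": "C", "revoked": date(2002, 3, 1), "lookout_posted": None},
    ]
    print([c["name"] for c in lookouts_missing_or_late(cases)])  # ['B', 'C']

The same pairing of dates, applied to notification receipt rather than lookout posting, would surface the 12-day average cable delay described above.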
When it did not receive notification, it could not post information on these individuals in its lookout database, precluding INS inspectors at ports of entry from knowing that these individuals had had their visas revoked. The third arrow on the diagram shows the communication between State and INS’s National Security Unit, which is responsible for investigations. This broken arrow shows that the State Department did not send copies of the faxed revocation certificates or cables to the unit. Further, in cases where the INS Lookout Unit received the revocation notification from State, INS Lookout Unit officials said that they did not routinely check to see whether these individuals had already entered the United States or notify investigators in the National Security Unit of the visa revocations. Without this notification, the National Security Unit would have no independent basis to begin an investigation. In May 2003, an official from the Lookout Unit said that her unit had recently established a procedure whereby, upon receiving notification of a revocation, the unit queries the Interagency Border Inspection System to determine whether the individual recently entered the country and then gives this information to investigators in the National Security Unit, which is now part of the Bureau of Immigration and Customs Enforcement. The bottom arrow on the diagram shows the information flow on visa revocations from State to the FBI’s counterterrorism units. We found that these units did not consistently receive information on visa revocations. FBI officials said that the agency’s main communications center received the notifications but could not confirm whether the notifications were then distributed internally to the appropriate investigative units at the FBI or to the agency’s watch list unit, known as the Terrorist Watch and Warning Unit. The Department of Justice said that to add a person to its watch list, additional information must be provided to the FBI, such as the person’s full name, complete date of birth, physical descriptors, and watch list-specific classification information. The revocation notifications did not include most of this information. Our analysis shows that 30 individuals with revoked visas have entered the United States and may still remain in the country. Twenty-nine of these individuals entered before State revoked their visas. An additional person who may still be in the country entered after his visa was revoked. INS inspectors allowed at least three other people to enter the country even though their visas had already been revoked, largely because of breakdowns in the notification system. These three people have since left the country. Despite these problems, we noted cases in which the visa revocation process prevented possible terrorists from entering the country or cleared individuals whose visas had been revoked. For example, INS inspectors successfully prevented at least 14 of the 240 individuals from entering the country because the INS watch list included information on the revocation action or had other lookouts on them. In addition, State records showed that a small number of people reapplied for a new visa after the revocation. State used the visa issuance process to fully screen these individuals and determined that they did not pose a security threat. The INS and the FBI did not routinely attempt to investigate or locate any of the individuals whose visas were revoked and who may be in the country. 
Because of congressional interest in specific cases, INS investigators located four of these persons in the United States but did not attempt to locate other revoked visa holders who may have entered the country. INS officials told us that they generally do not investigate these cases because, even if the agency could locate these individuals, it would be challenging to remove them unless they were in violation of their immigration status. A visa revocation by itself is not a stated ground for removal under the Immigration and Nationality Act (INA). Investigators from INS’s National Security Unit said they could investigate individuals to determine if they were violating the terms of their admission, for example, by overstaying the amount of time they were granted to remain in the United States, but they believed that under the INA, the visa revocation itself does not affect the alien’s legal status in the United States—even though the revocation was for terrorism reasons. They and other Homeland Security officials raised a number of legal issues associated with removing an individual from the country after the person’s visa has been revoked. Our report discusses these issues in detail. FBI officials told us that they did not routinely attempt to investigate and locate individuals with revoked visas who may have entered the United States. They said that State’s method of notifying them did not clearly indicate that the visas had been revoked because the visa holders might pose terrorism concerns. Further, the notifications were sent as “information only” and did not request specific follow-up action by the FBI. Moreover, State did not attempt to make other contact with the FBI that would indicate any urgency in the matter. The weaknesses I have outlined above resulted from the U.S. government’s limited policy guidance on the visa revocation process. Our analysis indicates that the U.S. government has no specific policy on the use of visa revocations as an antiterrorism tool and no written procedures to guide State in notifying the relevant agencies of visa revocations on terrorism grounds. State and INS have written procedures that guide some types of visa revocations; however, neither they nor the FBI has written internal procedures for notifying their appropriate personnel to take specific actions on visas revoked by State Department headquarters officials, as was the case for all the revoked visas covered in our review. While State and INS officials told us they use the visa revocation process to prevent suspected terrorists from entering the United States, neither they nor FBI officials had policies or procedures that covered investigating, locating, and taking appropriate action in cases where the visa holder had already entered the country. In conclusion, Mr. Chairman, the visa process could be an important tool to keep potential terrorists from entering the United States. Ideally, information on suspected terrorists would reach the State Department before it decides to issue a visa. However, there will always be some cases when the information arrives too late and State has already issued a visa. Revoking a visa can mitigate this problem, but only if State promptly notifies appropriate border control and law enforcement agencies and if these agencies act quickly to (1) notify border control agents and immigration inspectors to deny entry to persons with a revoked visa and (2) investigate persons with revoked visas who have entered the country. 
Currently, there are major gaps in the notification and investigation processes. One reason for this is that there are no specific written policies and procedures on how notification of a visa revocation should take place and what agencies should do when they are notified. As a result, there is heightened risk that suspected terrorists could enter the country with a revoked visa or be allowed to remain after their visa is revoked without undergoing investigation or monitoring. State has emphasized that it revoked the visas as a precautionary measure and that the 240 persons are not necessarily terrorists or suspected terrorists. State cited the uncertain nature of the information it receives from the intelligence and law enforcement communities, on which it must base its decision to revoke an individual’s visa. We recognize that the visas were revoked as a precautionary measure and that the persons whose visas were revoked may not be terrorists. However, the State Department determined that there was enough derogatory information to revoke visas for these persons because of terrorism concerns. Our recommendations, which are discussed below, are designed to ensure that persons whose visas have been revoked because of potential terrorism concerns are denied entry to the United States and that those who may already be in the United States are investigated to determine if they pose a security threat. To remedy the systemic weaknesses in the visa revocation process, we are recommending that the Secretary of Homeland Security, who is now responsible for issuing regulations and administering and enforcing provisions of U.S. immigration law relating to visa issuance, work in conjunction with the Secretary of State and the Attorney General to (1) develop specific policies and procedures for the interagency visa revocation process to ensure that notification of visa revocations for suspected terrorists and relevant supporting information are transmitted from State to immigration and law enforcement agencies, and their respective inspection and investigation units, in a timely manner; (2) develop a specific policy on actions that immigration and law enforcement agencies should take to investigate and locate individuals whose visas have been revoked for terrorism concerns and who remain in the United States after revocation; and (3) determine if any persons with visas revoked on terrorism grounds are in the United States and, if so, whether they pose a security threat. In commenting on our report, Homeland Security agreed that the visa revocation process should be strengthened as an antiterrorism tool. State and Justice did not comment on our recommendations. I would be happy to answer any questions you or other members of the subcommittee may have. For future contacts regarding this testimony, please call Jess Ford or John Brummet at (202) 512-4128. Individuals making key contributions to this testimony included Judy McCloskey, Kate Brentzel, Mary Moutsos, and Janey Cohen. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
| The National Strategy for Homeland Security calls for preventing the entry of foreign terrorists into our country and using all legal means to identify, halt, and, where appropriate, prosecute or bring immigration or other civil charges against terrorists in the United States. GAO reported in October 2002 that the Department of State had revoked visas of certain persons after it learned they might be suspected terrorists, raising concerns that some of these individuals may have entered the United States before or after State's action. Congressional requesters asked GAO to (1) assess the effectiveness of the visa revocation process and (2) identify the policies and procedures of State, the Immigration and Naturalization Service (INS), and the Federal Bureau of Investigation (FBI) that govern their respective actions in the process. GAO's analysis shows that the visa revocation process was not being fully utilized as an antiterrorism tool. The process broke down when information on individuals with revoked visas was not shared between State and appropriate immigration and law enforcement offices, and it broke down even further when individuals had already entered the United States prior to revocation. In its review of 240 visa revocations, GAO found that (1) appropriate units within INS and the FBI did not always receive notifications of all the revocations; (2) names were not consistently posted to the agencies' watch lists of suspected terrorists; (3) 30 individuals whose visas were revoked on terrorism grounds had entered the United States and may still remain; and (4) INS and the FBI were not routinely taking actions to investigate, locate, or resolve the cases of individuals who remained in the United States after their visas were revoked. These weaknesses resulted from the U.S. government's limited policy guidance on the process. None of the agencies has specific, written policies on using the visa revocation process as an antiterrorism tool. |
Although policies concerning compensation for deployed civilians are generally comparable across agencies, we found some issues that affect the amount of compensation these civilians receive—depending on such things as the agency’s pay system or the civilian’s grade/band level—and the accuracy, timeliness, and completeness of this compensation. Specifically, the six agencies included in our 2009 review provided similar types of deployment-related compensation to civilians deployed to Iraq or Afghanistan. Agency policies regarding compensation for federal employees—including deployed civilians—are subject to regulations and guidance issued by either OPM or other executive agencies, in accordance with underlying statutory personnel authorities. In some cases, the statutes and implementing regulations provide agency heads with flexibility in how they administer their compensation policies. For example, agency heads are currently authorized by statute to provide their civilians deployed to combat zones with certain benefits—such as death gratuities and leave benefits—comparable to those provided the Foreign Service, regardless of the underlying pay system of the employee’s agency. However, some variations in compensation available to deployed civilians result directly from the employing agency’s pay system and the employee’s pay grade/band level. For example, deployed civilians, who are often subject to extended work hours, may expect to work 10-hour days, 5 days a week, resulting in 20 hours of overtime per pay period. A nonsupervisory GS-12 step 1 employee receives a different amount of compensation for overtime hours than a nonsupervisory employee who earns an equivalent salary under NSPS. Specifically, the NSPS nonsupervisory employee is compensated for overtime at a rate equivalent to 1.5 times the normal hourly rate, while the GS nonsupervisory employee is compensated for overtime at a rate equivalent to 1.14 times the normal hourly rate. Further, we noted that a GS-12 step 1 employee receives a different rate of compensation for overtime hours than a GS-12 step 6 employee. Specifically, the GS-12 step 1 employee is compensated for overtime at a rate equivalent to 1.14 times the normal hourly rate, while the GS-12 step 6 employee is compensated for overtime at the normal hourly rate. (A worked example of these overtime differentials follows this passage.) Additionally, deployed civilians may receive different compensation based on their deployment status. Agencies have some discretion to determine the travel status of their deployed civilians based on a variety of factors; DOD, for example, considers length of deployment, employee and agency preference, and cost. Generally, deployments scheduled for 180 days or less are classified as “temporary duty” assignments, whereas deployments lasting more than a year result in an official “change of station” assignment. Nonetheless, when civilians are to be deployed long term, agencies have some discretion to place them in either temporary duty or change of station status, subject to certain criteria. The status under which civilians deploy affects the type and amount of compensation they receive. For example, approximately 73 percent of the civilians who were deployed between January 1, 2006, and April 30, 2008, by the six agencies we reviewed were deployed in temporary duty status and retained their base salaries, including the locality pay associated with their home duty stations. 
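The following minimal sketch illustrates the overtime differentials described above. The multipliers (1.5, 1.14, and 1.0) and the 20 hours of biweekly overtime come from the testimony; the $40.00 base hourly rate is a hypothetical figure chosen only for illustration, since actual GS-12 rates vary by step and locality.

```python
# Back-of-the-envelope comparison of overtime pay across pay systems.
# Assumptions (hypothetical, for illustration only): a $40.00 base hourly
# rate; 20 overtime hours per biweekly pay period (10-hour days, 5 days a
# week, as described in the testimony).
BASE_HOURLY_RATE = 40.00
OVERTIME_HOURS_PER_PAY_PERIOD = 20

def overtime_pay(rate_multiplier: float) -> float:
    """Overtime compensation for one pay period at the given multiplier."""
    return BASE_HOURLY_RATE * rate_multiplier * OVERTIME_HOURS_PER_PAY_PERIOD

# Overtime multipliers reported above for nonsupervisory employees:
scenarios = {
    "NSPS (1.5x normal hourly rate)": 1.5,
    "GS-12 step 1 (1.14x normal hourly rate)": 1.14,
    "GS-12 step 6 (normal hourly rate)": 1.0,
}

for label, multiplier in scenarios.items():
    print(f"{label}: ${overtime_pay(multiplier):,.2f} per pay period")
# Output: $1,200.00 (NSPS), $912.00 (GS-12 step 1), $800.00 (GS-12 step 6)
```

Under these assumptions, the pay system alone produces differences of several hundred dollars in overtime compensation per pay period for employees working the same hours under the same conditions.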
Civilians deployed to Iraq or Afghanistan as a change of station do not receive locality pay, but they do receive base salary and may be eligible for a separate maintenance allowance, which varies in amount based on the number of dependents the civilian has. The civilian’s base salary also affects the computation of certain deployment-related pays, such as danger pay and post hardship differential, as well as the computation of premium pay such as overtime. Consequently, whether a civilian’s base salary includes locality pay can significantly affect the total compensation to which that civilian is entitled—resulting in differences of several thousand dollars. As a result of these variations, deployed civilians at equivalent pay grades who work under the same conditions and face the same risks may receive different compensation. As mentioned previously, the Subcommittee on Oversight and Investigations, House Armed Services Committee, recommended in April 2008 that OPM develop a benefits package for all federal civilians deployed to war zones, to ensure that they receive equitable benefits. But at the time of our 2009 review, OPM had not developed such a package or provided legislative recommendations. In September 2009, OPM officials stated that DOD had initiated an interagency working group to discuss compensation issues and that this group had developed some proposals for legislative changes. However, they noted at that time that these proposals had not yet been submitted to Congress and, according to DOD officials, did not represent a comprehensive package for all civilians deployed to war zones, as the Subcommittee had recommended. Furthermore, compensation policies were not always implemented accurately or in a timely manner. For example, based on our survey results, we project that approximately 40 percent of the estimated 2,100 civilians deployed from January 1, 2006, to April 30, 2008, experienced problems with compensation—such as not receiving danger pay or receiving it late—in part because they were unaware of their eligibility or did not know where to go for assistance to start and stop these deployment-related pays. In fact, officials at four agencies acknowledged that they had experienced difficulties in effectively administering deployment-related pays, in part because there is no single source of guidance delineating the various pays associated with deployment of civilians. As we previously reported concerning their military counterparts, unless deployed personnel are adequately supported in this area, they may not be receiving all of the compensation to which they are entitled. Additionally, in January 2008, Congress authorized an expanded death gratuity—under the Federal Employees’ Compensation Act (FECA)—of up to $100,000 to be paid to the survivor of a deployed civilian whose death resulted from injuries incurred in connection with service with an armed force in support of a contingency operation. Congress also gave agency heads discretion to apply this death gratuity provision retroactively for any such deaths occurring on or after October 7, 2001, as a result of injuries incurred in connection with the civilian’s service with an armed force in Iraq or Afghanistan. At the time of our 2009 review, Labor—the agency responsible for the implementing regulations under FECA—had not yet issued its formal policy on administering this provision. 
Labor officials told us in May 2009 that, because of the recent change in administration, they could not provide us with an anticipated issue date for the final policy. Officials from the six agencies included in our review stated at that time that they were delaying the development of policies and procedures to implement the death gratuity until after Labor issued its policy. As a result, some of these agencies had not moved forward on these provisions when we issued our report. We therefore recommended that (1) OPM oversee an executive agency working group on compensation for deployed civilians to address any differences and, if necessary, make legislative recommendations; (2) the agencies included in our review establish ombudsman programs or, for agencies deploying small numbers of civilians, focal points to help ensure that deployed civilians receive the compensation to which they are entitled; and (3) Labor set a time frame for issuing implementing guidance for the death gratuity. We provided a copy of the draft report to the agencies in our review. With the exception of USAID, which stated that it already had an ombudsman to assist its civilians, all of the agencies generally concurred with these recommendations. USAID officials, however, at the time of our testimony, had not provided any documentation to support the existence of the ombudsman position. In the absence of such documentation, we continue to believe our recommendation has merit. In comments on our final report, OPM officials stated that an interagency group was in the process of developing proposals for needed legislation. However, at the time of this testimony, these officials stated that no formal legislative proposals had been submitted. In addition, some of the agencies have taken action to create ombudsman programs. Specifically, DOD and USDA officials stated that their ombudsman programs have been implemented. Additionally, Justice and State officials stated that they would take action such as developing policy and procedures for their ombudsman programs; however, at the time of this testimony, USDA, Justice, and State had not provided documentation to support their statements. Finally, the Department of Labor published an interim final rule implementing the $100,000 death gratuity under FECA in August 2009 and finalized the rule in February 2010. Although agency policies on medical benefits are similar, our 2009 review found some issues with policies related to medical treatment following deployment and with the implementation of workers’ compensation and post-deployment medical screening that affect the medical benefits of these civilians. DOD and State guidance provides for medical care of all civilians during their deployments—regardless of the employing agency. For example, DOD policies entitle all deployed civilians to the same level of in-theater medical treatment as military personnel. State policies entitle civilians serving under the authority of the Chief of Mission to treatment for routine medical needs at State facilities while they are in theater. While DOD guidance provides for care at military treatment facilities for all DOD civilians—under workers’ compensation—following their deployments, we reported that the guidance does not clearly define the “compelling circumstances” under which non-DOD civilians would be eligible for such care. 
Because DOD’s policy was unclear, we found that confusion existed within DOD and other agencies regarding civilians’ eligibility for care at military treatment facilities following deployment. Furthermore, officials at several agencies were unaware that civilians from their agencies were potentially eligible for care at DOD facilities following deployment, in part because these agencies had not received the guidance from DOD about this eligibility. Because some agencies were not aware of their civilians’ eligibility for care at military treatment facilities following deployment, these civilians could not benefit from the efforts DOD has undertaken in areas such as post-traumatic stress disorder. Moreover, civilians who deploy may also be eligible for medical benefits through workers’ compensation if Labor determines that their medical condition resulted from personal injury sustained in the performance of duty during deployment. Our review of all 188 workers’ compensation claims related to deployments to Iraq or Afghanistan that were filed with the Labor Department between January 1, 2006, and April 30, 2008, found that Labor requested additional information in support of these claims in 125 cases, resulting in increased processing times that in some instances exceeded the department’s standard goals for processing claims. Twenty-two percent of the respondents to our survey who had filed workers’ compensation claims stated that their agencies provided them with little or no support in completing the paperwork for their claims. Labor officials stated that applicants failed to provide adequate documentation, in part because they were unaware of the type of information they needed to provide. Furthermore, our review of Labor’s claims process indicated that Labor’s form for a traumatic injury did not specify what supporting documents applicants had to submit to substantiate a claim. Specifically, while this form states that the claimant must “provide medical evidence in support of disability,” the type of evidence required is not identified. Without clear information on what documentation to submit in support of their claims, applicants may continue to experience delays in the process. Additionally, DOD requires deploying civilians to be medically screened both before and following their deployments. However, we found that post-deployment screenings are not always conducted, because DOD lacks standardized procedures for processing returning civilians. Approximately 21 percent of DOD civilians who responded to our survey stated that they did not complete a post-deployment health assessment. In contrast, we determined that State generally requires a medical clearance as a precondition to deployment but has no formal requirement for post-deployment screenings of civilians who deploy under its purview. Our prior work has found that documenting the medical condition of deployed civilians both before and following deployment is critical to identifying conditions that may have resulted from deployment, such as traumatic brain injury. 
To address these matters, we recommended that (1) DOD clarify its guidance concerning the circumstances under which civilians are entitled to treatment at military treatment facilities following deployment and formally advise other agencies that deploy civilians of its policy governing treatment at these facilities; (2) Labor revise the application materials for workers’ compensation claims to make clear what documentation applicants must submit with their claims; (3) the agencies included in our review establish ombudsman programs or, for agencies deploying small numbers of civilians, focal points to help ensure that deployed civilians get timely responses to their applications and receive the medical benefits to which they are entitled; (4) DOD establish standard procedures to ensure that returning civilians complete required post-deployment medical screenings; and (5) State develop post-deployment medical screening requirements for civilians deployed under its purview. The agencies generally concurred with these recommendations, with the exception of USAID, which stated that it already had an ombudsman to assist its civilians. USAID officials, however, at the time of this testimony, had not provided any documentation to support the existence of the ombudsman position. In the absence of such documentation, we continue to believe our recommendation has merit. To clarify DOD’s guidance concerning the availability of medical care at military treatment facilities following deployment for non-DOD civilians and to formally advise other agencies that deploy civilians of the circumstances under which care will be provided, DOD notified these agencies about its policies in an April 1, 2010, letter. Specifically, the letter identified information the department posted on its Civilian Expeditionary Workforce Web site. This information included (1) a training aid explaining the procedures for requesting access to a military treatment facility following deployment, (2) a standard form to request approval to receive treatment at a military treatment facility following deployment, and (3) frequently asked questions that, according to DOD, provide further clarity on its policies. In addition, DOD has taken some steps to standardize procedures for ensuring that civilians returning from deployment complete required post-deployment medical screenings. For example, guidance on DOD’s Civilian Expeditionary Workforce Web site states that deployment out-processing will include completion of the post-deployment health assessment. On the other hand, State officials noted that they would implement post-deployment screenings in 2010; however, as of April 2010, State had not provided documentation showing that it had established such requirements. Finally, officials from some of the agencies told us that they have taken action to create ombudsman programs. Specifically, officials from DOD and USDA said that their programs have been implemented. In addition, officials from Justice and State stated that they would take action such as developing policy and procedures for their ombudsman programs; however, at the time of this testimony, USDA, Justice, and State had not provided documentation to support their statements. While each of the agencies we reviewed was able to provide a list of deployed civilians, none of these agencies had fully implemented policies and procedures to identify and track its civilians who have deployed to Iraq and Afghanistan. 
DOD, for example, issued guidance and established procedures for identifying and tracking deployed civilians in 2006 but concluded in 2008 that its guidance and associated procedures were not being consistently implemented across the agency. In 2008 and 2009, DOD reiterated its policy requirements and again called for DOD components to comply. The other agencies we reviewed had some ability to identify deployed civilians, but they did not have any specific mechanisms designed to identify or track location-specific information on these civilians. As we have previously reported, the ability of agencies to report location-specific information on employees is necessary to enable them to identify potential exposures or other incidents related to deployment. Lack of such information may hamper these agencies’ ability to intervene quickly to address any future health issues that may result from deployments in support of contingency operations. We therefore recommended that (1) DOD establish mechanisms to ensure that its policies to identify and track deployed civilians are implemented and (2) the five other executive agencies included in our review develop policies and procedures to accurately identify and track standardized information on deployed civilians. The agencies generally concurred with these recommendations, with the exception of USAID, which stated that it already had an appropriate mechanism to track its civilians who had deployed but was consolidating its currently available documentation. We continue to disagree with USAID’s position, since the agency does not have an agencywide system for tracking civilians, and we believe that our recommendation is appropriate. Additionally, the other agencies are now in various stages of implementation. For example, DOD officials stated, at the time of this testimony, that they were in the process of developing a new DOD instruction that would include procedures for the department’s components to track its civilians. Justice officials stated that they would establish policies and procedures, while USDA officials said they would rely on State Department-led offices in Iraq and Afghanistan, along with internal measures such as spreadsheets and travel authorizations, to track their personnel. State Department officials noted that, after talking with executive agencies including DOD, they planned to establish their own tracking mechanisms. Deployed civilians are a crucial resource for success in the ongoing military, stabilization, and reconstruction operations in Iraq and Afghanistan. Most of the civilians who deploy to these assignments—68 percent of those in our review—volunteered to do so; they are motivated by a strong sense of patriotism and are often exposed to the same risks as military personnel. Because these civilians are deployed from a number of executive agencies and work under a variety of pay systems, any inconsistencies in the benefits and compensation they receive could affect that volunteerism. Moreover, DOD’s and State’s continued efforts to develop cadres of deployable civilians demonstrate that these agencies recognize the critical role that federal civilians play in supporting ongoing and future contingency operations and stabilization and reconstruction efforts throughout the world. 
Given the importance of the missions these civilians support and the potential dangers in the environments in which they work, agencies should make every reasonable effort to ensure that the compensation and benefits packages associated with such service overseas are appropriate and comparable for civilians who take on these assignments. It is equally important that federal executive agencies that deploy civilians make every reasonable effort to ensure that these civilians receive all of the medical benefits and compensation to which they are entitled. These efforts include maintaining sufficient data to enable agencies to inform deployed civilians about any emerging health issues that might affect them. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have at this time. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Department of Defense (DOD) and other executive agencies increasingly deploy civilians in support of contingency operations in Iraq and Afghanistan. Prior GAO reports show that the use of deployed civilians has raised questions about the potential for differences in policies on compensation and medical benefits. When these civilians are deployed and serve side by side, differences in compensation or medical benefits may become more apparent and could adversely affect morale. This statement is based on GAO's 2009 congressionally requested report, which compared agency policies and identified any issues in policy or implementation regarding (1) compensation, (2) medical benefits, and (3) identification and tracking of deployed civilians. GAO reviewed laws, policies, and guidance; interviewed responsible officials at the Office of Personnel Management (OPM); and conducted a survey of civilians deployed from the six agencies between January 1, 2006, and April 30, 2008. GAO made ten recommendations for agencies to take actions such as reviewing compensation laws and policies, establishing medical screening requirements, and creating mechanisms to assist and track deployed civilians. Seven of the agencies--including DOD--generally agreed with these recommendations; U.S. Agency for International Development did not. This testimony also updates the actions the agencies have taken to address GAO's recommendations. While policies concerning compensation for deployed civilians are generally comparable, GAO found some issues that can lead to differences in the amount of compensation and the accuracy, timeliness, and completeness of this compensation. For example, two comparable nonsupervisory employees who deploy under different pay systems may receive different rates of overtime pay because this rate is set by the employee's pay system and grade/band. While a congressional subcommittee asked OPM to develop a benefits package for civilians deployed to war zones and recommend enabling legislation, at the time of GAO's 2009 review, OPM had not yet done so. Also, implementation of some policies may not always be accurate or timely. 
For example, GAO estimates that about 40 percent of the deployed civilians in its survey reported experiencing problems with compensation, including problems with danger pay. In June 2009, GAO recommended, among other things, that OPM oversee an executive agency working group on compensation to address differences and, if necessary, make legislative recommendations. OPM generally concurred with this recommendation and recently informed GAO that an interagency group is in the process of developing proposals for needed legislation. Although agency policies on medical benefits are similar, GAO found some issues with medical care following deployment and post-deployment medical screenings. Specifically, while DOD allows its treatment facilities to care for non-DOD civilians after deployment in some cases, the circumstances are not clearly defined and some agencies were unaware of DOD's policy. Further, while DOD requires medical screening of civilians before and following deployment, State requires screenings only before deployment. Prior GAO work found that documenting the medical condition of deployed personnel before and following deployment was critical to identifying conditions that may have resulted from deployment. GAO recommended, among other things, that State establish post-deployment screening requirements and that DOD establish procedures to ensure its post-deployment screening requirements are completed. DOD and State agreed; DOD has developed guidance establishing procedures for post-deployment screenings, but, as of April 2010, State had not provided documentation that it had established such requirements. Each agency provided GAO with a list of deployed civilians, but none had fully implemented policies to identify and track these civilians. DOD had procedures to identify and track civilians but concluded that its guidance was not consistently implemented. Some agencies had to manually search their systems. Thus, agencies may lack critical information on the location and movement of personnel, which may hamper their ability to intervene promptly to address emerging health issues. GAO recommended that DOD enforce its tracking requirements and that the other five agencies establish tracking procedures. While DOD and four agencies concurred with the recommendations and are now in various stages of implementation, U.S. Agency for International Development disagreed, stating that its current system is adequate. GAO continues to disagree with this agency's position. |
In 1995, the latest year for which complete data were available, about 65 percent (or about 647,000) of the inmates in custody in federal and state places of confinement participated in 1 or more types of work programs. These work programs included prison industries (e.g., involving the manufacture of license plates, wood products, and textiles); facility support services (e.g., doing office and administrative work, food service, laundry, and building maintenance); farming/agriculture; and public works assignments (i.e., inmates working outside the facility on road, park, or other public maintenance work). Data entry was the type of work that most often allowed inmates access to personal information. One mission of the Federal Prison Industries (FPI), a BOP component, is to employ and provide skills training to the greatest practicable number of inmates and to produce market-priced quality goods in a self-sustaining manner that minimizes potential impact on private business and labor. FPI markets about 150 types of products and services to federal agencies. Some states had similar programs and provisions. For example, Alabama generally requires state departments, institutions, and political subdivisions to purchase their products and services from Alabama Correctional Industries, to the extent to which they can be supplied. In addition, only those entities can purchase Correctional Industries products. According to the Alabama Correctional Industries purpose statement, it exists primarily for the purpose of providing a work-training program for inmates of the Department of Corrections. Another important purpose is to assist all state departments, institutions, and political subdivisions of the State to secure their requirements to the greatest possible extent. To obtain information on the assignment objectives, we surveyed BOP and state correctional industry officials by mail. We asked the officials to answer questions on correctional industry work programs in federal, state, and privately run facilities for which the federal or state government or state-appointed organizations had oversight. We limited the questionnaire to work programs associated with secure, confined facilities, including youth authorities but excluding programs associated with prerelease facilities and city and county jails. We asked if, on September 30, 1998, they had inmates who, through performing (1) work on correctional industry work program contracts that were either in progress or were agreed to but the work had not been started or (2) support work for the industry work program operations, had access to personal information or only names and addresses or telephone numbers; what prison procedures, statutes, regulations, pending legislation, or other guidelines provided guidance on (1) limiting which inmates perform work involving access to personal information and (2) preventing personal information from being retained by inmates or being transferred to unauthorized inmates or other persons; what the total gross income was for the correctional industry work program and the income generated by those contracts that resulted in inmates having access to personal information in the most recently completed fiscal year; and what incidents of misuse occurred as a result of inmates having access to the information through correctional industry work programs. We received responses from BOP, 47 states, and the District of Columbia. We did not independently verify the information provided by questionnaire respondents. 
We did, however, compare the questionnaire responses to the results of our current literature and legal database searches. After we consolidated the data received from the questionnaire respondents in the tables included in this report, we faxed the compiled information to all of the questionnaire respondents for confirmation of the accuracy of the data displayed and made corrections as necessary. We interviewed BOP and state officials. We also contacted states’ attorneys general to obtain information on (1) incidents of misuse of which they were aware and (2) state statutes or regulations, pending legislation, or other guidelines that provided guidance on work programs involving personal information. We requested comments on a draft of this report from BOP and the Correctional Industries Association, Inc. They provided written comments that are summarized at the end of this report and are reprinted in appendixes X and XI. We performed our work from June 1998 to June 1999 in accordance with generally accepted government auditing standards. Appendix I provides more details on our objectives, scope, and methodology. On September 30, 1998, about 1,400 inmates in BOP and 19 state prison systems had access to personal information through correctional industry work programs, according to the questionnaire respondents. This number accounts for (1) about one-tenth of 1 percent of all inmates in custody as of June 30, 1998 (approximately 1.2 million) and (2) about 2 percent of all inmates participating in correctional industry work programs (approximately 61,500). (A quick arithmetic check of these shares appears at the end of this passage.) Almost all of the inmates who had access to personal information were being held in federal or state-run facilities (1,332 inmates) as opposed to privately run facilities (25 inmates). The number of inmates with access to personal information in each of the 19 states ranged from 6 in New Jersey to 426 in California. The types of information to which the largest number of inmates had access were (1) names and dates of birth or (2) Social Security numbers. About 30 percent of the inmates had access to names and (1) drivers’ license numbers or (2) vehicle makes and models. Appendix II shows the number of inmates in BOP and individual state prison systems that had access to personal information on September 30, 1998, and the types of information to which they had access. Most of the inmates who had access to personal information were performing work for federal, state, or local governments (93 percent) as opposed to private sector companies (7 percent). Over half of the inmates with access to personal information were involved in data entry work. About another 25 percent of the inmates were duplicating or scanning documents. Types of information processed in these work programs included medical records; state, county, or local licenses; automobile registrations; unemployment records; student enrollment data; and accident reports. The length of time the contracts that resulted in inmates having access to personal information had been in effect ranged from less than 1 year to 19 years. About one-quarter of the contracts had been in place from 10 to 19 years; the remainder were more recent. 
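As a quick back-of-the-envelope check of the shares reported above, the proportions can be reproduced directly from the rounded figures in this report (the exact underlying counts are not published here, so the results are approximate):

```python
# Quick check of the reported shares, using the rounded figures above.
inmates_with_access = 1_400          # approximate, per questionnaire responses
all_inmates_in_custody = 1_200_000   # approximately 1.2 million (June 30, 1998)
industry_program_inmates = 61_500    # approximate industry program participants

print(f"Share of all inmates in custody: "
      f"{inmates_with_access / all_inmates_in_custody:.2%}")      # ~0.12%
print(f"Share of industry work program participants: "
      f"{inmates_with_access / industry_program_inmates:.2%}")    # ~2.28%
```

Both results are consistent with the "about one-tenth of 1 percent" and "about 2 percent" figures cited above.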
The reasons BOP and states most commonly identified for selecting the contracts that resulted in inmates having access to personal information were that the contracts provided valuable job skills training, satisfied a need or demand for a service, were needed to provide work for more inmates, were profitable, and provided work that was relatively easy for training inmates. Questionnaire respondents from 11 states said they planned to add and/or expand existing correctional industry work programs that allow inmates access to personal information. Respondents from 29 states said they did not plan to add or expand existing work programs that would allow inmates access to personal information, and respondents from 8 states said they did not know whether their states had plans to add and/or expand existing correctional industry work programs that would allow inmates access to personal information. In response to our survey, 29 states indicated that inmates did not have access to personal information on September 30, 1998. The more commonly stated reasons were that the opportunity had not presented itself, the prisons prohibited such work programs, and public opinion limited the feasibility of implementing such work programs. BOP and each state that had work programs in which inmates had access to personal information reported that they had in place a variety of safeguards to prevent inmates from misusing personal information. In addition, BOP and most of the states in which inmates had access to personal information reported that they had prison procedures that limited which inmates could perform work that would give them access to personal information. The federal government and seven states in which inmates had access to personal information were identified as either having enacted statutes or having bills pending related to limiting which inmates could perform work involving personal information. The safeguards most frequently reported as being used when inmates had access to personal information were close supervision; selective hiring (e.g., excluding inmates convicted of sex offenses or fraud); confidentiality agreements; and security checks at the work area exits. Other commonly used safeguards included security checks at the work area entrances, no photocopy machines in the work area, and monitored telephone calls. Appendix III provides additional information on the safeguards cited by questionnaire respondents. BOP and most of the 19 states in which inmates had access to personal information reported that they had prison procedures that placed limitations on which inmates could perform work that would give them access to personal information. Questionnaire respondents from BOP and 18 states said that they screened inmates before hiring them for work programs involving personal information. For example, one state respondent said that inmates who were convicted of rape or who have life sentences were ineligible to work on contracts where they would have access to personal information. In addition, in the course of our work, statutes or proposed legislation related to this issue were identified in the federal government and seven of the states in which inmates had access to personal information. A brief summary of these provisions is provided in appendix IV, table IV.1. Further, six states in which inmates did not have access to personal information were identified as having enacted statutes or introduced legislation related to this issue. 
For more information on these statutes and pending bills, see appendix IV, table IV.2. Less than one-hundredth of 1 percent of the BOP’s annual gross correctional industry income of $568 million was generated from its contract that allowed inmates access to personal information. For those states in which inmates had access to personal information, no more than 22 percent of any state’s gross fiscal year 1998 correctional industry income was generated from these contracts; six states reported that less than 1 percent of their gross correctional industry income was earned from these contracts. In total, these states grossed about $18 million in 1998 from correctional industry work program contracts that allowed inmates access to personal information, compared to an annual gross correctional industry income of about $515 million. Appendix V provides information on the income generated from these contracts. About 5,500 inmates, in BOP and 31 state prison systems, had access to only names and addresses or telephone numbers through correctional industry work programs. Over half of these inmates were in the custody of BOP. Appendix VI presents these data by BOP and state. The types of work inmates were performing in the largest number of states in which they had this access were order fulfillment, data entry, shipping, and printing. For additional information on the types of work performed by inmates with access to only names and addresses or telephone numbers, see appendix VII. The safeguards that BOP and most states reported using when inmates had access to only names and addresses or telephone numbers were similar to those reported being used when inmates had access to personal information. The most commonly used safeguards reported by states included close supervision while working, security checks at the exits from the work areas, selective hiring, and security checks at the entrances to the work areas. For additional information on safeguards that BOP and states used when inmates had access to only names and addresses or telephone numbers, see appendix VIII. Questionnaire respondents from eight states reported a total of nine incidents in which inmates misused personal information or names and addresses or telephone numbers that they obtained from a correctional industry work program. We defined misuse of information as any action that threatened or caused injury to the physical, psychological, or financial well-being of any member of the public. Each of these incidents was associated with a different contract. Six of the incidents involved inmates contacting individuals identified through a work program by telephone or by mail (in one of these instances, the inmate in the work program passed information on an individual to another inmate, who then contacted the individual). Two incidents involved inmates using credit card numbers that they obtained through participating in a work program. The other incident involved two inmates’ attempts to smuggle copies of documents out of the prison through the U.S. mail. Five of the contracts related to these incidents were terminated after the incident occurred. In three of the four other incidents, the prison responded by either adding new safeguards or reinforcing existing safeguards used on the contract. In the remaining incident, the prison’s procedures remained the same. For more information on these incidents, see appendix IX. 
Questionnaire respondents also provided information on four additional incidents that did not meet the previously described criteria for misuse of personal information. These four incidents were not included in appendix IX for one or more of the following reasons: no reported injury, a court finding of no wrongdoing, or termination of the inmate from the work program on the basis of an allegation or suspected wrongdoing. These incidents, however, resulted in some type of program change. The types of program changes ranged from adding or reinforcing policies and safeguards to program termination. Briefly, these incidents, as reported by the respondents, consisted of the following: An inmate was processing accident reports in a data entry work program. He told another inmate, not in the work program, about an individual involved in one of the accident reports he processed. The other inmate contacted the individual involved in the accident. The questionnaire respondent reported that nobody was harmed, safeguards did not fail, and no sanctions were imposed. After this incident, the state reportedly reinforced its policies and safeguards associated with this contract. An inmate working in a data entry work program saw, reportedly by accident, a state document that had information about one of his family members. He spoke with another member of his family about the information he saw. A member of his family filed a lawsuit claiming that the inmate should not have had access to this information. The questionnaire respondent reported that the case was dismissed because the information was covered by an open record regulation whereby birth records are considered to be public records. The state, however, canceled the contract for processing this type of information. An inmate working in a telemarketing work program was accused of harassing a customer. The inmate was terminated and transferred to maximum security on the basis of the allegation alone. The state reportedly implemented additional safeguards after the alleged incident was reported. An inmate wrote a letter to an individual, and it was suspected that the inmate obtained the individual’s name and address through the work program. According to the survey response, the inmate was disciplined and terminated from the work program, and a measure providing for the closer monitoring of inmates was instituted. In commenting on our report, BOP concurred with one exception. BOP noted that, since our survey, it has changed its procedures and no inmates in the BOP prison system now have access to personal information. Since our methodology was to report on the number of inmates who had access to personal information on September 30, 1998, we did not eliminate the 25 BOP inmates whom we reported as having access to personal information. (See app. X.) The Correctional Industries Association, Inc., in its comments, said that our report was fair and thorough and presented the facts objectively. However, it took two exceptions to the report. First, the Association said that the information on inmates’ access to personal information is presented largely out of context. We disagree. Our draft report said that of approximately 1.2 million inmates, about 1,400 in BOP and 19 state prison systems had access to personal information through correctional industry work programs. 
We noted that less than one-hundredth of 1 percent of BOP’s and no more than 22 percent of any state’s fiscal year 1998 gross correctional industry income was generated from contracts that resulted in inmates having access to personal information. Further, we pointed out that about a quarter of the contracts that resulted in inmates having access to personal information had been in place from 10 to 19 years. Second, the Association said that a benchmark is needed against which the success or failure of correctional industries to control access issues can be measured. We did not judge whether the correctional industries have succeeded or failed in their attempt to prevent the misuse of personal information to which inmates had access as the result of work programs because we are not aware of criteria by which to make such a judgment. However, given that the inmates with access to personal information are individuals who have been incarcerated for crimes, and given that the institutional settings permit work program officials to exercise close scrutiny over the inmates and workplaces, breaches of security and misuses of personal information are a cause for concern. (See app. XI.) As agreed, unless you announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to the Honorable Janet Reno, Attorney General; the Honorable Kathleen Hawk Sawyer, Director, BOP; Ms. Gwyn Smith Ingley, Executive Director, Correctional Industries Association, Inc.; the states that responded to our survey; and other interested parties. Copies will also be made available to others upon request. The major contributors to this report are acknowledged in appendix XII. If you or your staff have any questions about the information in this report, please contact me or Brenda Bridges at (202) 512-8777. The objectives of our study were to determine the extent to which inmates in the BOP and state prison systems had access to personal information through correctional industry work programs; identify prison safeguards and procedures, statutes and regulations, and proposed legislation that addressed correctional industry work programs involving personal information; determine the extent to which contracts that provided inmates access to personal information contributed to BOP’s and states’ correctional industry income; determine the extent to which inmates in the BOP and state prison systems had access to only names and addresses or telephone numbers through correctional industry work programs; and identify incidents of inmates misusing information obtained through a correctional industry work program, including how safeguards failed and what, if any, changes were made as a result of the incidents. For our study, we defined correctional industry work programs as programs that produced products and services for sale to government agencies and/or to the private sector. We excluded institutional work programs, i.e., programs that would involve activities such as housekeeping, food services, day-to-day maintenance, and community service, as well as support programs in which an inmate may have inadvertently seen personal information. 
The scope of our study included work programs that were (1) overseen by BOP, a state government, or a state-appointed commission; (2) associated with federal, state, or privately run facilities; and (3) associated with secure, confined facilities—including youth authorities—but not programs associated with prerelease facilities or city or county jails. We defined “personal information” as information that could be used to threaten an individual’s physical, psychological, or financial well-being. This information would include (1) credit card numbers (personal or business); (2) Social Security numbers; or (3) names in combination with physical descriptions or financial, medical, or motor vehicle information. We also collected data on inmates’ access to “names and addresses or telephone numbers,” which included a name and one or more of the following: work or home address or telephone number, name of employer, or job title, but no other item that we defined as personal information. To meet the assignment objectives, we surveyed, by mail, correctional industry officials in BOP, all 50 states, and the District of Columbia. The questionnaire asked for information on the following: correctional industry work program contracts that involved personal information that were either orders-in-progress or that had been agreed to but had not yet been started on September 30, 1998; the number of inmates who had access to personal information or to names and addresses or telephone numbers through correctional industry work program contracts or support work; safeguards that were in place to prevent inmates from misusing the information; statutes, regulations, procedures, other guidelines, and proposed legislation that dealt with correctional industry work programs involving personal information; the gross income in the most recently completed fiscal year for the correctional industry work program overall and for those contracts that involved personal information; and incidents of misuse of information that occurred at any time as a result of inmate access to the information through a correctional industry work program. We asked questionnaire respondents for information on inmates who had access to (1) personal information or (2) names and addresses or telephone numbers, either through working on a correctional industry work program contract or through performing support work for the industry work program operations. We defined a contract as a formal or informal agreement to produce a specific product or perform a specific service. We defined inmates who were performing support work as inmates who were not associated with a specific correctional industry work program contract but who performed tasks—such as order taking, order fulfillment, manufacturing or customer support, complaint resolution, or shipping—that supported the industry work program operations. In designing our questionnaire, we received input from the Correctional Industries Association, Inc. (a nonprofit professional organization representing individuals and agencies engaged in and concerned with correctional industries) and federal and state correctional industry officials. We revised the questionnaire based on the feedback these officials provided. We made further changes based on input from correctional industry officials as a result of pilot testing the survey instrument in Maryland and Virginia.
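The two definitions above draw a sharp line between the information categories tabulated in this report: a record counts as personal information if it meets any of the three conditions, and it counts as names and addresses or telephone numbers only if it contains no such item. A minimal sketch (illustrative Python with hypothetical field names; this is not an instrument used in the study) encodes the rule:

    # Classify a record under the report's definitions. Field names are
    # hypothetical.
    PERSONAL_ALONE = {"credit_card_number", "social_security_number"}
    PERSONAL_WITH_NAME = {"physical_description", "financial", "medical", "motor_vehicle"}
    CONTACT = {"home_address", "work_address", "home_phone", "work_phone",
               "employer_name", "job_title"}

    def classify(fields):
        fields = set(fields)
        # Personal information: a credit card or Social Security number,
        # or a name combined with physical, financial, medical, or motor
        # vehicle information.
        if fields & PERSONAL_ALONE or ("name" in fields and fields & PERSONAL_WITH_NAME):
            return "personal information"
        # Names and addresses or telephone numbers: a name plus contact or
        # employment details, checked only after the first category fails,
        # which mirrors the "but no other item" exclusion.
        if "name" in fields and fields & CONTACT:
            return "names and addresses or telephone numbers"
        return "neither"

    print(classify({"name", "social_security_number"}))  # personal information
    print(classify({"name", "home_address"}))            # names and addresses or telephone numbers

Because the personal information conditions are tested first, a record containing both a name and, say, a Social Security number is never counted in the names-and-addresses category.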
To identify questionnaire recipients, we called the contact point for each state’s correctional industry program as identified in the 1998 Correctional Industries Association, Inc., Directory. We informed them of our assignment and asked whether they would be the proper recipients for the questionnaire. We asked these officials if their state had any privately run prisons that housed inmates from their state prison system or from other states’ prison systems. If they had such facilities, we asked them to identify the individual who had oversight responsibilities for work programs in these facilities. To further ensure that we had a respondent for each privately run facility that met our criteria (i.e., the facility was a secure, confined facility—including youth authorities—but not a prerelease facility or city or county jail, and any work programs in the facility would be overseen by BOP, a state government, or a state-appointed commission), we obtained a list of privately run correctional facilities from the Private Corrections Project Internet web site. We then contacted the individuals whom we had identified as overseeing work programs at privately run facilities to ensure that they had responsibility for each facility that met our criteria. If they stated that they did not have responsibility, we asked them who did and repeated this procedure until we reached the appropriate party. We mailed a total of 63 questionnaires: 1 to BOP, 1 to each state and the District of Columbia, 1 to a youth authority, 1 to a joint venture program, and 1 each to 9 privately run facilities that had been identified by the method described above. Representatives from two states, Arizona and Tennessee, informed us that they would not be participating in our survey. Ohio’s representative also indicated that he would not be completing the questionnaire but told us that Ohio does not permit inmates involved in data entry to have access to credit card numbers or Social Security numbers. When we received the questionnaires, we followed up by telephone on missing or incomplete data, consolidated the data into the tables displayed in this report, faxed the completed tables to all questionnaire respondents for confirmation of the accuracy of the data displayed, and made corrections as necessary. Questionnaire respondents were provided only with compiled data concerning their individual states. We also conducted literature and legal database searches to identify published articles, reports, studies, statutes, proposed bills, and other documents dealing with the assignment objectives. We contacted representatives from various organizations to determine what information they may have that related to our assignment objectives. These organizations included the American Correctional Association; Correctional Industries Association, Inc.; American Jail Association; American Federation of Labor and Congress of Industrial Organizations; and Union of Needletraders, Industrial and Textile Employees. We contacted each state’s attorney general’s office and the District of Columbia’s Corporation Counsel to identify any additional (1) incidents of inmates misusing information obtained through correctional industry work programs and (2) state statutes or regulations, proposed legislation, or other guidance that dealt with correctional industry work programs involving personal information. We did not verify the completeness of the information provided.
We contacted various federal agencies with investigatory responsibilities to determine if they were aware of instances of inmates misusing personal information that they obtained through correctional industry work programs. Within the Department of the Treasury, we contacted the Internal Revenue Service’s Criminal Investigation Division and the U.S. Secret Service. Within the Department of Justice, we contacted the Federal Bureau of Investigation. Finally, we contacted the U.S. Postal Service and the Social Security Administration. We performed our work between June 1998 and June 1999 in accordance with generally accepted government auditing standards.

[Table: number of inmates with access to personal information, by state and contract]

Note 1: Personal information means information that can be used to threaten an individual’s physical, psychological, or financial well-being. This information would include (1) credit card numbers (personal or business); (2) Social Security numbers; or (3) names in combination with physical descriptions or financial, medical, or motor vehicle information. This table does not include inmates who had access to only names and one or more of the following: work or home address or telephone number, name of employer, or job title. For that information, see appendix VI.

Note 2: States with “NR” in each category did not return a questionnaire. We received a questionnaire from Arizona’s privately run facilities. These facilities did not have any inmates who had access to names, addresses, telephone numbers, or other types of personal information. A representative from Ohio’s state-run facilities informed us that inmates involved in data entry work programs did not have access to credit card numbers or Social Security numbers. We did not receive any information from respondents in state-run correctional facilities in Arizona or Tennessee.

Note 3: The numbers shown above represent the maximum numbers of inmates who would have had access to each type of personal information. Some inmates worked on more than one contract. Consequently, as in Oklahoma, totals are not the sum of the number of inmates shown for each contract. Also, we asked respondents for the types of personal information to which inmates had access. However, each inmate may not have had access to all of the types of personal information involved in a contract.

Note 4: According to the questionnaire respondents, the data from Idaho represent the combined information from two contracts, and the data from New Hampshire were combined from five contracts. Illinois’ data represent one contract situated in two geographic locations.

[Table: safeguards reported for work programs in which inmates had access to personal information, by state]

Note 1: Personal information means information that can be used to threaten an individual’s physical, psychological, or financial well-being. This information would include (1) credit card numbers (personal or business); (2) Social Security numbers; or (3) names in combination with physical descriptions or financial, medical, or motor vehicle information.

Note 2: A blank means that the questionnaire respondent did not report using the safeguard in the work program.

Note 3: According to the questionnaire respondents, the data from Idaho represent the combined information from two contracts, and the data from New Hampshire were combined from five contracts. Illinois’ data represent one contract situated in two geographic locations.
California Penal Code, Section 5071: in general, prohibits prison inmates convicted of offenses involving, for example, misuse of a computer, misuse of personal/financial information of another person, or a sex offense from performing prison employment functions that provide such inmates with access to certain types of personal information. See also California Welfare and Institutions Code, Section 219.5 (language similar to the above code section; applicable to juveniles).

New York Pending Assembly Bill 4753 (1999): in general, inmates involved in correctional institution work would be prohibited from accessing, collecting, or processing certain types of personal information. See also New York Pending Assembly Bill 4842 (1999) (language similar to the above bill).

Wisconsin Pending Assembly Bill 31 (1999): would prohibit the Department of Corrections from entering into any contract or other agreement if, in the performance of the contract or agreement, a prisoner would have access to any personal information of individuals who are not prisoners. The state also identified this bill as requiring that such persons in prison work programs disclose that fact before taking any personal information from anyone.

[Table: percentage of fiscal year 1998 correctional industry gross income from personal information contracts, by state]

Note 1: Personal information means information that can be used to threaten an individual’s physical, psychological, or financial well-being. This information would include (1) credit card numbers (personal or business); (2) Social Security numbers; or (3) names in combination with physical descriptions or financial, medical, or motor vehicle information.

Note 2: Dollar amounts were rounded to the nearest thousand. Totals may not add due to rounding. Percentages were rounded to the nearest tenth.

Table footnotes: Less than $1,000. State does not have a breakdown by individual contract.

[Table: number of inmates with access to only names and addresses or telephone numbers, by state]

Note 1: Names and addresses mean names and one or more of the following: work or home addresses or telephone numbers, names of employer, or job titles but no other item that we defined as personal information.

Note 2: States with “NR” in each category did not return a questionnaire. We received a questionnaire from Arizona’s privately run facilities. These facilities did not have any inmates who had access to names, addresses, telephone numbers, or other types of personal information. A representative from Ohio’s state-run facilities informed us that inmates involved in data entry work programs did not have access to credit card numbers or Social Security numbers. We did not receive any information from respondents in state-run correctional facilities in Arizona or Tennessee.

C = Type of work performed by inmates who had access to information through work program contracts; a contract is a formal or informal agreement to produce a specific product or perform a specific service.

S = Type of work performed by inmates who had access to information through support work, which is not associated with a specific contract but consists of tasks, such as order taking or shipping, that supported overall industry work program operations.

C/S = Inmates performed this type of work both on contracts and through support work.

Inmates working in Washington’s correctional facilities have access to names, work addresses, and work telephone numbers only. Wyoming did not designate the type of work performed by inmates.
[Table: safeguards reported for work programs in which inmates had access to only names and addresses or telephone numbers, by state]

C = Safeguard applied to inmates who had access to types of information through a contract, which is a formal or informal agreement to produce a specific product or perform a specific service.

S = Safeguard applied to inmates who had access to types of information through performing support work, which is not associated with a specific contract but consists of tasks, such as order taking or shipping, that supported overall industry work program operations.

C/S = Safeguard applied to inmates who had access to types of information as a result of employment on both contracts and through support work.

Note 1: Names and addresses mean names and one or more of the following: work or home addresses or telephone numbers, names of employer, or job titles but no other item that we defined as personal information.

Note 2: A blank means that the questionnaire respondent did not report using the safeguard.

Note 3: This table does not include inmates who had access to names and addresses or telephone numbers and any other item(s) that we defined as personal information. See appendix III for a list of safeguards that respondents reported using for inmates who had access to personal information.

Personal information was segmented among inmates. Surveillance mirrors, security cameras, restricted work areas, raw materials/supplies control, and random strip searches were employed. Inmates working in Washington’s correctional facilities had access to names, work addresses, and work telephone numbers only.

Dates and descriptions of the incidents reported by questionnaire respondents:

In 1991, while on parole, an inmate used credit card numbers previously obtained from a prison telemarketing work program.

In 1995, an inmate wrote a letter to a Medicare patient identified from information obtained in a data entry work program.

In the mid-1990s, an inmate participating in a work program provided another inmate with a name and address obtained through the work program. The second inmate wrote a letter to the individual whose name and address were provided.

In about 1990, an inmate obtained information, through participating in a data entry work program, about an individual’s medical expenses and wrote the individual a letter.

In 1995, two inmates attempted to smuggle copies of birth certificates obtained through a work program out of prison through the U.S. mail system. The birth certificates were sent back to the prison via return mail.

In 1995, an inmate continued to call a particular individual identified through a work program that telemarketed local newspaper subscriptions.

In 1990 or 1991, an inmate used a credit card number, obtained from a work program making motel reservations, for personal purchases.

In the early 1990s, an inmate wrote a letter to an individual identified through a data entry work program and included personal information also obtained through the work program.

In 1997, an inmate sent a Christmas card to an individual identified through a 1-800 information line. The individual had called for information on state parks.

Major contributors to this report: Mary Lane Renninger, Nancy A. Briggs, Geoffrey R. Hamilton, David P. Alexander, Stuart M. Kaufman, and Michael H. Little.
Pursuant to a congressional request, GAO provided information on: (1) the extent to which inmates in the Bureau of Prisons (BOP) and state prison systems had access to personal information through correctional industry work programs; (2) prison safeguards and procedures, statutes and regulations, and proposed legislation that addressed correctional industry work programs involving personal information; (3) the extent to which contracts that provided inmates access to personal information contributed to BOP's and states' correctional industry income; (4) the extent to which BOP and state prison inmates had access to only names and addresses or telephone numbers through correctional industry work programs; and (5) incidents of inmates misusing information obtained through correctional industry work programs, including how safeguards failed and what, if any, changes were made as a result of the incidents. GAO noted that: (1) on September 30, 1998, of approximately 1.2 million inmates, about 1,400 in BOP and 19 state prison systems had access to personal information through correctional industry work programs, based on the questionnaire responses from correctional industry officials; (2) of these 1,400 inmates, about 1,100 had access to names and dates of birth or Social Security numbers; (3) these inmates were performing work, such as data entry, for the federal, state, or local governments; (4) BOP and all the 19 states reported using a variety of safeguards to prevent inmates from misusing the information; (5) the safeguards cited by the largest number of states were close supervision, selective hiring (e.g., excluding inmates convicted of sex offenses or fraud), confidentiality agreements, and security checks at the exits from the work areas; (6) the federal government and seven states in which inmates had access to personal information were identified as having either enacted statutes or had bills pending that related to limiting which inmates could perform work involving personal information; (7) less than one-hundredth of 1 percent of BOP's and no more than 22 percent of any state's fiscal year 1998 gross correctional industry income was generated from contracts that resulted in inmates having access to personal information; (8) six states reported that less than 1 percent of their gross correctional industry income was earned from these contracts; (9) about 5,500 inmates in BOP and 31 state prison systems had access to only names and addresses or telephone numbers through correctional industry work program contracts or support work; (10) the three safeguards that the largest number of states and BOP reported using were similar to those used when inmates had access to personal information--close supervision, security checks at the exits from the work areas, and selective hiring; (11) questionnaire respondents described nine incidents in which inmates misused personal information or names and addresses or telephone numbers obtained from
correctional industry work programs; (12) in four of the nine incidents, inmates removed information from the work areas, either physically or by memorization; and (13) in five of the incidents, the work programs were discontinued.
BLM’s mission is to sustain the health, diversity, and productivity of the public lands for the use and enjoyment of present and future generations. It manages approximately 264 million acres of public land in 28 states—about one-eighth of the land in the United States. It also manages the subsurface mineral resources on another 300 million acres of lands administered by other government agencies or owned by private interests. Public resources managed by BLM include rangelands, timber, minerals, watersheds, wildlife habitats, wilderness and recreation areas, and archaeological and historical resources. The bureau has 210 state, district, and resource area offices that manage over 1 billion paper documents, including land surveys and surveyor notes, records of land ownership, mining claims, and oil and gas leases. According to BLM, most of the paper documents are deteriorating and becoming increasingly difficult to read. During the energy boom in the early 1980s, BLM found that it could not handle the case processing workload associated with a peak in the number of applications for oil and gas leases. It recognized that to keep up with increased demand, it needed to automate its manual records and case processing activities. Thus, in the mid-1980s, the bureau began planning to acquire an automated land and mineral case processing system. The scope and functionality of the planned system changed over the years, ranging from a system to automate paper documents and records and case processing activities to a system that would provide improved efficiency for recording, maintaining, and retrieving land description, ownership, and use information and provide geographic information system (GIS) capabilities. In 1993, BLM decided on the scope and functionality of the Automated Land and Mineral Record System (ALMRS)/Modernization. The bureau designated it a critical system for (1) automating land and mineral records and case processing activities and (2) providing information to support land and resource management activities. The ALMRS/Modernization is expected to more efficiently record, maintain, and retrieve land description, ownership, and use information to support BLM, other federal programs, and interested parties. It is to do this by establishing a common information technology platform, integrating multiple databases into a single geographically referenced database, shortening the time to complete case processing activities, and replacing costly manual records with automated ones. The ALMRS/Modernization consists of the ALMRS initial operating capability (IOC), the geographic coordinate database (GCDB), and the modernization of BLM’s computer and telecommunications infrastructure and rehosting of selected management and administrative systems. These components are described more fully below. The ALMRS IOC is the flagship of the ALMRS/Modernization. With new software and upgraded hardware, it is to provide (1) support for case processing activities, including leasing oil and gas reserves, recording valid mining claims, processing mineral patents, and granting rights-of-way for roads and power corridors and (2) information for land and resource management activities, including timber sales and grazing leases. ALMRS IOC is to replace various manual and ad hoc automated BLM systems currently operating on older mainframe computers. GCDB is the database that is to contain geographic coordinates and survey information for land parcels. Other databases, such as those containing land and mineral records, are to be integrated with GCDB.
ALMRS IOC will tie BLM’s records and land and mineral resource data to the legal descriptions of specific land parcels. The information technology modernization and rehost component consists of installing computer and telecommunications equipment and office automation applications, and converting selected management and administrative systems to a relational database system to be used throughout BLM. Some elements of the ALMRS/Modernization, such as new computer and telecommunications equipment, e-mail, and office automation, were installed at BLM offices from fiscal years 1994 through 1996. The 12 administrative applications have been rehosted and are operational. According to BLM’s latest estimates, the ALMRS/Modernization is expected to cost about $594 million through fiscal year 2002, about 47 percent more than the $403 million estimate provided to the Office of Management and Budget (OMB) in 1993. According to the Assistant Director for Information Resources Management (IRM), the increase is largely due to costs that were not included in the original agreement with OMB, including almost $105 million for technology refreshment. Concerned that BLM might deploy the system prematurely, the House and Senate appropriations committees in fiscal year 1996 directed BLM to (1) test, verify, and validate that ALMRS operates as specified and (2) certify to them that it performs accurately and effectively and provides the expected capabilities prior to deployment. BLM retained a contractor to conduct the independent verification and validation testing and an operational assessment, testing, and evaluation and expects to base its certification to the committees on these tests. In our March 1997 report, we stated that BLM would not be ready to deploy ALMRS until it had completed essential management plans, policies, or procedures to help ensure a successful transition and operating environment. As of February 20, 1998, BLM had not fully implemented the recommendations contained in our March 1997 report. BLM’s efforts to develop a security plan and an architecture, transition plans, and operations and maintenance plans were incomplete. BLM had taken substantial action to establish a configuration management program, but it had not yet produced a credible project schedule. These management tools are essential to manage the remainder of the project, help ensure system availability and performance, and avoid security and operational problems. Security focuses on the ability to ensure the confidentiality, integrity, and availability of stored and processed data. Unsecured or poorly secured systems are highly vulnerable to external and internal attacks and unauthorized use. Security planning includes the identification of high-level security requirements, including mission, management, and technical security requirements; functional security requirements that cover users’ security needs; data-sensitivity analysis to identify data requiring special protection; and a security architecture that describes the security controls and relationships among the various system components. The ALMRS/Modernization security plan should define the policies and procedures for operating and maintaining a secure environment.
In our March 1997 report, we recommended that before deploying ALMRS IOC, BLM develop a system security architecture and plan, including security policies and procedures; disaster and recovery plans; and security test, evaluation, and certification plans to reduce risks to the availability and integrity of stored and processed data. BLM has not yet developed a security architecture. It has developed a security plan, finalized some policies—such as those governing user access to ALMRS/Modernization components—and has been working to complete contingency plans for the state offices and their subordinate district and area offices. Also, in October 1997, BLM conducted a risk assessment for the planned deployment of ALMRS IOC to the New Mexico State Office. In January 1998, ALMRS IOC was certified for operation in New Mexico by the Department of the Interior’s Information Technology Security Manager. Our review of BLM’s security plan and related documents shows that the plan is not based on a documented risk assessment of ALMRS and does not provide sufficient detail to manage the security of ALMRS and its databases. Because BLM has no documented risk assessment of ALMRS, it has no basis for asserting that the system is secure or that the plan adequately addresses the various vulnerabilities and risks attendant to a nationwide client-server system. Also, the risk assessment performed at the New Mexico State Office focused on policies, procedures, and conditions at that office but did not deal with the security of, or assess the vulnerabilities of and risks to, ALMRS. The process of deploying a major information system that people will use to do their jobs requires careful planning. Many of the 210 BLM offices nationwide that will receive ALMRS/Modernization—designed to automate many manual functions—have little or no experience implementing client-server systems. The transition from automated capabilities provided by a centrally managed mainframe system to a locally managed client-server environment requires changes in organizational roles, responsibilities, and interrelationships among the units and people using the system. A transition plan should address these issues and guide BLM in defining new operational procedures. In our March 1997 report, we recommended that before deploying ALMRS IOC, BLM develop transition plans outlining the changes in organizational roles, responsibilities, and interrelationships among the units and people using the ALMRS/Modernization system to reduce the risk associated with those changes. BLM’s National Information Resources Management Center developed the ALMRS Transition/Deployment Plan, dated September 2, 1997, to be used as a guide for deploying the needed upgrades for the hardware and software and transitioning to the ALMRS/Modernization platform and ALMRS IOC. According to a senior program analyst for the ALMRS project, 4 of the 12 state offices have prepared transition plans for their operations and the offices under their jurisdictions. BLM provided copies of the 4 state offices’ plans. Our review of the ALMRS Transition/Deployment Plan showed that while the plan generally addresses transition, its primary focus is on deployment activities. BLM notes that subsequent versions of the plan will provide more transition information to help each state office make use of ALMRS in the most effective and efficient way.
The Assistant Director for IRM told us that the ALMRS Transition/Deployment Plan will be updated to incorporate the recent work of user advisory teams and lessons from final ALMRS testing. Our review of the 4 state offices’ plans showed that only 1 of them identified and addressed transition issues, such as how the state and subordinate offices will deal with oil and gas, mining, and solid mineral business process changes resulting from the implementation of ALMRS. Unless BLM ensures that the revised plans adequately address transition issues, BLM faces increased risks of disruptions to its work processes and impairments to its ability to (1) conduct its land and mineral management business and (2) use ALMRS most effectively. Operations and maintenance of information systems based on a client-server architecture require a large number of highly skilled people. Unlike the centrally managed legacy mainframe systems that have been supporting BLM operations, the ALMRS/Modernization will require management and technical support at each major BLM site. This support includes UNIX system managers, database administrators, user support and telecommunication specialists, and security officers. In our March 1997 report, we recommended that before deploying ALMRS IOC, BLM develop operations and maintenance plans addressing the acquisition, management, and maintenance of managerial and technical support for the ALMRS/Modernization to help ensure successful operations. BLM has developed a draft operations and maintenance plan for the National Information Resources Management Center. This plan describes the (1) routine operations and maintenance services that the National Information Resources Management Center will provide and (2) approach that will be used to provide management and technical guidance necessary for the operations and maintenance of ALMRS. The plan, however, does not address how BLM will provide for operations and maintenance functions at the major BLM sites that will be responsible for operating and maintaining ALMRS on a daily basis. This is critical because BLM will be relying on ALMRS to conduct its business and maintain its official records. The Assistant Director for IRM stated that the state offices are being contacted to ascertain whether they need additional or more specific guidance to meet these responsibilities. Due to the many sites involved and the complexities of the systems, sites will need operations and maintenance plans that clearly describe how they are to fulfill their responsibilities and how these responsibilities will be handled when there are unexpected shortages of qualified staff. Configuration management plans, policies, and procedures are a set of management controls over the composition of and changes to computer and network systems components and documentation, including software code documentation. Configuration management is essential to successfully manage complex information systems and ensure integrity throughout their life cycles. System modifications without the safeguards imposed by the discipline of configuration management could lead to undesirable consequences. For example, they could cause system failures, endanger system integrity, increase security risks, and degrade system performance. 
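The discipline that configuration management imposes can be made concrete with a minimal sketch (illustrative Python; the component names and change request identifiers are hypothetical, and this does not depict BLM’s actual tools): every controlled component carries a recorded baseline version, and a modification is accepted only if it cites an approved change request.

    # Minimal configuration management control: components are registered
    # under a baseline, and a change is applied only when it references an
    # approved change request; anything else is rejected.
    class ConfigurationBaseline:
        def __init__(self):
            self.items = {}                # component name -> approved version
            self.approved_changes = set()  # approved change request IDs

        def register(self, component, version):
            self.items[component] = version

        def approve_change(self, change_id):
            self.approved_changes.add(change_id)

        def apply_change(self, component, new_version, change_id):
            if component not in self.items:
                raise KeyError(component + " is not under configuration control")
            if change_id not in self.approved_changes:
                # An uncontrolled modification: the kind of change that can
                # endanger system integrity and degrade performance.
                raise PermissionError("change " + change_id + " is not approved")
            self.items[component] = new_version

    baseline = ConfigurationBaseline()
    baseline.register("case_processing_app", "2.1")
    baseline.approve_change("CR-105")
    baseline.apply_change("case_processing_app", "2.2", "CR-105")    # accepted
    # baseline.apply_change("case_processing_app", "2.3", "CR-999")  # would raise

The point of the sketch is the gate itself: with no recorded baseline and no approval check, any modification reaches the system unexamined.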
In our March 1997 report, we recommended that before deploying ALMRS IOC, BLM establish a robust configuration management plan and related policies and procedures for establishing a program focused on managing the components of and all changes to all BLM information systems, including systems not related to the ALMRS/Modernization, to ensure successful management and integrity of the ALMRS/Modernization. Our review of the latest configuration management guidance and discussions with project officials show that BLM has taken action to establish a configuration management program. BLM has developed a draft configuration management plan and associated policies and procedures and has taken action to implement them. BLM’s configuration manager estimated that implementation of the configuration management program is about 85 percent complete. Since BLM’s plan is still in draft and actions are not fully completed, we have not yet reviewed the configuration management program. In March 1997, we reported that in its latest schedule, BLM planned to deploy ALMRS IOC in its Arizona, Idaho, and New Mexico offices by the end of fiscal year 1997 and complete the deployment to the remaining states in fiscal year 1998. We stated that BLM might not be able to maintain this schedule because it continued to allow insufficient time between critical milestones to deal with problems that were likely to arise. At that time, BLM’s own project management plans cited concern that milestones were overly optimistic, listed them as a major risk, and stated that the short time frames were influenced by BLM’s desire to begin deploying the system in fiscal year 1997. We recommended that BLM fully update the project schedule, including analyzing human resource usage and task relationships to establish reliable milestones and a critical path to complete the project. Although a complete, current, and accurate project schedule is essential to adequately manage and control the hundreds of tasks remaining to complete the project, BLM has not linked available staff resources to those tasks in developing the ALMRS project schedule. BLM revised the project schedule again in September 1997 without implementing our recommendation and was not able to meet critical milestones. BLM is again revising its plans and milestones, but although it is planning to analyze human resource usage and task relationships in establishing milestones for deployment activities, it is not planning to do so for its schedule to complete, test, and certify ALMRS. Table 1 shows the acceptance testing and deployment milestones BLM is anticipating pending formal revision of its plans and schedule. According to the anticipated milestones, initiation of deployment will be about 9 months behind the schedule in place at the time of our March 1997 report. This represents more than a 2-year delay from the schedule delivered to OMB when the project was approved in 1993. BLM expected to certify to the Appropriations Committees in December 1997 that ALMRS performs accurately and effectively and provides expected capabilities after completing beta testing in November 1997 and operational assessment test and evaluation (OAT&E) in December 1997. However, this milestone was not met because numerous problems were encountered during beta testing that required correction before BLM could begin OAT&E. Also, shortly after beta testing, BLM discovered that data converted from its legacy systems for ALMRS were not reliable because of errors in the conversion software. 
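Errors of the kind BLM identified (missing land descriptions, incorrect associations, accurate conversions written into error files) are typically surfaced by record-level cross-checks between the legacy and converted databases. The following minimal sketch, in Python with hypothetical case identifiers and fields, illustrates such a check; it is not BLM’s conversion or validation software.

    # Post-conversion validation: compare each legacy record with its
    # converted counterpart and report discrepancies. All data are
    # hypothetical.
    legacy_records = {
        "NM-001": {"land_description": "T10N R5E Sec 12", "case_type": "oil_gas_lease"},
        "NM-002": {"land_description": "T11N R6E Sec 3",  "case_type": "mining_claim"},
    }
    converted_records = {
        "NM-001": {"land_description": "T10N R5E Sec 12", "case_type": "oil_gas_lease"},
        "NM-002": {"land_description": "",                "case_type": "mining_claim"},
    }

    def validate(legacy, converted):
        errors = []
        for case_id, source in legacy.items():
            target = converted.get(case_id)
            if target is None:
                errors.append((case_id, "record missing after conversion"))
                continue
            for field, value in source.items():
                if target.get(field) != value:
                    errors.append((case_id, "field '" + field + "' converted incorrectly"))
        return errors

    for case_id, problem in validate(legacy_records, converted_records):
        print(case_id, problem)   # NM-002 field 'land_description' converted incorrectly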
Since then, BLM has been making corrections to resolve the software and other problems and revising final testing plans and milestones. Continuing delays in implementing ALMRS may place BLM at risk of losing information technology support for core business processes because of the imminent Year 2000 computer problem. The following problems emerged during the beta test of ALMRS. BLM encountered unexpected workstation failures and slowdowns caused by insufficient workstation memory and by problems discovered in two BLM-developed software applications that had not been sufficiently tested. BLM also had not yet determined with sufficient certainty how BLM staff will use ALMRS and the expected workload that they will generate in performing their day-to-day duties. A realistic operational usage definition of ALMRS workstations is essential for the design and conduct of OAT&E. After beta testing, BLM converted data from legacy systems in the New Mexico State Office’s jurisdiction to the database management system used in the ALMRS/Modernization and expanded the sample size for testing and validating the data. BLM discovered that some of the data were being converted incorrectly. BLM identified 43 software errors that resulted in missing land descriptions, incorrect associations, incomplete conversions to designated data elements, and accurate conversions being written into error files. BLM estimated that some of these errors will take up to 4 months to correct. According to the project comanager, BLM is analyzing the data conversion problems, performing further testing and validation, identifying those problems that must be corrected prior to performing OAT&E, and correcting the data conversion software. BLM also plans to reconvert and update the New Mexico database and analyze and validate the new database prior to deployment. As a result of problems found during and after beta testing, BLM slipped its schedule to allow time for corrections and to revise its strategy and milestones for OAT&E and independent verification and validation. In conjunction with its OAT&E and independent verification and validation contractor, BLM agreed that 12 conditions need to be satisfied before OAT&E can begin. The conditions include the completion of training manuals and aids for BLM-developed software; establishment of data sharing procedures and a public room plan; establishment of a national help desk; development of a maintenance plan that delineates necessary activities for maintaining the contractor- and BLM-developed software components of ALMRS IOC; and identification of automated access and training requirements for the Minerals Management Service, another part of the Department of the Interior that uses BLM land and mineral data. At the end of our fieldwork, most of the conditions had not been met, and BLM had not made the requisite database corrections for OAT&E. BLM expected to conduct the OAT&E in March 1998, certify ALMRS IOC in April 1998, and deploy ALMRS IOC to the first state office jurisdiction in June 1998. However, the schedule estimates remain unreliable because BLM had not provided for unexpected problems or analyzed human resource usage and task relationships in establishing critical milestones when revising the project schedule, as we recommended in our March 1997 report. The recent and potential future delays in the ALMRS/Modernization program introduce the risk that BLM will lose information technology support for its core business processes because of the looming Year 2000 problem.
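The mechanics of this problem, described in the next paragraph, can be illustrated with a short sketch (hypothetical Python logic, not the code actually running on BLM’s mainframes): with two-digit years, interval calculations go wrong as soon as dates cross the century boundary, and a common repair is a windowing rule that maps two-digit years onto a chosen 100-year span.

    # Two-digit year arithmetic: "00" (meaning 2000) compares as earlier
    # than "98" (meaning 1998), so intervals computed across the century
    # boundary go negative. Hypothetical example, not BLM code.
    def years_elapsed(start_yy, end_yy):
        return end_yy - start_yy

    print(years_elapsed(95, 98))   #  3  -- correct within the 1900s
    print(years_elapsed(98, 0))    # -98 -- wrong: 1998 to 2000 should be 2

    # Windowing repair: interpret two-digit years below a pivot as 20xx
    # and the rest as 19xx.
    def expand(yy, pivot=50):
        return 2000 + yy if yy < pivot else 1900 + yy

    print(expand(0) - expand(98))  # 2 -- correct after windowing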
The Year 2000 problem is rooted in the way dates are recorded and computed in many computer systems. For the past several decades, systems have typically used two digits to represent the year, such as “98” representing 1998, in order to conserve electronic data storage and reduce operating costs. With this two-digit format, the year 2000 is indistinguishable from 1900, 2001 from 1901, and so on. As a result of this ambiguity, computer systems or application programs that use dates to perform calculations, comparisons, or sorting may generate incorrect results when working with years after 1999. BLM has identified two legacy systems supporting its core business processes that are subject to the Year 2000 computer problem. These two mission-critical systems, the Case Recordation System and the Mining Claim Recordation System, are to be replaced by the implementation of ALMRS IOC. BLM presently uses these two systems to create and manage land and mineral case files. They capture and provide information on case type, customer, authorizations, and legal descriptions. Without these systems, BLM cannot create and record new cases, such as mining claims, or update case data. BLM’s initial assessment of the two mission-critical systems shows that the older computer mainframes on which these systems run are date-dependent and may malfunction in the year 2000. These two systems are to be replaced by ALMRS before the year 2000. However, the delays in implementing ALMRS introduce the risk that BLM will be forced to continue using these two systems beyond 2000. To mitigate this risk, BLM is considering upgrading the mainframes on which these two systems run. However, BLM has not yet completed an assessment to determine what this upgrading would entail or developed a contingency plan for key business processes to be supported by these systems in the event that ALMRS is not fully deployed by the year 2000. The BLM Year 2000 Program Coordinator expects this assessment and the resulting contingency plan to be completed in the near future, although we were told that no deadline has been established for these actions. BLM has not fully implemented the recommendations we made in our March 1997 report. It has not yet completed essential plans for system security, transition, operations and maintenance, and configuration management, exacerbating risks that the ALMRS/Modernization will not be successfully implemented and meet operational needs. BLM understands the importance of these essential tools and has been working to develop them. However, until our prior recommendations have been implemented and necessary plans have been completed, approved, and put into place, BLM will not be ready to deploy the system. Continuing delays with the ALMRS/Modernization and the looming Year 2000 computer problem place BLM at risk that core business processes will not be supported beyond January 1, 2000. To reduce the risk that BLM will lose information technology support for core business processes, we recommend that the Director of the Bureau of Land Management (1) direct that the two mission-critical systems ALMRS is to replace be fully assessed to determine what actions are needed to ensure the continued use of these systems after January 1, 2000, and (2) develop a contingency plan to take those actions in the event that ALMRS is not fully deployed by that time. In comments on a draft of this report, the BLM Director stated that he generally agrees with our observations and provided some updated information.
BLM agreed with our recommendation to perform a full assessment of the two mission-critical systems to be replaced by ALMRS and develop a contingency plan to take the needed actions in the event that ALMRS is not fully deployed by the year 2000. BLM stated that it (1) has taken significant steps to implement the six recommendations in our March 1997 report and (2) will implement them before deploying the system. BLM also described some efforts that it believes are indicative of progress to date. We are sending copies of this report to the Secretary of the Interior, the Director of the Bureau of Land Management, the Director of the Office of Management and Budget, and interested congressional committees. We will also make copies available to others upon request. Should you or your staff have any questions concerning this report, please contact me at (202) 512-6253. I can also be reached by e-mail at willemssenj.aimd@gao.gov. Major contributors to this report are listed in appendix III. Our objectives were to assess BLM’s actions to address the recommendations contained in our March 1997 report and identify the status of BLM’s efforts to test, deploy, and implement ALMRS initial operating capability (IOC). To review BLM’s actions to address our recommendations (develop a credible project schedule, configuration management plan, security architecture and security plan, complete transition plans, and complete operations and maintenance plans), we reviewed the ALMRS Project Office’s project management and scheduling procedures; BLM’s National Configuration Management Board’s draft configuration management plan; BLM information technology security plans, ALMRS application security plan, and other security documentation; BLM’s Operations and Maintenance plan for the National IRM Center; and BLM’s Version 2.0 Transition and Deployment Plan and site-specific transition/deployment plans for New Mexico, Idaho, Arizona, and Colorado. We compared revised project milestones with past milestones and remaining project risks. We also reviewed Carnegie Mellon University’s Capability Maturity Model for Software and site readiness review results. To ascertain BLM’s efforts to test, deploy, and implement ALMRS IOC, we reviewed ALMRS/Modernization project documents, weekly activity reports and assessments by the independent verification and validation contractor, system integration meeting minutes, BLM’s exit criteria for system certification, software problem reports, and project management schedules. We also reviewed BLM’s submission to the Department of Interior’s Year 2000 Master Plan and status reports on BLM’s Year 2000 efforts. We attended the Department of the Interior’s October 1997 quarterly review of the development project at the ALMRS/Modernization project office in Lakewood, Colorado, and observed alpha IV testing at the ALMRS/Modernization pilot site offices in Santa Fe, Albuquerque, Farmington, and Taos, New Mexico and beta testing at the ALMRS/Modernization pilot site offices in Santa Fe, Albuquerque, and Taos, New Mexico. We also reviewed the results of alpha IV and beta testing. We discussed the project with prime contractor officials; contractor officials responsible for independent verification and validation and operational assessment testing and evaluation; a senior technical analyst and the Acting Chief Information Officer at the Department of the Interior; BLM’s Assistant Director and Deputy Assistant Director for IRM, and BLM’s ALMRS budget analyst. 
We further discussed the essential management plans with ALMRS project officials responsible for project management and scheduling, configuration management, security, deployment, transition, and operations and maintenance planning; and discussed software development risks, performance problems, planned system capabilities, software problem reports, system testing, and technical complexity with project officials responsible for systems engineering, software development, and testing. We discussed BLM’s Year 2000 efforts with the Bureau’s Year 2000 Program Coordinator. We performed our work at Interior’s information resources management headquarters in Washington, D.C.; BLM headquarters in Washington, D.C.; the ALMRS/Modernization project office in Lakewood, Colorado; the prime contractor’s office in Golden, Colorado; ALMRS pilot site offices in Santa Fe, Albuquerque, Farmington, and Taos, New Mexico; and the independent verification and validation contractor’s office in the ALMRS/Modernization project office in Lakewood, Colorado. The following are GAO’s comments on BLM’s April 13, 1998, letter. 1. This information is summarized in the “Agency Comments” section of the report. 2. In discussing BLM’s comments, the Assistant Director for IRM told us that a primary reason for the increased estimated cost from $403 million to $594 million is that $105 million of technology refreshment costs were not included in the estimate provided to OMB. We revised the report to clarify this point. We also note that technology refreshment costs, as well as the other costs BLM mentioned, are properly a part of life-cycle costs and should have been included in the initially approved $403 million life-cycle estimate provided to OMB. 3. In BLM’s comments on the ALMRS project schedule, it stated that GAO staff has supported placing emphasis on completing a successful pilot as opposed to meeting an artificially derived schedule. We agree that emphasis should be placed on successfully completing all testing, including the pilot test. Testing is an essential part of developing and deploying an efficient and effective system. We also agree that BLM should not try to meet artificially derived milestones. A complete, current, and accurate schedule with tasks linked to available resources is an essential tool to manage and control a large-scale project. This is the primary reason why we have addressed the project schedule in this report and in our two prior reports on ALMRS. The project schedule should have been based on tasks to be completed, resources associated with task completion, and a critical path with sufficient time allotted to deal with unanticipated problems. BLM has not done this. 4. BLM noted that the Configuration Management Plan and program were fully implemented about a month after we completed our fieldwork. As we note in the report, we did not assess the configuration management program during our review because the plan had not been completed and the program had not yet been fully implemented before the end of our fieldwork. 5. As we discuss in the report, the risk assessment performed at the New Mexico State Office focused on policies, procedures, and conditions at that office. The risk assessment did not deal with the security of, or assess the vulnerabilities of and risks to, ALMRS. 
In addition, until a full risk assessment of ALMRS is completed and documented, BLM has no basis for asserting that the system is secure or that the plan adequately addresses the vulnerabilities and risks attendant to a nationwide client-server system. 6. Our review of BLM’s updated transition plans showed that only one of the four plans identified and addressed transition issues. As we discuss in the report, the transition from automated capabilities provided by centrally managed mainframe legacy systems to the locally managed client-server environment of ALMRS will require changes in organizational roles, responsibilities, and interrelationships among the units and people using the system. A transition plan should address these issues and guide BLM in defining new operational procedures. Our concern is that with the complexity of ALMRS and the business process changes it will require, BLM needs to ensure that its transition plans provide the necessary guidance for successful transitions in its 210 state, district, and resource area offices. 7. Operations and maintenance plans are essential for operating and maintaining ALMRS on a daily basis. BLM noted that the states will update the operations and maintenance plans for their sites. In updating their plans, the state offices will need specific information that clearly describes how they are to fulfill their day-to-day responsibilities and how these responsibilities will be fulfilled when there are unexpected shortages of qualified staff. 8. We agree. Beta testing is testing of a prerelease version of software by selected cooperating users in order to uncover problems that were not discovered during laboratory testing. According to BLM, the beta test served that purpose. As we note in our report, these problems, along with data conversion errors, required correction before OAT&E could begin. Beta testing was conducted in November 1997, and OAT&E was scheduled to be completed in December 1997.

Major contributors to this report: David G. Gill, Assistant Director; Mirko J. Dolak, Technical Assistant Director; Keith Rhodes, Technical Director; and Marcia C. Washington, Evaluator-in-Charge.

Pursuant to a congressional request, GAO provided a follow-up assessment of the Bureau of Land Management’s (BLM) actions to address the recommendations contained in its March 1997 report, focusing on the status of BLM’s efforts to test, deploy, and implement the automated land and mineral records system’s (ALMRS) initial operating capability.
GAO noted that: (1) BLM has not yet fully implemented GAO's recommendations to mitigate risks and help ensure a successful transition and operating environment for ALMRS; (2) specifically, BLM does not have a security architecture and sound security plan, complete transition plans, and complete operations and maintenance plans for ALMRS; (3) BLM has developed a draft configuration management plan and has been implementing a configuration management program; (4) however, BLM has not developed a credible project schedule; (5) these tools are essential to manage the remainder of the project, help ensure system availability and performance, and avoid security and operational problems; (6) during beta testing of the ALMRS initial operating capability and validation testing of converted data, BLM identified computer workstation configuration and software problems; (7) the testing also surfaced operation concerns that had not been adequately addressed, such as how ALMRS will support public information needs and data exchanges between BLM and other organizations; (8) BLM is revising its project plan and schedule to address these problems before entering the final testing and certification phase; (9) BLM may not be able to maintain the modified schedule, however, because it: (a) is being developed without analyzing human resource usage and task relationships for predevelopment activities; and (b) contains optimistic timeframes for completing activities, leaving little time to deal with unanticipated problems that are likely to arise; (10) recent and potential future delays in implementing ALMRS place BLM at risk that existing systems supporting mission-critical business processes, which are to be replaced by ALMRS, will be subject to the year 2000 computer problem; and (11) while BLM is planning to provide the upgrades necessary to allow for the continued use of these systems if ALMRS is not fully deployed by the year 2000, it has not yet completed the requisite assessment to determine how to do this.
The Congress wants a strong defense industrial base and has directed the Secretary of Defense to report annually on the ability of industry to support U.S. national security objectives. The Congress directed the Secretary to consider in the analysis for that report such factors as levels of spending for capital investment and research and development. In June 1993, the Department of Defense’s (DOD) Inspector General criticized DOD’s first report as being of limited use in helping congressional leaders make informed judgments because DOD lacked an adequate information system to carry out the assessment. Defense industry representatives also criticized that report. One common criticism was that the majority of the report’s data applied to corporations rather than their defense segments. The second DOD report, released in September 1994, also concentrated on the corporate level rather than individual business units. Other studies of the effects of the spending decline have also assessed the impact at the corporate level rather than the individual business unit. Another shortcoming in many of these assessments was that they measured the severity of the decline in defense expenditures only from the peak years of the 1980s and not from other years in the cycle of defense expenditures as well. Since the World War II drawdown, defense spending has experienced three peaks associated with the Korean War, the Vietnam War, and the Reagan administration military buildup. In fiscal year 1989, defense expenditures reached their highest peacetime level since World War II, exceeding defense spending at the peak of the Korean War and almost matching spending during the Vietnam War. Defense expenditures in fiscal year 1989 were $354.1 billion, but had declined to $274.5 billion by fiscal year 1994, a reduction of $79.6 billion or 22 percent. The Clinton administration is projecting defense expenditures of $224.5 billion in fiscal year 2000, which represents a $129.6-billion, or a 37-percent, decline in defense spending since fiscal year 1989. Figure 1 shows the trend in defense expenditures after the end of World War II. Although most defense authorities agree that the post-Cold War decline in spending is significant, it is comparable to the Vietnam War drawdown. As shown in table 1, the current decline is only 2 percentage points greater than the Vietnam War drawdown, which was spread over an 8-year period, whereas the post-Cold War drawdown is currently projected over a 10-year period. However, unlike the Vietnam War drawdown, defense contractors view the current decline as permanent, not cyclical. Measured against other years, the $260.2 billion in defense spending projected for fiscal year 1995 is about 13 percent greater than the $231 billion expended during fiscal year 1976 and about 4 percent greater than the $251.4 billion expended in fiscal year 1980. Fiscal year 1976 is a significant benchmark because it represents the lowest level in peacetime defense spending since the Korean War. Fiscal year 1980 is significant because it was the year prior to the beginning of the Reagan administration defense buildup. Based on current projections, peacetime defense spending will remain above the fiscal year 1976 level until fiscal year 1998, when spending is projected to decline to $225.1 billion. According to officials of the six business units we visited, the decline in defense spending since the late 1980s has significantly affected their defense sales. 
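The drawdown percentages in this section follow directly from the reported expenditure figures. A minimal Python sketch, using the dollar amounts given above (in billions of current dollars), reproduces the arithmetic:

```python
def percent_decline(base, later):
    """Percentage decline from a base-year expenditure to a later-year figure."""
    return (base - later) / base * 100

fy1989 = 354.1  # FY 1989 peak defense expenditures, billions of dollars
fy1994 = 274.5  # FY 1994 actual
fy2000 = 224.5  # FY 2000 Clinton administration projection

print(f"{percent_decline(fy1989, fy1994):.0f} percent")  # 22 percent through FY 1994
print(f"{percent_decline(fy1989, fy2000):.0f} percent")  # 37 percent projected through FY 2000
```

Applying the same arithmetic in reverse to the fiscal year 1995 projection of $260.2 billion shows it is about 13 percent above the 1976 level of $231 billion and about 4 percent above the 1980 level of $251.4 billion, matching the figures cited above.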
We compared the peak sales by these business units during the mid-to-late 1980s with their sales in 1993 and the latest year projected. Measured from their peak sales years, we found that the business units’ sales decreases ranged from 21 percent to 54 percent through 1993 and that the units were estimating decreases ranging from 50 percent to 73 percent through the latest year projected. The projected weighted average decline across the business units was about 55 percent. Figure 2 shows the actual and projected sales decreases by business unit. Although sales declines from the peak years are significant, several of the business units had sales that were actually lower in 1976 and 1980 than their future projections. Table 2 compares the business units’ forecasted sales with their 1976 and 1980 sales. As shown, two of the business units projected their future sales to be higher than their 1976 sales, and three of the business units projected their future sales to be higher than their 1980 sales. Defense contractors have taken and are continuing to take aggressive actions to reduce spending as a result of post-Cold War sales declines. The following discussion deals with actions taken in the areas of employment levels, IR&D/B&P expenditures, capital improvements, and facilities. The six business units have made large reductions in the number of employees since their peak employment years of the mid-to-late 1980s. Through 1993, the units’ workforce reductions ranged from 30 percent to 76 percent. Through the latest projected year, the units’ estimated reductions ranged from 44 percent to 79 percent. Three of the units projected reductions of over 75 percent, while the other three units projected reductions ranging from 44 percent to 57 percent. Figure 3 provides an overview of actual and projected employment reductions by business unit. Unlike sales, where several of the business units projected higher figures in the future than in 1976 and 1980, all of the units for which data were available projected lower employment levels in the future than they had in 1976 and 1980. Table 3 compares projected employment with the 1976 and 1980 levels. The downward employment trend at these six business units is consistent with the findings of other studies on the private sector defense industry workforce. One report, for example, showed that defense-related private employment had declined from about 3.7 million workers in 1987 to about 2.7 million workers in 1993, which represents a 26-percent employment decline over the period. According to that report, the 20 leading defense contractors had experienced an average employment reduction of 22 percent between 1987 and 1993. Other studies have projected a continuing downward trend in defense employment over the next several years. For example, a report prepared by the Logistics Management Institute for the Defense Conversion Commission estimated that private sector defense-related employment would likely decline by about 803,000 jobs, or 27 percent, from 1992 to 1997. Similar to the reductions in employment levels, the six business units had made substantial cuts in their IR&D/B&P expenditures. Between their peak spending years and 1993, these units had reduced IR&D/B&P expenditures by 31 percent to 71 percent and projected reductions ranging from 41 percent to 84 percent through the latest year projected. Figure 4 shows the actual and projected reductions.
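The “projected weighted average decline” reported in this section weights each unit’s percentage decrease by its share of the units’ combined peak sales, which prevents small units from skewing the average. A minimal sketch, using hypothetical unit-level figures (the report does not disclose the actual dollar amounts), illustrates the computation:

```python
# Hypothetical peak-year and latest-projected sales for six business units,
# in millions of dollars; the actual unit-level figures are not disclosed.
peak_sales = [1200, 900, 750, 600, 450, 300]
projected_sales = [560, 380, 350, 290, 180, 130]

total_peak = sum(peak_sales)

# Weight each unit's percentage decline by its share of combined peak sales;
# algebraically this reduces to the aggregate decline in combined sales.
weighted_avg_decline = sum(
    (peak - proj) / peak * (peak / total_peak)
    for peak, proj in zip(peak_sales, projected_sales)
) * 100

print(f"weighted average decline: {weighted_avg_decline:.0f} percent")  # 55 percent
```

A simple unweighted mean of the six percentage declines would treat a $300 million unit the same as a $1.2 billion one; the weighted form reflects the dollars actually at stake.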
The six business units’ forecasts show that they plan to spend an average of 54 percent less for IR&D/B&P than they spent during the mid-to-late 1980s. However, as shown in table 4, two of the business units projected future expenditures for IR&D/B&P to be more than they spent in 1976. Two other business units forecasted their future expenditures to be more than they spent in 1980. Several studies showed a correlation between the level of defense expenditures and the amounts contractors spend on IR&D. One study, for example, stated that the level of defense procurement directly affects IR&D activities, which are supported to a large extent by overhead charges in production contracts. The report stated that when large production runs were the rule, many companies willingly invested their own funds in IR&D because they could reasonably expect to recover their investment. Another report predicted that, with fewer defense procurements, IR&D payments would decrease and companies might not be willing to risk conducting their own IR&D. To determine whether these reports applied to the business units we visited, we compared the changes in the business units’ spending levels for IR&D/B&P with changes in their sales. For four of the six business units, we found that changes in IR&D/B&P expenditures generally correlated with their sales volume. For illustration purposes, figure 5 compares the trend in one business unit’s sales and its IR&D/B&P expenditures. Because of concerns that the quantity and quality of IR&D would decline as budget cuts forced the defense industry to limit overhead costs, the Congress made substantial legislative revisions to the IR&D program in fiscal years 1991 and 1992 to encourage defense contractors to continue IR&D activities. Even with these revisions, defense contractors have continued to cut their IR&D/B&P expenditures, as their defense sales have declined. The six business units we visited have significantly cut their capital expenditures from their peak spending levels in the 1980s. The units had made reductions through 1993 ranging from 52 percent to 92 percent and estimated reductions ranging from 55 percent to 85 percent through the latest projected year. The units projected a weighted average reduction of 76 percent in their capital expenditures. Figure 6 shows the actual and projected reductions in capital expenditures. Table 5 compares the business units’ forecasted expenditures with their 1976 and 1980 expenditures. When 1976 was used as the base year, three business units projected higher capital expenditures in the future, but when 1980 was used as a base year, one business unit projected higher capital expenditures. As with IR&D/B&P expenditures, there is no consistent trend in capital expenditures. For example, although Company A and Company C projected higher capital expenditures compared to 1976, the companies projected lower capital expenditures when compared to 1980. However, we found that changes in capital expenditures at three of the units correlated with the units’ sales volume. Two business units had formal programs to limit future capital expenditures.
One unit, for example, established the following four categories in which proposed capital expenditures would be approved by management: firm contractual commitments required to keep existing products operational, environmental requirements mandated by law, health and safety requirements to meet Occupational Safety and Health Administration standards, and new product requirements for specific new products. According to a report issued in May 1989 by the Center for Strategic and International Studies, the best measure of the U.S. defense industrial base’s ability to maintain its technological lead is the amount of capital spending in industry to expand capacity or improve productivity. Measured from their peak years, five of the business units had reduced their total square footage by as much as 34 percent through 1993, and five units projected reductions ranging from 6 percent to 43 percent, or an average of 26 percent, through the latest projected year. The units projected most of these reductions in their leased space. Figure 7 shows the actual and projected reductions in total square footage. Despite the past and planned reductions in space occupied, most of the units projected larger square footage than in 1976 and 1980. Table 6 compares the changes in the size of the business units’ facilities. Three business units expected to use more square footage in the future than occupied in 1976; four units expected to use more space than they occupied in 1980. According to Defense Contract Management Command records, defense contractors have significantly reduced the size of their facilities through such actions as vacating and selling buildings and terminating leases. We compiled and compared information on the declines and buildups in defense expenditures since the end of World War II. We also conducted literature searches and examined various reports, assessments, and other documents to determine how defense contractors throughout the industry have been affected by reduced defense spending. The business units provided us with data on their sales, employment levels, capital expenditures, IR&D/B&P, and facilities for 1976 through the latest projected year. We focused our work on these five elements because we believed they were most representative of the impact of reduced defense spending on defense contractor business units. For three of the business units, we were unable to obtain data as early as 1976 and therefore used the earliest data available. We accepted the data provided by the business units and did not attempt to validate the data. In some cases, the organization of the business units has changed since 1976, and the units had to compile or estimate data to reflect their organization since that time. We conducted our review from March 1994 to January 1995 in accordance with generally accepted government auditing standards. We did not obtain DOD comments on this report; however, we discussed the results of our work with contractor representatives from each of the six business units. We are sending copies of this report to the Secretary of Defense, officials of the six business units, and other interested congressional committees. We will make copies available to others upon request. Please contact me at (202) 512-4587 if you or your staff have any questions concerning this report. The major contributors to this report were John K. Harper, George C. Burdette, Anne-Marie Olson, and Amy S. Parrish. David E.
Cooper, Director, Acquisition Policy, Technology, and Competitiveness Issues | Pursuant to a congressional request, GAO examined the impact of the recent decline in defense expenditures on individual business units of major defense contractors, focusing on a comparison of defense expenditures over a number of years and changes in the business units’: (1) sales and employment levels; and (2) spending on independent research and development and bid and proposal (IR&D/B&P) preparation, capital improvements, and facilities. GAO found that: (1) measured from their peak years, the six business units GAO visited had experienced sales decreases ranging from 21 percent to 54 percent through 1993 and estimated declines ranging from 50 percent to 73 percent through the latest year projected; (2) the resulting employment reductions ranged from 30 percent to 76 percent through 1993, with planned reductions ranging from 44 percent to 79 percent through the latest year projected; (3) from their peak year spending levels through 1993, the six units had reduced IR&D/B&P spending by 31 percent to 71 percent and projected reductions ranging from 41 percent to 84 percent through the latest year projected; (4) the six units had also reduced expenditures for capital improvements by an average of 80 percent through 1993 and, through the latest year projected, estimated an average reduction of 76 percent in these expenditures; (5) although these business units have significantly reduced spending in these areas, projections by some of the units are still higher than their: (a) 1976 levels, the lowest peacetime defense spending level since the Korean War buildup; and (b) 1980 levels, the year before the Reagan administration military buildup; (6) the defense industry has adjusted to previous spending reductions; (7) the current post-Cold War reduction is only 2 percentage points greater than the reduction after the Vietnam War and is taking place over a period that is 2 years longer; and (8) however, unlike other drawdowns, defense contractors view the current decline as permanent and have developed a variety of strategies to deal with reduced defense spending. |
In October 1998, the EPA Administrator announced plans to create an office with responsibility for information management, policy, and technology. This announcement came after many previous efforts by EPA to improve information management and after a long history of concerns that we, the EPA Inspector General, and others have expressed about the agency’s information management activities. Such concerns involve the accuracy and completeness of EPA’s environmental data, the fragmentation of the data across many incompatible databases, and the need for improved measures of program outcomes and environmental quality. The EPA Administrator described the new office as being responsible for improving the quality of information used within EPA and provided to the public and for developing and implementing the goals, standards, and accountability systems needed to bring about these improvements. To this end, the information office would (1) ensure that the quality of data collected and used by EPA is known and appropriate for its intended uses, (2) reduce the burden on the states and regulated industries of collecting and reporting data, (3) fill significant data gaps, and (4) provide the public with integrated information and statistics on issues related to the environment and public health. The office would also have the authority to implement standards and policies for information resources management and be responsible for purchasing and operating information technology and systems. Under a general framework for the new office that has been approved by the EPA Administrator, EPA officials have been working for the past several months to develop recommendations for organizing existing EPA personnel and resources into the central information office. Nonetheless, EPA has not yet developed an information plan that identifies the office’s goals, objectives, and outcomes. Although agency officials acknowledge the importance of developing such a plan, they have not established any milestones for doing so. While EPA has made progress in determining the organizational structure of the office, final decisions have not been made and EPA has not yet identified the employees and the resources that will be needed. Setting up the organizational structure prior to developing an information plan runs the risk that the organization will not contain the resources or structure needed to accomplish its goals. Although EPA has articulated both a vision and key goals for its new information office, it has not yet developed an information plan to show how the agency intends to achieve its vision and goals. Given the many important and complex issues on information management, policy, and technology that face the new office, it will be extremely important for EPA to establish a clear set of priorities and the resources needed to accomplish them. Such information is also essential for EPA to develop realistic budgetary estimates for the office. EPA has indicated that it intends to develop an information plan for the agency that will provide a better mechanism to effectively and efficiently plan its information and technology investments on a multiyear basis. This plan will be coordinated with EPA’s agencywide strategic plan, prepared under the Government Performance and Results Act. EPA intends for the plan to reflect the results of its initiative to improve coordination among the agency’s major activities relating to information on environment and program outcomes.
It has not yet, however, developed any milestones or target dates for initiating or completing either the plan or the coordination initiative. In early December 1998, the EPA Administrator approved a broad framework for the new information office and set a goal of completing the reorganization during the summer of 1999. Under the framework approved by the EPA Administrator, the new office will have three organizational units responsible for (1) information policy and collection, (2) information technology and services, and (3) information analysis and access, respectively. In addition, three smaller units will provide support in areas such as data quality and strategic planning. A transition team of EPA staff has been tasked with developing recommendations for the new office’s mission and priorities as well as its detailed organizational and reporting structure. In developing these recommendations, the transition team has consulted with the states, regulated industries, and other stakeholders to exchange views regarding the vision, goals, priorities, and initial projects for the office. One of the transition team’s key responsibilities is to make recommendations concerning which EPA units should move into the information office and in which of the three major organizational units they should go. To date, the transition team has not finalized its recommendations on these issues or on how the new office will operate and the staff it will need. Even though EPA has not yet determined which staff will be moved to the central information office, the transition team’s director told us that it is expected that the office will have about 350 employees. She said that the staffing needs of the office will be met by moving existing employees in EPA units affected by the reorganization. The director said that, once the transition team recommends which EPA units will become part of the central office, the agency will determine which staff will be assigned to the office. She added that staffing decisions will be completed by July 1999 and the office will begin functioning sometime in August 1999. The funding needs of the new office were not specified in EPA’s fiscal year 2000 budget request to the Congress because the agency did not have sufficient information on them when the request was submitted in February 1999. The director of the transition team told us that in June 1999 the agency will identify the anticipated resources that will transfer to the new office from various parts of EPA. The agency plans to prepare the fiscal year 2000 operating plan for the office in October 1999, when EPA has a better idea of the resources needed to accomplish the responsibilities that the office will be tasked with during its first year of operation. The transition team’s director told us that decisions on budget allocations are particularly difficult to make at the present time due to the sensitive nature of notifying managers of EPA’s various components that they may lose funds and staff to the new office. Furthermore, EPA will soon need to prepare its budget for fiscal year 2001. According to EPA officials, the Office of the Chief Financial Officer will coordinate a planning strategy this spring that will lead to the fiscal year 2001 annual performance plan and proposed budget, which will be submitted to the Office of Management and Budget by September 1999. 
The idea of a centralized information office within EPA has been met with enthusiasm in many corners—not only by state regulators, but also by representatives of regulated industries, environmental advocacy groups, and others. Although the establishment of this office is seen as an important step in improving how EPA collects, manages, and disseminates information, the office will face many challenges, some of which have thwarted previous efforts by EPA to improve its information management activities. On the basis of our prior and ongoing work, we believe that the agency must address these challenges for the reorganization to significantly improve EPA’s information management activities. Among the most important of these challenges are (1) obtaining sufficient resources and expertise to address the complex information management issues facing the agency; (2) overcoming problems associated with EPA’s decentralized organizational structure, such as the lack of agencywide information dissemination policies; (3) balancing the demand for more data with calls from the states and regulated industries to reduce reporting burdens; and (4) working effectively with EPA’s counterparts in state government. The new organizational structure will offer EPA an opportunity to better coordinate and prioritize its information initiatives. The EPA Administrator and the senior-level officials charged with creating the new office have expressed their intentions to make fundamental improvements in how the agency uses information to carry out its mission to protect human health and the environment. They likewise recognize that the reorganization will raise a variety of complex information policy and technology issues. To address the significant challenges facing EPA, the new office will need significant resources and expertise. EPA anticipates that the new office will substantially improve the agency’s information management activities, rather than merely centralize existing efforts to address information management issues. Senior EPA officials responsible for creating the new office anticipate that the information office will need “purse strings control” over the agency’s resources for information management expenditures in order to implement its policies, data standards, procedures, and other decisions agencywide. For example, one official told us that the new office should be given veto authority over the development or modernization of data systems throughout EPA. To date, the focus of efforts to create the office has been on what the agency sees as the more pressing task of determining which organizational components and staff members should be transferred into the new office. While such decisions are clearly important, EPA also needs to determine whether its current information management resources, including staff expertise, are sufficient to enable the new office to achieve its goals. EPA will need to provide the new office with sufficient authority to overcome organizational obstacles to adopt agencywide information policies and procedures. As we reported last September, EPA has not yet developed policies and procedures to govern key aspects of its projects to disseminate information, nor has it developed standards to assess the data’s accuracy and mechanisms to determine and correct errors. 
Because EPA does not have agencywide policies regarding the dissemination of information, program offices have been making their own, sometimes conflicting decisions about the types of information to be released and the extent of explanations needed about how data should be interpreted. Likewise, although the agency has a quality assurance program, there is not yet a common understanding across the agency of what data quality means and how EPA and its state partners can most effectively ensure that the data used for decision-making and/or disseminated to the public are of high quality. To address such issues, EPA plans to create a Quality Board of senior managers within the new office in the summer of 1999. Although EPA acknowledges its need for agencywide policies governing information collection, management, and dissemination, it continues to operate in a decentralized fashion that heightens the difficulty of developing and implementing agencywide procedures. EPA’s offices have been given the responsibility and authority to develop and manage their own data systems for the nearly 30 years since the agency’s creation. Given this history, overcoming the potential resistance to centralized policies may be a serious challenge to the new information office. EPA and its state partners in implementing environmental programs have collected a wealth of environmental data under various statutory and regulatory authorities. However, important gaps in the data exist. For example, EPA has limited data that are based on (1) the monitoring of environmental conditions and (2) the exposures of humans to toxic pollutants. Furthermore, the human health and ecological effects of many pollutants are not well understood. EPA also needs comprehensive information on environmental conditions and their changes over time to identify problem areas that are emerging or that need additional regulatory action or other attention. In contrast to the need for more and better data is a call from states and regulated industries to reduce data management and reporting burdens. EPA has recently initiated some efforts in this regard. For example, an EPA/state information management workgroup looking into this issue has proposed an approach to assess environmental information and data reporting requirements based on the value of the information compared to the cost of collecting, managing, and reporting it. EPA has announced that in the coming months, its regional offices and the states will be exploring possibilities for reducing paperwork requirements for EPA’s programs, testing specific initiatives in consultation with EPA’s program offices, and establishing a clearinghouse of successful initiatives and pilot projects. However, overall reductions in reporting burdens have proved difficult to achieve. For example, in March 1996, we reported that while EPA was pursuing a paperwork reduction of 20 million hours, its overall paperwork burden was actually increasing because of changes in programs and other factors. The states and regulated industries have indicated that they will look to EPA’s new office to reduce the burden of reporting requirements. Although both EPA and the states have recognized the value in fostering a strong partnership concerning information management, they also recognize that this will be a challenging task both in terms of policy and technical issues.
For example, the states vary significantly in terms of the data they need to manage their environmental programs, and such differences have complicated the efforts of EPA and the states to develop common standards to facilitate data sharing. The task is even more challenging given that EPA’s various information systems do not use common data standards. For example, an individual facility is not identified by the same code in different systems. Given that EPA depends on state regulatory agencies to collect much of the data it needs and to help ensure the quality of that data, EPA recognizes the need to work in a close partnership with the states on a wide variety of information management activities, including the creation of its new information office. Some partnerships have already been created. For example, EPA and the states are reviewing reporting burdens to identify areas in which the burden can be reduced or eliminated. Under another EPA initiative, the agency is working with states to create data standards so that environmental information from various EPA and state databases can be more readily shared. Representatives of state environmental agencies and the Environmental Council of the States have expressed their ideas and concerns about the role of EPA’s new information office and have frequently reminded EPA that they expect to share with EPA the responsibility for setting that office’s goals, priorities, and strategies. According to a Council official, the states have had more input to the development of the new EPA office than they typically have had in other major policy issues and the states view this change as an improvement in their relationship with EPA. Collecting and managing the data that EPA requires to manage its programs have been major long-term challenges for the agency. The EPA Administrator’s recent decision to create a central information office to make fundamental agencywide improvements in data management activities is a step in the right direction. However, creating such an organization from disparate parts of the agency is a complex process, and substantially improving and integrating EPA’s information systems will be difficult and likely require several years. To fully achieve EPA’s goals will require high priority within the agency, including appropriate long-term resources and the commitment of senior management.
| Pursuant to a congressional request, GAO discussed the Environmental Protection Agency's (EPA) information management initiatives, focusing on the: (1) status of EPA's efforts to create a central office responsible for information management, policy, and technology issues; and (2) major challenges that the new office needs to address in order to achieve success in collecting, using, and disseminating environmental information. GAO noted that: (1) EPA estimates that its central information office will be operational by the end of August 1999 and will have a staff of about 350 employees; (2) the office will address a broad range of information policy and technology issues, such as improving the accuracy of EPA's data, protecting the security of information that EPA disseminates over the Internet, developing better measures to assess environmental conditions, and reducing information collection and reporting burdens; (3) EPA recognizes the importance of developing an information plan showing the goals of the new office and the means by which they will be achieved but has not yet established milestones or target dates for completing such a plan; (4) although EPA has made progress in determining the organizational structure for the new office, it has not yet finalized decisions on the office's authorities, responsibilities, and budgetary needs; (5) the agency has not performed an analysis to determine the types and the skills of employees that will be needed to carry out the office's functions; (6) EPA officials told GAO that decisions on the office's authorities, responsibilities, budget, and staff will be made before the office is established in August 1999; (7) on the basis of GAO's prior and ongoing reviews of EPA's information management problems, GAO believes that the success of the new office depends on the agency's addressing several key challenges as it develops an information plan, budget, and organizational structure for that office; and (8) most importantly, EPA needs to: (a) provide the office with the resources and the expertise necessary to solve the complex information management, policy, and technology problems facing the agency; (b) empower the office to overcome organizational challenges to adopting agencywide information policies and procedures; (c) balance the agency's need for data on health, the environment, and program outcomes with the call from the states and regulated industries to reduce their reporting burdens; and (d) work closely with its state partners to design and implement improved information management systems. |
In 2008, California voters approved Proposition 1A, which authorized $9.95 billion in state bond funding for construction of the California high-speed rail system and connection improvements to existing passenger rail systems. Proposition 1A established several requirements for this high-speed rail system, such as that the rail system must be capable of sustained operating speeds of no less than 200 miles per hour, and once built, must operate without a public subsidy. The planned 520-mile high-speed rail system will operate between San Francisco and Los Angeles at speeds up to 220 miles per hour (see fig. 1). The Authority is the state entity charged with planning, designing, and constructing the California high-speed rail system. The Authority has a nine-member policy board appointed by the California legislature and Governor, and a staff of approximately 55 state employees who oversee, among other things, contracts for environmental review, preliminary engineering design, preliminary right-of-way acquisition tasks, and contractor oversight. Construction of the California high-speed rail project is expected to occur in phases beginning with a 130-mile section from just north of Fresno, California, to just north of Bakersfield, California. Construction will begin in the Central Valley and proceed to other portions of the corridor as funding is available. The Central Valley is the furthest advanced in terms of design and engineering work, as well as environmental reviews. For example, FRA approved a preferred route alignment for the Merced to Fresno, California, portion of the corridor in September 2012. According to FRA, the federally funded portion of the project in the Central Valley has more complete cost estimates than subsequent segments given that preliminary engineering and environmental reviews are complete or nearly complete. Other project segments, however, are in different stages of development and have different levels of information from which to develop cost estimates. In July 2012, the California legislature appropriated $4.7 billion of the $9.95 billion in state bond funds, including $2.6 billion for construction of the high-speed rail project and $1.1 billion for upgrades in the San Francisco peninsula and in the Los Angeles basin (commonly referred to as the “bookends”). The process of acquiring property for the right-of-way and construction has begun. Requests for proposals for construction contracts and right-of-way acquisition services were issued in March and September 2012, respectively. In addition, in January 2013, the Authority awarded a project and construction management contract for the initial phase of California’s high-speed rail project. According to the Authority, a design-build contract for the first construction package (covering approximately 30 miles) is expected to be awarded in June 2013 with construction planned to commence in summer 2013. (See fig. 2). The federal government has committed funding to the project. The FRA awarded the state approximately $3.3 billion in capital construction funds and $231 million for environmental review and preliminary engineering work under the HSIPR program for a total of approximately $3.5 billion. The California high-speed rail project is the largest recipient of HSIPR funds, with about 35 percent of program funds obligated.
Most of the HSIPR money awarded to the project was appropriated by the Recovery Act and, in accordance with governing grant agreements, must be expended by September 30, 2017. In addition, approximately $945 million in fiscal year 2010 funding was awarded to the project by FRA and is to remain available until expended. While the funds remain available until expended under FRA’s fiscal year 2010 appropriation, the governing grant agreements specify the schedule for expenditure of funds. Even though some funding has been committed, additional funding will be needed to complete the project. For example, according to the Authority’s finance plan, over $38 billion in federal funds and over $4 billion from Proposition 1A proceeds will be needed to complete Phase 1 of the project. In addition, the Authority is planning to obtain another $13.1 billion in private-sector capital to help defray the cost of construction after the initial operating segment is completed in 2028. As the federal agency responsible for awarding and overseeing grants to HSIPR applicants, the FRA established guidance outlining requirements and procedures and developed an oversight program to ensure that the project’s goals, objectives, performance requirements, budgets, and other related program criteria are being met. Thus far, FRA’s guidance to HSIPR grant applicants has been limited with respect to developing cost estimates and ridership and revenue forecasts. The Department of Transportation’s Office of Inspector General (DOT OIG) noted that the lack of clear, detailed guidance allows for analyses of widely varying quality, making it difficult to accurately assess whether projects will be viable or require substantial financial support, and has recommended FRA improve its guidance. In addition, we have previously reported that a clear definition of the federal role, goals, and objectives, in conjunction with a robust grant oversight program, is critical to FRA making sound federal investments in high-speed rail projects. The Authority is required to prepare and periodically submit to the state legislature a business plan, which must identify, among other things, ridership estimates, operating and maintenance costs, and the source of project funding. The Authority’s next business plan is expected to be released in 2014. Several groups have been established to review and comment on the estimates presented in the Authority’s business plans. For example, an independent peer review group (PRG) was established in accordance with California law to evaluate the Authority’s funding plans and report its judgment as to the feasibility and the reasonableness of the plans, appropriateness of assumptions, analyses and estimates, and any observations or evaluations it deems necessary. In addition, the Authority convened a Ridership and Revenue Peer Review Panel (Panel) to review the Authority’s ridership and revenue-forecasting process and outcomes and conduct an in-depth review of the models used to estimate ridership and revenue and the forecasts derived from them. The California state auditor is required to periodically audit the Authority’s use of bond proceeds. In response to the initial high estimated cost of building the San Francisco to Los Angeles route—about $98 billion—and other criticisms of the Authority’s November 2011 draft business plan, the project underwent substantial revision for the April 2012 revised business plan.
Most significantly, the Authority scaled back its plans to build dedicated high-speed rail lines over the project’s entire length. Instead, the April 2012 revised business plan adopted a “blended” system in which high-speed rail service would be provided over a mix of dedicated high-speed lines and existing and upgraded local rail infrastructure (entirely at the bookends of the system on the San Francisco peninsula and in the Los Angeles basin). The estimated cost in the April 2012 revised business plan is $68.4 billion. The ridership and revenue forecasts in the April 2012 revised business plan also changed from the November 2011 plan. For example, in the November 2011 draft business plan, the Authority provided low and high estimates of ridership in 2030 of 14.4 million and 21.3 million passengers. In the April 2012 revised business plan, these estimates increased by 12 and 26 percent, respectively, to 16.1 million and 26.8 million. Revenues similarly increased between the two plans from a low and high estimate of $1.05 billion and $1.56 billion in the November 2011 plan to $1.06 billion and $1.81 billion in April 2012. The ridership and revenue estimates increased because, among other things, a “one-seat” service from San Francisco to Los Angeles would begin sooner under the blended approach than under the original approach of solely dedicated lines. However, by 2040, ridership forecasts under the blended approach are lower than under the original full-build approach. The range between the high and low estimates also increased between the November and April plans, reflecting a greater degree of uncertainty in the estimates. We have previously reported that forecasting ridership and revenue is a complex and iterative process and that early stage estimates should be based on the best available data and what is initially known about the proposed project. As additional information becomes available, the Authority’s model used to produce the forecasts is intended to be updated. Development of the high-speed rail system has been controversial, with strongly held beliefs among the project’s numerous supporters and opponents. Supporters have cited the need for high-speed rail to address growing congestion concerns, particularly in the metropolitan areas, and to address future transportation demands. Supporters have noted that California’s population—which is expected to grow from 38 million in 2012 to an estimated 51 million in 2050—and its economic growth will continue to place more demands on California’s transportation infrastructure, requiring that significant new capacity be added to its transportation network. Further, supporters argue that expanding the current network of highways and airports to meet current and future transportation needs would be cost prohibitive and detrimental to air quality, and that high-speed rail will increase economic development in local communities to be served by high-speed rail and generate new jobs. In addition, supporters also note that several critical airport and highway expansions are infeasible due to land constraints, particularly at key airports and urban segments of highways. Opponents of the plan have argued, among other things, that the cost of the high-speed rail system is too great and that future funding for the system is too uncertain given the current fiscal environment.
Opponents have also raised concerns about the credibility of the ridership and revenue forecasts presented in the Authority’s business plans and, specifically, the system’s ability to attract the ridership levels needed to avoid public operating subsidies. Local communities and property owners in California’s Central Valley have also raised concerns about the project and its potential impact on the agriculture sector in the region. As we reported in our December 2012 testimony, the Authority will face several challenges in acquiring rights-of-way in a timely manner, which could lead to construction delays as well as additional project costs. Timely right-of-way acquisition will be critical since some properties are in priority construction zones. Property to be acquired will include homes, businesses, and farmland. Not having the needed right-of-way could cause delays and add to project costs. There are a total of approximately 1,100 parcels to be acquired for the first construction segment, all of which are in California’s Central Valley. According to Authority officials, although the Authority may face challenges in acquiring right-of-way, it has built contingencies for time and cost into its acquisition plan. The Authority estimates that Phase 1 of the high-speed rail project in California will cost $68.4 billion to construct and hundreds of millions of dollars to operate and maintain annually. Since the project’s financing plan, as articulated in the April 2012 revised business plan, will depend on nearly $38.7 billion in additional federal funds, it is vital that the Authority, FRA, and Congress be able to rely on these cost estimates for project planning, funding, and oversight. In addition, because the value of potential private investment depends on the cost of operating the system, it is vital that the Authority and the private sector be able to rely on the operating cost estimate. Given that our past work on high-speed rail projects around the world has shown that project costs tend to be underestimated, ensuring the reliability of the estimates is critical to the success of this project. FRA provided limited guidance to grant applicants, including the Authority, about preparing cost estimates. FRA grant applicants were required to submit detailed capital cost estimates and high-level operating cost estimates, but FRA did not provide guidance on how applicants should produce these cost estimates to help ensure reliability. Moreover, the limited guidance that was provided did not reflect best practices included in our Cost Guide. FRA officials acknowledged that they specified the categories and types of costs to be estimated, not how applicants should prepare these cost estimates. FRA officials told us that they did not provide prescriptive guidance to grantees in preparing cost estimates because of the Recovery Act requirement to begin funding activities quickly following the enactment of the act in February 2009. In addition, FRA noted that the first two rounds of the HSIPR program were open to a wide range of project types and that the level of detail necessary for an individual station is different from the level of detail necessary for a large, long-term corridor program like California’s. According to FRA officials, the Authority’s application complied with the HSIPR grant application requirements.
FRA found the cost estimates to be reasonable based on its comparison of the Authority’s cost estimates (on a unit cost basis) to other rail projects in the United States and abroad. The Authority and its contractor told us that, in the absence of specific guidance on preparing cost estimates, they relied on their professional experience supplemented by available cost-estimating guidance from the Federal Transit Administration (FTA), which they thought was the most applicable guidance available. For example, since FRA’s guidance did not require the Authority to perform an independent cost estimate (this involves a comparison of the Authority’s original cost estimates to those performed by an independent entity), the Authority turned to FTA guidance to provide direction on how and when to conduct an independent cost estimate. We evaluated the Authority’s cost estimates against GAO’s Cost Guide, which details best practices for generating high-quality cost estimates at all levels of government. While not required by FRA, the best practices identified in our Cost Guide help estimators develop reliable cost estimates, which have the four following characteristics: An accurate cost estimate is unbiased, not overly conservative or overly optimistic, and based on an assessment of most likely costs. A credible cost estimate discusses any limitations of the analysis from uncertainty or biases surrounding data or assumptions. A comprehensive cost estimate ensures that costs are neither omitted nor double counted. A well-documented cost estimate is thoroughly documented, including source data and significance, clearly detailed calculations and results, and explanations for choosing a particular method or reference. Ensuring that cost estimates reflect these four characteristics helps minimize the risk of cost overruns, missed deadlines, and unmet performance targets. The Cost Guide also provides criteria for evaluating cost estimates to determine whether they exhibit these characteristics. We have previously applied the Cost Guide in reviewing several transportation and infrastructure projects, and we applied it in our review of the Authority’s cost estimates. The Authority substantially met best practices in our Cost Guide for producing accurate cost estimates, but only partially met our best practices for producing comprehensive, well-documented, and credible estimates. Because the Authority did not follow all best practices, there is an increased risk of cost overruns, missed deadlines, and unmet performance targets. Our assessment of the Authority’s $68.4 billion construction and operating cost estimates for the high-speed rail project is summarized in table 1. Our assessment is discussed in more detail in appendix 2. We found that the Authority substantially met best practices for developing accurate cost estimates. Consistent with best practices, the estimates reflect the new “blended” system, which will rely, in part, on existing rail infrastructure; they contain few, if any, mathematical errors; and they have been adjusted for inflation. Furthermore, the Authority’s contractor used a construction industry database of project costs supplemented with actual bid-price data from other transportation infrastructure projects. While the Authority generally complied with best practices for producing accurate cost estimates, we could not determine whether the estimates were unbiased. This was the only best practice related to accuracy where the Authority fell short.
To help ensure an unbiased estimate, the Cost Guide recommends conducting a systematic analysis of the potential risks to the project and their likelihood of occurring—called a risk and uncertainty analysis. A risk and uncertainty analysis is also a best practice for developing a credible cost estimate, as discussed below. We found that the Authority partially met best practices for producing comprehensive cost estimates. For example, the Authority met the best practice for including in the cost estimates the major components of the project’s construction and operating costs. The construction cost estimate is based on detailed construction unit costs that are, in certain cases, more detailed than the cost categories required by FRA in its HSIPR grant application. However, the operating cost estimate was not as detailed as its construction cost estimate, as over half of the operating costs are captured in a single category called Train Operations and Maintenance. Authority officials told us that they developed their cost estimates consistent with FRA’s guidance, which emphasized greater detail on the construction cost estimate and less detail on the operating cost estimate. FRA officials confirmed that they emphasize construction cost estimates because HSIPR grants are required by federal law to fund only the capital costs of a project, not its operating costs. However, sufficiently comprehensive operating cost estimates are necessary to determine the potential profitability of California’s project, a key consideration in attracting the private-sector investment that the Authority is counting on to help complete construction of the project. In addition, the Authority did not follow the best practice that calls for clearly describing certain assumptions underlying the construction and operating cost estimates. For example, Authority officials told us that the California project will rely on proven high-speed rail technology from systems in foreign countries, but it is not clear how the cost estimates were adjusted for applying the foreign technology in California and how these adjustments are reflected in the complete project cost estimates. California’s speed or safety requirements for the technology may differ from those of other systems. Authority officials said that they produced a thorough description of the technical requirements for high-speed rail, but we were unable to see how this document was linked to the cost estimates, that is, how these technical requirements will affect the project cost estimates. Without comprehensive cost estimates, it is not possible to independently assure that all cost-influencing factors and assumptions were considered. We found that the Authority partially met best practices for producing well-documented cost estimates. In many cases, the methodologies used to derive the construction cost estimate were well documented, but in other cases the documentation was more limited. For example, while track infrastructure costs ($23.6 billion in 2011 dollars) were thoroughly documented, costs for other elements, such as building new stations and acquiring trains ($3.2 billion in 2011 dollars), were not supported with sufficient documentation to identify how these costs were developed and what costs were included or excluded. Authority officials told us that since station locations and train technology are not yet finalized, they used a higher-level cost estimate.
Additionally, we were unable to trace the estimates back to their source data and recreate them using the stated methodology. For example, we were unable to identify the basis for how the operating costs from analogous foreign high-speed rail projects were adjusted for use in California. Authority officials said that the operating cost estimate was used at a high level to determine whether the California system will operate with an operating surplus, and they plan to refine the estimate as the project progresses. However, without more detailed documentation, the Authority's cost estimates are more difficult to support, and it may be harder to make changes to the estimates as they are revised because the basis of the original estimate may not be documented. In addition, without more thorough documentation, FRA and other oversight officials cannot replicate and evaluate what the Authority did to prepare its estimates, which potentially exposes the project to cost overruns because the basis for costs is not known. Estimates that lack documentation are not useful for updates or information sharing and can hinder understanding and proper use. We found that the Authority partially met best practices to help ensure the credibility of its cost estimates. Those practices include:

testing such estimates with a sensitivity analysis, such as assessing the effect of changes in key cost inputs;

obtaining an independent cost estimate conducted by an unaffiliated party to see how outside estimates compare to the original estimates; and

conducting a risk and uncertainty analysis.

For the construction cost estimates, the Authority performed a sensitivity analysis for approximately the first 30 miles of construction and obtained an independent cost estimate for the first 185 miles of construction in the Central Valley, but neither covered the entire Los Angeles to San Francisco project. And, as noted under the accuracy discussion, the Authority did not conduct a risk and uncertainty analysis on the cost estimates for any construction segment. Authority officials told us that in the absence of relevant FRA guidance, they followed FTA guidance for these types of evaluations. They noted that FTA guidance recommends conducting sensitivity tests once route alignments are selected and that, thus far, only 30 miles of the project meet those criteria. Similarly, Authority officials told us that they interpreted FTA guidance to require an independent cost estimate for those segments that have passed the 15 percent design milestone, which includes the first two construction segments, from Merced to Fresno and Fresno to Bakersfield. However, because the Authority's cost estimates cover construction of the full system—from San Francisco to Los Angeles—as well as operating costs, sensitivity analyses and independent cost estimates lend greater credibility when they cover the entire project. The methodology of these tests can be altered to reflect the level of design, so a segment that has met only an early level of design can still be evaluated for credibility. Finally, as noted above, the Authority did not perform a risk and uncertainty analysis, which would improve the estimates' credibility by identifying a range of potential costs and indicating the degree of confidence decision makers can place in the cost estimates. For example, the Authority faces the potential challenge of acquiring rights-of-way in a timely manner.
Authority officials told us there are about 400 parcels in the first construction package, about 100 of which are considered potentially at risk for timely delivery for construction. However, without a risk and uncertainty analysis, it is not possible to determine how the cost estimates might be affected by such things as delays in acquiring necessary rights-of-way or having to pay more for property to keep the project on schedule. Authority officials said that they have not yet conducted a risk and uncertainty analysis because FRA did not require one and the FTA guidance to which they turned recommends such an analysis after a final route alignment is selected. In addition, Authority officials told us that they added contingencies to the cost estimates to account for the risk of cost overruns. However, according to our Cost Guide, risk and uncertainty analysis—as with sensitivity analysis and independent cost estimates—should cover the entire cost estimate and can be performed at varying levels of detail commensurate with the level of design. And while contingencies are designed to cover potential cost overruns, based on our review, the Authority's contingencies, which range from 10 to 25 percent for various cost elements, were not calculated from the results of a risk and uncertainty analysis (since none was performed) but rather were based on professional judgment. Without a risk and uncertainty analysis, we cannot be assured that the contingencies are accurately calculated or, more importantly, determine what level of confidence can be placed in the cost estimates. For the operating cost estimate, the Authority conducted sensitivity tests under various ridership scenarios, as recommended by the Panel; however, these tests were designed to measure the ability of the initial operating section to cover operating costs with ticket revenues, not to determine the potential risk factors that may affect the operating cost estimate itself. The Authority also did not compare its operating cost estimate to an independent cost estimate or conduct a risk and uncertainty analysis. The Authority told us that it views the sensitivity tests already conducted, as well as a forthcoming evaluation of operating costs by the International Union of Railways (UIC), as sufficient to meet these requirements. To make its operating cost estimate more comprehensive and better documented, the Authority has contracted with the UIC to evaluate the existing methodology and data and help refine the Authority's estimates. While this evaluation will provide an expert review of the Authority's operating cost estimates, it may not address some of the key practices that ensure credibility. For example, the UIC's evaluation is not expected to result in new, independently produced cost estimates that can be compared to the Authority's original estimates. The quality of any cost estimate can be improved as more information becomes available, and, based in part on evaluations from the PRG, the Authority is taking some steps to improve the cost estimates that will be provided in the 2014 business plan. As noted above, the Authority has contracted with the UIC to evaluate and provide recommendations on its operating cost estimates. While the UIC study will provide additional analysis from a reputable source, it may not address all best practices from the Cost Guide that would help ensure that the operating cost estimate is comprehensive, accurate, and credible.
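To illustrate the mechanics of the risk and uncertainty analysis described above, the sketch below runs a simple Monte Carlo simulation over a few cost elements. The cost elements, dollar ranges, and distribution shapes are hypothetical and chosen only to show how a risk-based contingency and a confidence level can be derived; they are not figures from the Authority's estimate.

```python
import numpy as np

rng = np.random.default_rng(seed=7)
N = 100_000  # number of simulated project outcomes

# Hypothetical cost elements (billions of dollars) with triangular
# risk ranges (low, most likely, high); illustrative values only.
elements = {
    "track_infrastructure": (21.0, 23.6, 30.0),
    "stations_and_trains": (2.8, 3.2, 4.5),
    "systems_and_other": (9.0, 10.0, 14.0),
}

# Sample every element from its distribution and sum the draws to get
# a distribution of total project cost.
totals = sum(
    rng.triangular(low, mode, high, size=N)
    for low, mode, high in elements.values()
)

point_estimate = sum(mode for _, mode, _ in elements.values())
p50, p80 = np.percentile(totals, [50, 80])

print(f"point estimate:        ${point_estimate:.1f}B")
print(f"median simulated cost: ${p50:.1f}B")
print(f"80th-percentile cost:  ${p80:.1f}B")
# A risk-based contingency funds the estimate to a chosen confidence
# level (here the 80th percentile) instead of a flat percentage.
print(f"implied contingency:   {100 * (p80 / point_estimate - 1):.0f}%")
```

A simulation of this kind yields exactly what a judgment-based contingency cannot: a stated probability that the funded amount will cover the final cost.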
Cost estimates should also be updated with actual costs so that they are always relevant and current. Continually updating a cost estimate as a program matures not only results in a higher-quality estimate but also provides an opportunity to incorporate lessons learned. While the Authority was not able to incorporate actual costs because construction had not yet begun, it will have the opportunity to do so once contracts are awarded and actual costs begin to accrue for the initial construction in the Central Valley, which is expected to begin in 2013. The bids for the first 30-mile construction package have been submitted to the Authority and will provide a check on how well the Authority has estimated the costs for this work, as well as more information for its cost estimates for other segments of the project. The Authority's ridership and revenue forecasts to date are reasonable, and the methods used to develop them followed generally accepted travel-demand-modeling practices. In addition, the Authority completed several updates to its ridership-and-revenue forecasting model after the release of the April 2012 revised business plan and also completed several sensitivity analyses to test the reasonableness of its model. However, the Authority will need to complete several additional updates to improve its model and the resulting forecasts for the 2014 business plan. Authority officials stated that they have plans in place to complete several critical updates, including a new travel preference survey and a second-generation travel demand model, but will not be able to complete these improvements in time for the 2014 business plan. Based on our review, we found that the Authority's methods and the model used to produce its ridership and revenue forecasts adhere to generally accepted travel-demand-modeling practices. However, the Authority will need to complete several updates to improve these forecasts for the 2014 business plan. In its April 2012 revised business plan, the Authority forecasts between 16.1 million and 26.8 million passengers per year and annual revenues of $1.06 billion to $1.81 billion in 2030. These forecasts were derived from a statewide ridership and revenue model developed under contract to the Metropolitan Transportation Commission (MTC). Developing travel demand and revenue forecasts is difficult in almost every circumstance, across every mode, and for a variety of reasons. As we have previously reported, limited data and information, especially early in a project before specific service characteristics are known, make developing reliable ridership and revenue forecasts difficult. To the extent that early-stage data and information are available, they need to be updated to reflect changes in the economy, project scope, and consumer preferences. In addition, the risk of inaccurate forecasts is a recurring challenge for project sponsors. Research on ridership and revenue forecasts for rail infrastructure projects around the world has shown that ridership forecasts are often overestimated. Although forecasting is inherently risky, reliable ridership and revenue forecasts are critical to accurately estimate the financial viability of a high-speed rail project and determine what project modifications, if any, may be needed. Such forecasts enable policymakers and private entities to make informed decisions about the proposed project and to determine the associated risks when making investment decisions.
In addition, ridership forecasts are critical because they serve as the basis for revenue forecasts. If the California high-speed rail project is unable to generate the necessary ridership and revenue to cover the system's operating costs, the project may not be able to operate without a subsidy, as Proposition 1A requires. Conversely, if forecasts are overly conservative, the state could capture less value from private investment than warranted. As such, it is critical that the Authority's process for developing these forecasts be reliable and provide some assurance that the resulting forecasts reasonably estimate future demand for the system. Unlike the cost-estimating criteria discussed earlier, there are no industry standards or established criteria for developing or evaluating intercity passenger high-speed rail ridership forecasts. FRA has not issued guidance on acceptable approaches to developing reliable ridership and revenue forecasts and has established only minimal requirements and guidance related to the forecast information HSIPR grant applicants must provide. We previously reported that developing guidelines, methods, and analytical tools for credible and reliable ridership forecasts is necessary to ensure equitable consideration of high-speed rail as a potential option to address demands on the nation's transportation system. We recommended that the Secretary of Transportation develop guidance and methods for ensuring the reliability of ridership and other forecasts used to determine the viability of high-speed rail projects and support the need for federal grant assistance. The DOT OIG has also recommended that FRA develop specific and detailed guidance for the preparation of HSIPR ridership and revenue forecasts. According to FRA officials, they are in the process of developing an oversight plan that will include criteria for evaluating ridership forecasts. FRA officials indicated that they intend to use the DOT OIG's HSIPR Best Practices: Ridership and Revenue Forecasting guide as a starting point for developing this guidance. For the purposes of our assessment, we identified generally accepted travel-demand-modeling practices for high-speed rail projects from a variety of sources and developed criteria based on these practices to assess the reasonableness of the approach used to create the ridership and revenue models for the California high-speed rail project. In developing our criteria, we relied primarily on a 2011 report on best practices related to travel demand modeling, prepared for the DOT OIG by the firm Steer Davies Gleave. We also examined other literature on developing rail ridership and revenue forecasts to supplement information in the Steer Davies Gleave report. Specifically, we reviewed, among other sources, our prior GAO reports, Federal Highway Administration (FHWA) and FRA guidance, and academic literature. (See app. I for a list of the guidance and reports we reviewed.) From our review of these reports and other sources, we identified common approaches to developing ridership and revenue forecast models and elements affecting the validity of those models. We identified seven key steps of the ridership-and-revenue forecasting process and then compared the Authority's process for completing these tasks to generally accepted travel-demand-modeling practices.
We found that the Authority followed generally accepted travel-demand-modeling practices for each of the seven key steps: (1) developing trip tables, (2) determining and applying service characteristics, (3) developing mode choice models, (4) estimating induced travel, (5) estimating expected fare revenue, (6) conducting sensitivity analysis, and (7) conducting validation testing. (For a detailed description of generally accepted travel-demand-modeling practices and the Authority's process for completing each of these steps, see app. III.) The Authority's process for developing trip tables and collecting and compiling data on current travel patterns followed generally accepted practices. A central task of the ridership forecasting process involves collecting and compiling data on current travel patterns along the proposed high-speed rail route into trip tables. We found that the Authority followed generally accepted practices for developing trip tables. For example, trips were distinguished by mode of travel (auto, air, rail), trip purpose (commute, business, recreation, and other), and trip length (over and under 100 miles), as is general practice. Various data sources were used to develop the base-year trip tables, including, among others, 2000 Census Bureau data, survey data from a 2005 travel survey and a 2011 long-distance travel survey conducted by Harris Interactive, and existing regional models used by metropolitan planning organizations (MPOs). Overall, the data sources used for developing the trip tables were consistent with generally accepted standards. One potential limitation, however, is that the 2011 survey sample was not selected at random from among California residents but rather was limited to individuals who had opted to join an online survey panel. The American Association for Public Opinion Research (AAPOR) has recommended against the use of such panels, often called "opt-in" survey panels, when accurate population estimates are needed, due to concerns about data quality and the possibility that a panel may differ from the intended target population in unknown ways. Data were weighted to adjust for differences between the survey sample and the California population on four characteristics—geographic location, age, wealth, and employment status—but the data may still not be representative of the California population on other characteristics related to travel behavior. The Authority's process for determining and applying service characteristics followed generally accepted practices; however, as detailed service plans are finalized or scenarios are changed, the model will need to be updated to reflect revised service characteristics. Ridership forecasts require information on the service characteristics (such as travel time and fares) of competing modes of travel, such as automobile and air travel, along the proposed route. We found that the Authority collected and considered relevant service characteristics and used appropriate data sources, including information on time, cost, and other service characteristics for each interregional mode—auto, conventional rail, and air—based on the most current published or observed data. High-speed rail service characteristics were defined based on the initial service plans and fare structure because published or observed data do not exist.
Some or all of the high-speed rail characteristics will likely change as service plans are finalized and engineering decisions are made, and those changes can significantly affect ridership and revenue forecasts. This was illustrated in a 2012 sensitivity analysis completed by the Authority in which service characteristics for the proposed high-speed rail system were adjusted to reflect reduced service on the San Francisco Peninsula, which is likely under the current "blended" approach whereby the high-speed rail system will share tracks with Caltrain. This adjustment reduced ridership and revenue forecasts by 11 percent and 13 percent, respectively, compared with the forecasts in the April 2012 revised business plan. Updated representations of the base-year and forecast-year levels of service characteristics will be important for producing realistic ridership forecasts in the future. The Authority's process for developing its mode choice model also followed generally accepted practices. A mode choice model estimates how travelers choose among available modes and can be based on travelers' actual behavior reported in a survey (revealed preference data), on hypothetical situations presented to travelers in a survey (stated preference data), or both. There are advantages and disadvantages to using revealed preference and stated preference data. Revealed preference data provide information on travelers' actual choices made in a specific market. However, according to the Steer Davies Gleave report, when such data are collected from travel surveys, respondents' answers may be biased by a desire to justify their chosen mode. In addition, since true high-speed rail does not exist in the United States, it is not possible to use revealed preference data alone to determine how American travelers would actually use high-speed rail. To address this problem, stated preference data have been used in high-speed rail studies to assess likely traveler responses to a new service. According to the Steer Davies Gleave report, while stated preference data can provide detailed information about a traveler's likely responses to modes or services that do not currently exist, this type of data may also exhibit bias; specifically, survey respondents may respond favorably to a hypothetical new mode, when in reality it may be more difficult to change habitual behavior. According to the report, the limitations of each type of data can be mitigated by taking various steps, such as combining revealed preference and stated preference data. The primary source of data for the Authority's mode choice model was a revealed preference and stated preference survey of air, rail, and auto passengers conducted at airports, at rail stations, and by telephone from August to November 2005. However, in its July 2011 and May 2012 reports to the Authority, the Panel reported that the Authority's main mode choice model was based solely on stated preference responses and recommended in its May 2012 report that the Authority collect new survey data and use both revealed and stated preference data in developing a new mode choice model. Authority officials stated that they are currently developing a new revealed preference and stated preference survey, which they plan to begin administering in early 2013.
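Mode choice models of the kind described above are commonly specified as multinomial logit models, in which each mode's forecast share depends on its travel time, cost, and other attributes, with coefficients estimated from revealed and stated preference data. The sketch below shows the basic mechanics; the coefficients and service characteristics are invented for illustration and are not taken from the Authority's model.

```python
import numpy as np

# Illustrative utility coefficients; a real model estimates these
# from revealed and stated preference survey data.
BETA_TIME = -0.035   # utility per minute of travel time
BETA_COST = -0.012   # utility per dollar of fare or operating cost
ASC = {"auto": 0.0, "air": -0.6, "hsr": -0.4}  # mode-specific constants

# Hypothetical door-to-door service characteristics for one
# Los Angeles-San Francisco trip: (minutes, dollars).
service = {
    "auto": (360, 120.0),
    "air":  (210, 98.0),
    "hsr":  (200, 81.0),
}

def mode_shares(service):
    """Multinomial logit: share of mode m is exp(V_m) / sum_k exp(V_k),
    where V is the systematic utility of each mode."""
    v = np.array([
        ASC[m] + BETA_TIME * t + BETA_COST * c
        for m, (t, c) in service.items()
    ])
    expv = np.exp(v - v.max())  # subtract the max for numerical stability
    return dict(zip(service, expv / expv.sum()))

for mode, share in mode_shares(service).items():
    print(f"{mode}: {share:.1%}")
```

Because forecast shares respond directly to the time and cost inputs, changes to service plans, fares, or competing-mode assumptions flow straight through to ridership, which is why the model updates discussed below matter.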
We discuss the new survey in further detail in the next section. In addition, academic experts from the University of California Berkeley's Institute of Transportation Studies (ITS) previously reviewed the ridership and revenue model and identified limitations in the Authority's method of applying a statistical model to the survey data. (See D. Brownstone, M. Hansen, and S. Madanat, Review of Bay Area/California High-Speed Rail Ridership and Revenue Forecasting Study, University of California Berkeley's Institute of Transportation Studies, UCB-ITS-RR-2010-1, June 2010.) These data came from a choice-based sample, meaning that survey respondents were selected for having already chosen a mode of transport (air, rail, or auto). The Authority used a conventional method of applying a statistical model to such data, but a newer method has been identified in a recent research paper, and according to some academic experts, using it could be an improvement. The Panel also reviewed this issue in its May 2012 report and stated that while it does not see the non-use of this new method in the Version 1 model as an important defect, the issue is worth investigating as the Authority continues to refine the travel demand model. The Authority followed generally accepted practices when producing induced travel estimates. Induced travel refers to trips that occur as a result of the high-speed rail project and that might not otherwise have been made using existing modes; in general, these are new trips generated because a new travel mode exists. The Authority estimated induced travel for the California high-speed rail project to be, on average, 2 percent of total high-speed rail trips. According to the Steer Davies Gleave report, based on its review of forecasts of proposed U.S. high-speed rail systems, an upper limit on induced travel of approximately 10 percent of total high-speed rail trips is widely accepted for such systems. Steer Davies Gleave also reviewed actual induced travel for high-speed rail systems outside the United States and found that it ranged from 6 to 27 percent. The Authority's 2 percent estimate therefore appears conservative and reasonable when compared with both proposed U.S. systems and actual induced travel on foreign systems. The Authority followed generally accepted practices when estimating expected fare revenue for the California high-speed rail project. Expected fare revenue is a product of forecast ridership and average fares. High-speed rail fares are based on a boarding fare plus a per-mile fare for interregional trips. For example, travel from Los Angeles to San Francisco would carry an average one-way high-speed rail fare of $81 in 2010 dollars, or 83 percent of average 2009 airfares between the two cities. Forecast ridership is affected by high-speed rail fares, and revenue is an output of the ridership and revenue model. According to the Authority, the annual ridership needed to break even when the Phase 1 blended system opens in 2029 is 6.1 million, or 23 percent of the high forecast. Authority officials stated that they did not produce a revenue-optimization forecast—that is, a ridership forecast that would maximize revenue—in producing these fare estimates and acknowledged that they will need to do so in the future when meeting with potential private high-speed rail operators, which will establish their own revenue-maximizing fares for the system. According to the Authority, it will not operate the high-speed rail system; a private operator is expected to serve as a contract operator of the system and, as such, will be expected to assume all revenue risks of the project (including setting fares).
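The fare and break-even figures above can be checked with simple arithmetic. In the sketch below, the implied airfare is backed out from the reported 83 percent ratio and is therefore an inference rather than a published figure.

```python
# Worked check of the fare and break-even figures reported above.
hsr_fare = 81.00                   # average one-way LA-SF fare, 2010 dollars
implied_airfare = hsr_fare / 0.83  # inferred average 2009 LA-SF airfare

high_forecast = 26.8e6    # high annual ridership forecast (passengers)
breakeven_riders = 6.1e6  # annual riders needed to cover operating costs

print(f"implied average airfare: ${implied_airfare:.2f}")
print(f"break-even share of high forecast: "
      f"{breakeven_riders / high_forecast:.0%}")  # -> 23%, as reported
```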
The Authority's process for calculating expected fare revenue adheres to generally accepted practices; however, as other factors and inputs change, total expected fare revenue will likely also change. For example, over time it will be important for the Authority to monitor changes in airfares, gasoline prices, and other key assumptions and incorporate these changes, as necessary, into future revenue forecasts. The Authority followed generally accepted practices when conducting sensitivity analysis of key model assumptions for the California high-speed rail project. A sensitivity analysis is typically conducted by varying key model assumptions, such as socioeconomic data, types of trips taken, gasoline and auto fleet efficiency, and airfares, or parameter values, to determine how the model behaves in response to these changes. The Authority conducted several sensitivity analyses on its ridership and revenue model, most in 2011. For example, the Authority conducted sensitivity analyses that tested key factors such as changes in fuel economy, air and auto travel time, air and auto travel costs, and high-speed rail travel time assumptions. In one analysis, the Authority tested the overall effect of higher auto fuel economy and found that this change reduced ridership and revenue forecasts by 16 and 19 percent, respectively, from the Phase 1 high ridership and revenue forecasts presented in the November 2011 draft business plan. According to the Panel, which reviewed these analyses, the results of the various sensitivity analyses show that the model is appropriately sensitive across the range of variables tested. In addition, the Authority performed a sensitivity analysis of an extreme downside scenario to test the ridership and revenue implications of a series of downside events, such as increased average rail travel time from Merced to the San Fernando Valley (140 minutes instead of 126 minutes), decreased train frequency (3 trains per hour instead of 4 during peak times), lower auto operating costs, and lower airfares (10 percent below actual 2009 average airfares). Based on this analysis, the Authority determined that an extreme downside scenario would be expected to reduce ridership and revenue forecasts by 27 percent and 28 percent, respectively, below the low forecasts for the initial operating section (IOS) in the April 2012 revised business plan. Authority officials said they tested these events using the IOS phase because financial viability will be most fragile during the early stages of ridership. According to the Authority, even these reduced forecasts are sufficient to cover the Authority's estimated operating costs without requiring a public operating subsidy. Authority officials stated that they intend to conduct additional sensitivity analyses going forward.
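A one-factor-at-a-time sensitivity test of the kind described above can be sketched with a toy demand model. The baseline ridership and elasticities below are assumptions chosen for illustration and are not parameters of the Authority's model.

```python
# One-factor-at-a-time sensitivity test on a toy demand model.
baseline_riders = 20.0e6  # hypothetical annual ridership
avg_fare = 81.0           # average one-way fare, dollars

# Assumed elasticities of ridership with respect to each input.
elasticities = {
    "auto fuel cost": 0.30,     # costlier driving -> more rail riders
    "airfare": 0.25,
    "rail travel time": -0.80,  # slower trains -> fewer riders
}

for factor, elasticity in elasticities.items():
    for change in (-0.10, 0.10):  # vary each input by +/- 10 percent
        riders = baseline_riders * (1 + elasticity * change)
        revenue = riders * avg_fare
        print(f"{factor} {change:+.0%}: ridership {riders / 1e6:.1f}M, "
              f"revenue ${revenue / 1e9:.2f}B")
```

Varying one input at a time, as here, isolates which assumptions the forecasts are most sensitive to; a downside scenario instead moves several adverse inputs at once, as the Authority did for the IOS.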
The Authority mostly followed generally accepted practices to validate the ridership and revenue model. Model validation generally involves verifying that the model reflects observed traveler behavior, including total travel, region-to-region travel flows, and observed market shares by mode. In the United States, without another high-speed rail system to use for comparison, model validation is a difficult task. Furthermore, validating the proposed California project against foreign high-speed rail systems is difficult because some of the travel market characteristics in other countries with high-speed rail, such as the cost of driving, may not be comparable. The Authority has taken some steps to validate the model through tests that used data on Amtrak's premium Acela service in the Northeast Corridor (NEC) as input to the California high-speed rail model and compared the output with actual 2008 ridership and 2030 NEC forecasts. The comparison with actual data showed that ridership in the California high-speed rail model under NEC-like conditions in 2008 is 79 percent of actual 2008 NEC ridership. Similarly, the California model's ridership forecast when run with NEC-like service is about the same as projected 2030 ridership on the Acela service. Authority officials told us they believe the results from these tests demonstrate that the ridership and revenue model is reasonably sensitive to speed, frequency, and fares. While the NEC has some characteristics comparable to the proposed California high-speed rail corridor, such as top speeds of up to 150 miles per hour and a route of over 400 miles, the two corridors differ in important ways. For example, population density, congestion, and travel behavior in the two corridors differ, and as such, forecast comparisons should be interpreted with caution. In developing the forecasts for the April 2012 revised business plan, the Authority also revised several model assumptions used in the initial ridership and revenue forecasts presented in the November 2011 draft business plan. Specifically, the Authority revised model assumptions to reflect changes in current and anticipated future conditions for airfares and airline service frequencies, decreases in gasoline price forecasts, and anticipated declines in the growth rates for population, number of households, and employment. Some of the initial assumptions were largely based on pre-2007 data and did not reflect potential effects of the 2007 to 2009 recession. (See table 2 for a summary of the updates completed thus far.) According to Authority officials, this was done to build additional conservatism into the ridership forecasts. Updating model assumptions can help mitigate the risk of overestimating ridership and revenue forecasts—referred to as optimism bias. Biased ridership forecasts are a recurring problem with rail infrastructure projects, and we have previously reported that forecasting ridership and revenue is a complex and uncertain process and that ridership forecasts of high-speed rail projects are often overestimated. Other research on ridership and revenue forecasts for rail infrastructure projects has confirmed that actual ridership is likely to be lower than forecasted unless steps are taken to incorporate more conservative assumptions into the model. For example, a recent study examined a sample of 62 rail projects and found that the ridership forecasts of 53 of them were overstated; actual ridership was, on average, 41 percent lower than forecasted.
Updates to model inputs, such as fuel prices and other projections, are important for updating ridership forecasts for any project; in this instance, the updates resulted in more conservative ridership forecasts. The Authority has plans to complete future improvements to its ridership and revenue forecasts, including completing a new travel preference survey and developing a second-generation travel demand model. However, the Authority will not be able to complete these critical improvements in time for the 2014 business plan. According to Authority officials, the 2014 plan is expected to include, among other updates, updated ridership and revenue forecasts. Two critical updates that peer reviewers and academic experts have recommended are the development of a new 2013 revealed and stated preference travel survey and a second-generation travel demand model, which will make use of the new survey. Although the Authority has begun taking steps to complete both of these tasks, neither the survey nor the second-generation model will be completed in time for the 2014 business plan. The Authority began work on the new revealed and stated preference survey in late 2012. According to Authority officials, design of the survey sample and questionnaire was initiated in December 2012; data collection will not begin until early 2013; and full data collection, cleaning, and preliminary analysis of results are expected to be completed by mid-April 2013. The new survey will include a larger sample of 4,500 respondents, compared with the 3,172 respondents to the 2005 survey. In addition, the revealed preference portion of the data set will be designed and coded to facilitate estimation using revealed and stated preference responses simultaneously, which was not done in the first version of the model. According to the Panel, development of a new survey is critical because it will address several long-term issues that can only be overcome with the collection and analysis of new survey data. The Authority also has plans to develop a second-generation travel demand model; however, according to Authority officials, work on that model will not begin until the ridership and revenue analysis for the 2014 business plan is completed, so the second-generation model will not be ready in time for that plan. According to the Authority, the second-generation model will use data from the new 2013 revealed and stated preference survey to supplement data from the 2005 survey. In addition, the Authority plans to replace the 2011 Harris Interactive long-distance travel survey data with data from the 2012 California Household Travel Survey being conducted by the California Department of Transportation. According to Authority officials, both surveys will be needed to develop the second-generation travel demand model. The Panel, which has released five reports assessing the Authority's ridership and revenue forecasts, has reported that the Authority's forecasts to date are reasonable for planning purposes but has also stated that additional updates and enhancements, particularly the development of a new model, will be critical for future project decision making.
For example, in its most recent October 2012 report, the Panel stated that a second-generation model will be required to meet the Authority's long-term goals of completing detailed planning studies and making key planning and operational decisions on issues such as specific rail alignments, station design requirements, and pricing strategies. While the Authority will not be able to complete a second-generation travel demand model in time for the 2014 business plan, it has begun work on an enhanced model that will be used to produce the ridership and revenue forecasts for that plan. The enhanced model will retain the structure of the original model, but some of the individual model components will be updated; for example, the main mode choice model will use both revealed preference and stated preference results from the 2005 travel survey. According to Authority officials, the enhanced travel demand model will be completed by May 31, 2013. Even if the Authority is not able to complete the major ridership and revenue forecast improvements in time for the 2014 business plan, ongoing disclosure of interim results from model improvements, both before and after the business plan is published, will be important to outside reviewers and the public. Peer reviewers and other groups that have examined the Authority's ridership and revenue forecasts have reported the need for greater transparency in the Authority's analyses. For example, in its January 2012 report, the California State Auditor reported that the Authority's November 2011 draft business plan lacked detail in its presentation of some of the revenue forecasts. Similarly, in its October 2012 report, the Panel advised the Authority to provide summaries on the Authority's website describing, among other things, recent forecasts, key input assumptions used to develop the forecasts (e.g., fuel price trends, socioeconomic growth rates, and changes in household size and structure), and updated service characteristic information. According to Authority officials, documentation supporting the analysis in the 2014 draft business plan will be available on the Authority's website when the plan is made public, and the Authority has generally posted key technical documents as they are made public. The Authority has also developed a work plan for other travel demand model improvements, including plans to complete additional validation testing of model results using data from the NEC. For the 2014 business plan, the Authority is planning to conduct further testing and sensitivity analysis of the ridership and revenue forecasts to examine their sensitivity to reduced service frequencies; changes in alternative fare structures (for example, premium fares for intraregional trips in the San Francisco Bay Area and Los Angeles Basin); and changes to service plans (destinations and schedules); along with other sensitivity analyses aimed at quantifying risks. In its October 2012 report, the Panel also recommended additional sensitivity analyses to be completed on the second-generation travel demand model, including analyses examining the impact of pricing strategies on revenue, the impact of local transit feeder systems on station choice, and the impact of major changes in the roadway network on highway congestion and subsequent mode choice decisions.
In addition, the Authority is planning to conduct Monte Carlo simulations to test numerous potential combinations of assumptions on the forecasts in the 2014 business plan during fiscal years 2013 through 2014, provided the foundational information needed to construct, test, and analyze the simulation and its results is sufficiently developed at that stage of the program. All of these efforts will be important as the ridership and revenue forecasts continue to evolve with the development of the high-speed rail project. The project's funding, which relies on both public and private sources of financing, faces uncertainty about whether those funds can be obtained in a tight federal and state budget environment. The Secretary of Transportation and the Governor of California have committed to funding this project, but obtaining sustained congressional and public support for appropriating additional funds is one of the biggest challenges to completing it. In the latter stages, the Authority will also rely on private-sector financing, but it will need more reliable operating cost estimates and revenue forecasts to determine whether, and to what extent, the system will be profitable, as well as the value of any private investment. The Authority's financing plan recognizes the uncertainty of the current funding environment, so the Authority is building the project in phases and has identified an alternative funding source, though that source is also uncertain. However, delays in obtaining funds as planned will likely lead to project delays and higher construction costs. A summary of funding committed to date can be found in table 3. The Authority is relying primarily on public-sector funding to complete construction of Phase 1 of the project, with $55 billion, or 81 percent of the total construction cost, expected to come from state and federal sources. Heavy reliance on public-sector funding is not unusual for a project of this size. For example, France built its high-speed train system primarily with public-sector funding, and in the United States, the federal-aid highway system continues to be funded by the federal government through the gas tax and, more recently, with transfers from the general fund. The Authority expects to obtain public-sector funds over the life of the project as individual segments are ready for construction. This type of "phased" funding is typical for major transportation infrastructure projects. Table 4 provides a summary of the Authority's funding plan for Phase 1 of the high-speed rail project. The Authority's April 2012 revised business plan relies on approximately $42 billion in federal funding for the project's construction, which includes the $3.3 billion that has already been obligated. The remaining $38.7 billion in federal funds has not been identified in federal budgets or appropriations but would amount to an average of more than $2.5 billion annually over the life of the project's construction. This exceeds the average annual funding made available under DOT's New Starts transit-funding grant program since 2008 (about $1.6 billion per year). Moreover, it exceeds the federal government's average annual appropriations to Amtrak since 2008 (about $1.5 billion per year). Largely as a result of this funding challenge, the Authority is taking a phased approach—planning to build segments as funding is available.
Thus, according to the Authority’s 2012 revised business plan, no additional funding will be needed until 2015 when it hopes to begin construction beyond the first construction segment. Based on our past work on high-speed rail, successful projects require significant and sustained financial commitments from the public sector before private investors will participate, and the Authority’s plan reflects this funding model. For example, in Japan, private investment is contingent on substantial government investment. Other federally- supported transportation programs—like those for highway and certain transit infrastructure—rely on a dedicated revenue source for their funding and allow for multi-year funding agreements for eligible projects. In contrast, the HSIPR program has not been funded with a dedicated revenue stream, but from the general fund, a process that means that the program has to compete for appropriations with other discretionary programs. In addition, the HSIPR program has provided one-time grants and, as currently structured has not awarded multi-year agreements for grantees. Our 2013 High-Risk Series report identified the use of general funds for high-speed rail projects as a challenge to project completion. Given that the HSIPR grant program has not received funding since 2010 and that future funding proposals will likely be met with continued concern about the general level of federal spending, the largest block of expected funding for the California project is uncertain. The Authority is also relying on a total of about $8.2 billion in state high- speed rail bond proceeds (which includes the $3.7 billion that has been appropriated to date) and another $5 billion in locally-generated funds for The proceeds from the high-speed rail bonds the project’s construction. are dedicated, in that they can only be used for this project and do not have to compete with other budgetary priorities. However, the remaining $4.5 billion will have to be appropriated to the project. The $5 billion in local funds—most of which are expected at the end of the project’s construction timeline—have not been committed by local entities yet. Authority officials told us that these local funds could include revenues derived from property development in and around high-speed rail stations or improved service on existing transit corridors. For example, planned improvements on the Caltrain corridor may result in increased ridership; if so, a portion of increased revenues could be earmarked for high-speed rail construction. According to their April 2012 revised business plan, the Authority is planning on applying $8.2 billion of the $9.95 billion in state high-speed rail bond proceeds to project construction. Some $950 million of the remaining $1.75 billion will be used for transportation projects that will connect to high-speed rail. And, the other $800 million will be used for environmental, planning, and support costs. In addition to the challenges of obtaining public-sector funding, the Authority may face challenges in attracting private-sector funding if its’ operating cost estimate and ridership forecasts prove to be optimistic. The Authority expects that once the initial operating segment is operational in 2023, it will generate a profit that would be attractive to private investors. The Authority is planning to raise approximately $13.1 billion by selling an operating concession to a private firm or consortium of firms.complete construction of the system. 
The Authority plans to use the proceeds of this sale to help complete construction of the system. Our past work on high-speed rail systems has shown that private-sector investment is easier to attract after the public sector has made a substantial capital investment in the system. The Authority's plan is consistent with this funding approach; however, to successfully attract private investment in this project, the Authority will have to meet two significant milestones: complete construction of the IOS (which will require at least an additional $25 billion of investment from public sources) and demonstrate that the IOS can operate at a profit. Public-private partnerships for intercity passenger rail, such as what the Authority is planning, have been proposed but not implemented in the United States. However, in other transportation sectors, public-sector infrastructure owners have used public-private partnerships to incorporate private-sector operating expertise and encourage private investment. For example, the state of Indiana raised $3.8 billion in 2006 by arranging a private operating concession for its existing Indiana Toll Road. In addition, according to DOT, the city of Denver, Colorado, raised nearly $100 million from private sources to help finance a $2 billion expansion of its transit and commuter rail project. While the private sector can provide needed funding and management expertise to a transportation project, this approach is not risk-free. As we have previously reported for highway projects, public-private partnerships can also present trade-offs, such as the risk that the private operator will demand more revenue from users (e.g., tolls) than initially expected. Other nations' experiences with high-speed rail indicate that under certain circumstances the private sector can operate these systems and that they can potentially be profitable on an operating basis. For example, Japan sold certain high-speed rail lines to private operators and does not provide operating subsidies to these firms. And in 2010, Britain sold an operating concession for its High Speed 1 line to a consortium of private investors for approximately £2 billion (about $3.2 billion). Private firms have also expressed interest in operating high-speed rail projects in the United States. For example, according to FRA, several private consortia were preparing to submit bids on a HSIPR-funded project between Tampa and Orlando, Florida; however, public-sector support was withdrawn when the governor canceled the project, which precluded private-sector investment. And in Texas, a privately financed high-speed rail project failed when the investors encountered financial difficulties. Authority officials told us that they have met with a number of private firms and high-speed rail operators that have expressed interest in California's project but have not entered into any agreements since the project has not yet been built. Attracting private investment may require not only up-front public investment but also revenue guarantees, or public guarantees of a minimum level of income to the project regardless of ridership levels. Such guarantees would reduce the risk for private operators, and therefore their cost of raising capital, but according to the Authority, a revenue guarantee is considered a type of operating subsidy that is barred by the legislation that authorized the state high-speed rail bonds (Proposition 1A bonds).
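One way to see why reliable operating cost estimates and revenue forecasts matter to private investors is to treat a concession's sale price as roughly the present value of the operating surpluses the operator expects to keep. The sketch below is illustrative only; the term, first-year surplus, growth rate, and discount rate are all assumptions, and the Authority's $13.1 billion figure reflects its own forecasts and transaction terms.

```python
# Illustrative present-value calculation for an operating concession.
surplus_year1 = 0.9e9   # assumed first-year operating surplus, dollars
growth = 0.02           # assumed annual real growth in the surplus
discount_rate = 0.08    # assumed private investor's required real return
term_years = 30         # assumed concession term

value = sum(
    surplus_year1 * (1 + growth) ** (t - 1) / (1 + discount_rate) ** t
    for t in range(1, term_years + 1)
)
print(f"illustrative concession value: ${value / 1e9:.1f}B")
# Lower surpluses, later operating dates, or higher required returns
# all reduce what a private bidder would pay up front.
```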
Private-sector investment in the California high-speed rail project, if any, will ultimately be determined by the profitability of the system—that is, the extent to which operating revenues exceed operating costs. The Authority currently estimates an operating profit in the first and all subsequent years of operation. However, this estimate is only as reliable as the underlying operating cost and revenue forecasts. As discussed earlier, the Authority's current ridership and revenue forecasts are reasonable for planning purposes, but further refinements will be required as the project continues to evolve, and the Authority's current operating cost estimates will also need to be improved. Accordingly, both cost and ridership forecasts will change before the initial operating segment is completed in 2022, making the future value of potential private funding uncertain at this time. The Authority comprehensively identified the key economic impacts that could result from the high-speed rail project, including user and non-user impacts, consistent with FRA guidance and other federal requirements. FRA guidance, as contained in the program Notices of Funding Availability (NOFA), requires HSIPR applicants to identify the potential benefits and costs of proposed projects with a focus on a public return on investment. A public return on investment includes a project's potential to deliver transportation, economic recovery, and other benefits. To assist in project evaluation, FRA encouraged HSIPR applicants to provide an economic analysis that quantified the monetary value of user benefits and, if available, public benefits. However, according to FRA, the program NOFAs did not explicitly require either a formal benefit-cost analysis (BCA) or preparation of an economic impact analysis (EIA). FRA officials said their review of economic impacts was based on a reasonableness test—that is, whether economic impacts were identified and whether the assumptions behind them were reasonable. The officials said that FRA did not have the time or resources to conduct an in-depth analysis and that a reasonableness test provided increased assurance as to the potential economic impacts of project proposals. Projects awarded funding must also comply with the National Environmental Policy Act (NEPA) and its implementing regulations. NEPA requires that government agencies undertaking a major federal action (such as providing grant funding) with significant effects on the environment prepare an analysis of the environmental impacts of the proposed action, including a discussion of alternatives to the proposed action. Under FRA's guidelines for considering environmental impacts, the impacts to be considered in a NEPA environmental assessment include such things as land use and potential economic effects on existing business districts and metropolitan areas. The program NOFAs required HSIPR applicants seeking funds to develop new high-speed rail corridors and intercity passenger rail services to complete a NEPA review, and the June 2009 program NOFA required applicants to present information providing a business and investment justification that contained project cost and benefit estimates.
In assessing the Authority's identification of economic impacts, we focused in particular on best practices contained in the report HSIPR Best Practices: Public Benefits Assessment (Steer Davies Gleave, June 2011), which was prepared for the DOT OIG in conjunction with the OIG's March 2012 report, FRA Needs to Expand Its Guidance on High-Speed Rail Project Viability Assessments, CR-2012-083 (March 28, 2012). The Steer Davies Gleave report included a list of components that would be included in a public benefits assessment of high-speed rail projects; these components were similar to the economic impacts identified through a review of the HSIPR NOFAs. They include both user impacts, such as benefits to riders of the system, and non-user impacts, such as the effects improved transportation connectivity may have on allowing firms to access larger labor or product markets or on increasing the labor supply because people can more easily access jobs. The latter are included in what is termed "wider economic impacts." FRA's requirement for a public return on investment from HSIPR projects included aspects of both user and non-user impacts. The best practices report noted the importance of both ridership and revenue forecasts and cost estimates in determining public benefits. In particular, it stated that a public benefits assessment depends heavily on ridership and revenue forecasts and the implications these have for the project's impacts on travelers and the general population. Similarly, operating, maintenance, and capital cost estimates were also identified as important elements of public benefits assessments. The Authority's April 2012 revised business plan identified the primary user and non-user economic impacts of the California high-speed rail project (see table 5). For example, the plan identified potential user impacts such as travel time reliability for high-speed rail users and non-user impacts such as effects on highway congestion and economic development around stations. In addition to the business plan, the Authority prepared a BCA that provided a more detailed analysis of both the user and non-user impacts included in the revised business plan. The Authority also prepared an EIA that focused on those other economic impacts of the system that do not fall into the BCA framework; according to the Authority, the EIA presented longer-term impacts on California's economy from building the high-speed rail system. Although the Authority comprehensively identified the primary potential economic impacts, it is too early to determine specific economic impacts, since these will depend on a number of factors, including the following: Future project decisions. The California high-speed rail project is in its early stages of development, and a number of project decisions have yet to be made, including the final alignment of train routes, some station locations, and the type and frequency of service. Decisions such as these can be expected to have a bearing on potential economic impacts. For example, route alignments and station locations can affect economic development. While acknowledging that the extent to which high-speed rail would change California's economic landscape is not fully understood, the EIA suggested, based on studies in other countries, that the main economic impacts from high-speed rail in California will likely occur in areas within 2 hours of major economic centers, such as the San Francisco Bay Area and Los Angeles. However, the EIA also concluded that the greatest volume of redevelopment attributable to high-speed rail will likely occur in major metropolitan areas and that the Central Valley could see moderate clustering of development around stations.
Although economic development around stations offers the potential for economic impacts, achieving such development may be subject to a number of factors, and certain impacts may not be easy to identify. In July 2012, we reported on the potential for economic development associated with bus rapid transit projects. We found that in the five case study locations we examined, although the bus rapid transit project was having some positive effect on economic development, individuals associated with these projects were unsure how much economic activity could be attributed to the presence of bus rapid transit compared with other factors or circumstances. In addition, the project sponsors and experts we spoke with told us that transit-supportive policies and development incentives can play a crucial role in helping to attract and spur economic development associated with bus rapid transit. In September 2009, we reported that the characteristics of transit-oriented development around stations can increase nearby land and housing values, but we also found that determining such development's effects on the availability of affordable housing is complicated by a lack of direct research and data. Economic conditions over the life of the project. As we reported in March 2012, the closer the economy is to full employment, the smaller the net effect a project will have on total economic activity. The speed with which the nation and California recover from the 2007 to 2009 recession, which cannot be known in advance, will affect the net employment from any new infrastructure projects. If the economy achieves full employment, such projects would affect the composition of employment but not its level or rate of growth. The high-speed rail project will be constructed and operated over a period of many decades and likely over many economic cycles. The Authority's April 2012 BCA used a 67-year period (from 2013 to 2080) to estimate the potential economic benefits and costs of the project, and, for purposes of analyzing potential operating and maintenance costs, the April 2012 revised business plan assumed a 38-year operating period (2022 to 2060). Over such an extended period, economic conditions can be expected to change, as will potential economic impacts. The Authority's April 2012 EIA recognizes that the project's economic impacts will be affected by California's economy and unemployment rates. According to the EIA, as of February 2012, California's 10.9 percent unemployment rate was the nation's third highest. The EIA goes on to estimate that the high-speed rail project has the potential to create about 1 million direct and indirect job-years through construction of the Phase 1 blended system, based on the assumption that 20,000 job-years would be created for each $1 billion in capital investment. The accuracy of this estimate will depend on the economic conditions and unemployment rates at the time the jobs are created. The EIA acknowledged this uncertainty when it stated that the multipliers used to estimate indirect and induced jobs are snapshots in time of an economy and represent only current or recent economic relationships and technologies; they do not capture structural changes in the economy, new technologies, or changes in wages that occurred since the multiplier data were produced or that might occur in the future.
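The job-years figure can be cross-checked against the stated multiplier. In the sketch below, the implied capital base is an inference from the two published numbers, not a figure reported in the EIA.

```python
# Back-of-the-envelope check of the EIA's job-years estimate.
job_years_per_billion = 20_000  # stated multiplier (job-years per $1B)
estimated_job_years = 1.0e6     # "about 1 million" per the EIA

implied_capital = estimated_job_years / job_years_per_billion
print(f"implied qualifying capital investment: ${implied_capital:.0f} billion")
# -> $50 billion, below the $68.4 billion year-of-expenditure estimate,
#    suggesting the multiplier was applied to a smaller (for example,
#    constant-dollar) capital base.
```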
The Authority’s April 2012 EIA discusses the potential for wider economic impacts from high-speed rail such as the benefits of bringing California’s economic activities and markets closer together by reducing travel times. The EIA states, by improving transportation connectivity and reducing congestion, the high-speed rail system could make California’s economy more efficient, productive, and competitive by such things as bringing businesses closer to labor and other markets and providing workers with greater access to jobs. However, the best practices report prepared for the DOT OIG found that additional data on things like the relationship between economic density and productivity, labor supply elasticity, and price-cost margins were needed to assess wider economic impacts of high speed rail projects. The study went on to note that no research was currently available regarding wider economic impacts of U.S. high- speed rail projects since such projects do not yet exist. The Authority’s April 2012 EIA recognized this difficulty and states that the extent to which the high-speed rail project will affect the economic landscape of California is not well understood, though transportation infrastructure investments have historically created fundamental shifts in the spatial relationship between places. These limitations do not imply deficiencies in Authority or DOT performance but rather the inherent analytical complexities of large infrastructure investments. Uncertainty regarding local or regional impacts. The April 2012 EIA discusses the potential for economic impacts along the high-speed rail line, including direct and indirect employment opportunities, increased efficiencies and productivity from bringing labor and other markets closer together, and transit-oriented development around stations. The specific impacts to regions or localities will depend on a number of factors, including project-related factors and factors associated with local policies and decisions. Among the project-related factors are both the rate of project spending over time as well as where project funds are spent. The high-speed rail project is expected to be constructed over a long period of time and in phases when funding becomes available. The rate of spending and its timing will influence when and to what extent regions and localities may experience economic impacts associated with the project. Which specific regions or localities may experience economic impacts will be influenced by such project decisions as route alignments and station locations. Local policies and decisions will also affect regional and local economic impacts. Studies conducted for the Authority by the University of California at Berkeley suggested there are opportunities for economic development from the high-speed rail system in a variety of locations, including Fresno and Bakersfield in California’s Central Valley. However, the studies cautioned that the extent of economic development will depend on cities establishing a framework of planning and development policies that encourage development. Some cities have begun taking actions to promote economic development related to the high-speed rail system. For example, in June 2012 Fresno issued a solicitation for consultants to prepare a master plan for its planned downtown rail station that would enable the city to maximize local economic benefits from the high-speed rail system. Other cities have not yet acted. 
The Authority’s April 2012 EIA found that of the 13 potential stations on the Phase I blended corridor, 7 (53.8 percent) did not have station-specific development plans. We also found some limitations in the specific economic analyses prepared by the Authority. In particular, the April 2012 BCA has shortcomings that could limit its usefulness to decision makers. Identification of negative impacts of the project and their effects on the BCA analysis. The BCA lacked detail regarding the handling of negative impacts associated with the high-speed rail project. DOT guidance on benefit-cost analysis suggests that negative (or adverse) impacts, such as non-user (or highway) delays associated with rail construction, be included in BCA analyses to facilitate consistent project comparisons. The guidance recommends that negative impacts, such as highway delays associated with rail construction, be shown as a negative benefit and not included in project investment costs to better facilitate comparisons between projects. The April 2012 BCA recognized that during the period of project construction there would be roadway delays in urban areas that would offset some travel time savings. However, the BCA excluded such impacts from the analysis since the impacts were expected to be (1) localized, (2) minimal since the high-speed rail project minimizes urban grade crossings, and (3) negligible in proportion to overall travel time savings once the project is complete. Aside from roadway delays, the Authority acknowledged there were additional categories of economic impacts that may be negative. For example, the April 2012 revised business plan discusses how the high-speed rail system could limit access to parts of farmland in the Central Valley potentially reducing the output of affected farmlands. The BCA contained little discussion of such impacts and states that the BCA did not incorporate or monetize land use and land value impacts the high-speed rail project may cause (positive or negative). The Authority said negative impacts were assumed to be part of the mitigation measures that would be conducted as part of the environmental review process and right-of-way acquisition and that the costs of these measures were included in the cost side of the benefit-cost calculation. According to the Authority, including negative impacts as a negative benefit would lead to double counting them as both a negative benefit as well as a cost in the benefit-cost calculation. We agree that negative impacts of the high-speed rail project should not be double counted, but the BCA should include discussion of potential negative impacts and how they are treated in the analysis. Such information would better inform decision makers about the existence of negative impacts and their potential effect on project benefits or costs. Identification of the risks and uncertainties associated with the BCA analysis. The BCA did not discuss the potential risks and uncertainties associated with either the benefits or costs used in the analysis. Forecasts are inherently uncertain, including those for ridership and economic projections. Recognition and analysis of risks is an important part of project evaluation. As we reported in February 2011, Executive Order 12893 and Office of Management and Budget (OMB) Circulars Nos. A-94 and A-4 indicate that benefit and cost information shall be used in agency decision making and that the level of uncertainty in estimates of benefits and costs shall be disclosed. 
In particular, Executive Order 12893 requires that uncertainties about the amount and timing of important benefits and costs associated with an infrastructure investment be recognized and addressed through appropriate quantitative and qualitative assessments. Similarly, DOT's TIGER guidance, which the Authority used to prepare its BCA, requires applicants to assess the reliability of any forecasts used to generate benefit estimates but does not specifically require a discussion of risks and uncertainties in a BCA. We reported in February 2011 that the majority of the applications to the TIGER and HSIPR programs we reviewed did not provide information related to uncertainties in projections, data limitations, or the assumptions underlying their models (GAO-11-290). Even when such information was provided, we found that it was not always comprehensive. We recommended that DOT require, among other things, that grant applicants clearly communicate the level of uncertainty in estimates of project benefits and costs. The Authority did not conduct a risk analysis beyond examining the potential effects of high and low cost scenarios. While the Authority may have used credible sources for variables in its analysis (e.g., fuel prices), this does not eliminate all forecasting risks, and those risks should be identified. In addition, as we have noted, decisions on route alignments and other aspects of the project have yet to be made, decisions that could add to project risk. Although the Authority comprehensively identified the project's potential economic impacts, additional state-level analysis is needed of how high-speed rail will affect other transportation modes and their ability to meet future travel demand. This includes the potential cost of additional improvements that may be needed or, conversely, planned projects that may no longer be needed. An important aspect of non-user impacts is a project's potential effect on other transportation modes, including highways, aviation, and local transit systems. This is an important issue because the Authority has estimated that, as a result of population growth and other factors, overall interregional trips in California will increase from about 500 million in 2000 to about 900 million in 2030. In addition, under the blended approach adopted by the Authority, the success of the high-speed rail system will depend in part on local transit improvements. As part of the planning process, the Authority considered high-speed rail's impact on the capacity of other transport modes. For example, in April 2012 the Authority issued an analysis of the highway and airport improvements that would be needed to provide capacity equivalent to that of the high-speed rail system envisioned in the April 2012 revised business plan. The analysis found the total cost (in 2011 dollars) of equivalent capacity investment in highways and airports would range between $123 billion and $138 billion to build up to 4,600 highway lane-miles, 115 airport gates, and 4 airport runways. However, the analysis did not address the additional highway or other transportation improvements that may be required even with construction of the high-speed rail system; it identified only the improvements needed to match the system's capacity, not those needed to meet future intercity travel demand.
Identifying such improvements was not the Authority's responsibility, since its task is to develop a high-speed rail system. Rather, this task would fall to the state as part of its overall planning responsibilities under federal transportation-planning requirements. Constructing a high-speed rail system is not expected to meet all of California's future intercity travel demand. Among other things, development of the Phase 1 blended approach will affect the need for additional improvements to local transportation systems to support high-speed rail. For example, officials with the Orange County Transportation Authority told us they have bus projects and a street car project in the planning phases that will be linked to both local commuter rail and the high-speed rail system. Similarly, officials from the Southern California Association of Governments told us the cost of integrating high-speed rail with local transit was still being developed and that the cost of an initial list of prioritized projects to facilitate this integration exceeded $3 billion. Of this $3 billion in projects, about $1 billion were categorized as high priority. The officials said that although they had obtained agreement from the Authority to help fund some of the high-priority projects, and some funding is expected to come from Proposition 1A, a funding source for the list of potential projects had not been determined. Even though high-speed rail is not expected to meet all of California's future intercity travel demand, statewide transportation planning has not yet fully assessed the impact of the high-speed rail system in meeting this demand. In November 2011, the California Transportation Commission (CTC) issued a Statewide Transportation System Needs Assessment that identified the preservation, management, and expansion projects required over the 2011 to 2020 period. The assessment identified a total cost for the projects of about $540 billion and a nearly $300 billion funding gap in meeting the project needs identified. According to CTC officials, the needs assessment included information about high-speed rail that was readily available from Authority documents. However, high-speed rail was not assessed in terms of project needs, costs, or the funding gap because, according to CTC officials, the Regional Transportation Plans that formed the basis of the assessment did not include high-speed rail in identifying project needs and costs. Similarly, California Department of Transportation officials told us current highway transportation planning has not examined this issue, and the department's long-range transportation plans have not included consideration of high-speed rail. According to the officials, the department did not see an immediate need to do an assessment since the high-speed rail system is not expected to be operational for another 10 years or more. The officials agreed that the high-speed rail system will affect highways and that its impact will need to be considered in future transportation plans. As currently conceived, California's high-speed rail project is expected to be among the most expensive infrastructure projects ever undertaken in this country. Therefore, concerns about potential cost escalations and optimistic ridership forecasts, as well as the potential burden these could place on public budgets, are well placed.
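One way to make such concerns analytically tractable is the kind of program-level risk and uncertainty analysis discussed earlier in this report. The Python sketch below is a minimal illustration under stated assumptions: the triangular distributions, the 3 percent discount rate, and the simple spending profile are hypothetical placeholders, not values drawn from the Authority's BCA or business plan (only the 67-year horizon mirrors the BCA's evaluation period):

    import random

    # Minimal Monte Carlo sketch of benefit-cost uncertainty. All distributions,
    # the discount rate, and the spending profile are hypothetical assumptions.
    random.seed(1)

    def npv(cash_flows, rate=0.03):
        """Net present value of a list of annual net cash flows."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    def one_draw(years=67):
        capital = random.triangular(55e9, 90e9, 68e9)        # total cost draw
        annual_benefit = random.triangular(0.5e9, 4e9, 2e9)  # net annual benefit
        flows = [-capital / 10] * 10 + [annual_benefit] * (years - 10)
        return npv(flows)

    draws = sorted(one_draw() for _ in range(10_000))
    print("median NPV:", draws[len(draws) // 2])
    print("10th to 90th percentile:", draws[1_000], draws[9_000])
    # The output is a range and a confidence statement rather than a single
    # point estimate, which is what risk-aware reviews ask decision makers to see.

A fuller analysis would correlate cost and ridership risks and vary the discount rate, but even this stripped-down version shows how the uncertainty in a long-lived estimate can be disclosed quantitatively.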
Cost and revenue estimates for large projects are, by their nature, imprecise; these estimates endeavor to predict conditions many years into the future within the confines of what is known today. According to our past work reviewing high-speed rail projects in other countries, cost and ridership estimates tend to be overly optimistic. However, experts agree that taking steps to anticipate project risks and improve the credibility of such estimates will lower the risk of cost overruns and missed revenue forecasts. Improving the reliability of cost and revenue forecasts is critical to providing project sponsors, FRA, the Congress, and, ultimately, the public with greater confidence that this project can be viable. This confidence is of particular importance as the Authority seeks significant and sustained funding from federal, state, and private sources. We found that the Authority did not fully employ best practices for producing reliable cost estimates as expressed in GAO's Cost Guide; these are recommended practices but not required. The cost estimates can be improved as the project progresses from design to construction and, ultimately, to operation. The Authority was not required to follow the Cost Guide; instead, it was required to follow FRA's guidance, which we found to be limited. That guidance identified the cost categories that applicants should include in their cost analyses but did not specify how cost estimates should be generated. The Authority told us that it looked to FTA's cost-estimating guidance to help inform the Authority's cost-estimating methodology. The Authority can be commended for supplementing its analyses using FTA's guidance, but this does not necessarily ensure a fully reliable cost estimate. Our past work, as well as that of the DOT OIG, has shown that FRA has yet to develop sufficient program guidance for project evaluation and oversight under its grant-making responsibilities for the HSIPR program. Developing guidance for HSIPR applicants and grantees that incorporates best practices from the Cost Guide would allow cost estimators to improve the reliability of cost estimates for expensive projects like high-speed rail. Such guidance would help ensure that project costs reflect the four characteristics required for developing reliable cost estimates and would minimize the risk of cost overruns, missed deadlines, and unmet performance targets. The Authority is in the process of updating its ridership and revenue model in response to recommendations provided by experts and peer review groups, such as the Ridership and Revenue Peer Review Panel. These steps have the potential to improve the Authority's ridership forecasts, and we encourage continued refinement as more information becomes available, as well as continued review by peer review groups. Improved forecasts will be particularly important as the Authority seeks to secure private investment in the project. The potential project revenues—which are primarily dependent on ridership—will help determine how much the Authority may be able to obtain from private sources. As with the cost estimates, FRA has developed minimal guidance for applicants on developing reliable ridership and revenue forecasts. We, along with the DOT OIG, have previously made recommendations that FRA improve this guidance to ensure the reliability of ridership and revenue forecasts that are used to determine the viability of high-speed rail projects.
According to FRA officials, the agency is currently in the process of implementing these recommendations. Since we have already made recommendations to FRA on this issue in our prior work and FRA is taking action on them, we are not making additional recommendations at this time related to improving FRA's ridership and revenue forecasting guidance. We recommend that the Secretary of Transportation direct the Administrator of FRA to improve its guidance for high-speed rail project sponsors to better ensure that cost estimates submitted by applicants seeking federal funding are accurate, comprehensive, well documented, and credible according to the best practices detailed in GAO's Cost Guide. We provided a draft of this report to DOT and the Authority for review and comment. DOT neither agreed nor disagreed with our recommendation. In an e-mailed response, DOT said it was pleased that the Authority met many of the criteria in the Cost Guide for producing accurate, comprehensive, well-documented, and credible cost estimates; that ridership and revenue forecasts were reasonable; and that the Authority did a comprehensive job in identifying potential economic impacts. However, DOT noted (1) that the currently funded project has sound cost estimates, while future, currently unfunded phases will continue to be refined as the project progresses and data improve; (2) that FRA's cost-estimating guidance was the best available at the time; and (3) that GAO's Cost Guide focuses on federally managed acquisitions. DOT's response noted that the project is a multi-decade effort consisting of many segments and phases, each in a different stage of development. DOT's response also noted that the Cost Guide was issued in March 2009, one month after passage of the Recovery Act and a few months before deadlines for the HSIPR guidance, and that it was therefore not feasible for FRA to incorporate Cost Guide best practices into that guidance. Finally, DOT said applying the Cost Guide principles to future FRA capital cost guidance, while feasible, would require analysis and adaptation to accommodate unique aspects of long-term, grantee-managed transportation projects. DOT noted that the Cost Guide is focused primarily on federally managed acquisitions and programs, not infrastructure projects that non-federal parties will develop and build. DOT also provided technical comments that we incorporated as appropriate. We recognize DOT's concerns; however, our charge was to assess the reliability of the cost estimates, not just whether they complied with FRA's guidance, which we found to fall short of best practices. While the Cost Guide was released in 2009, it is a culmination of cost-estimating best practices that had previously been published and had been available to federal agencies for many years. Therefore, these practices could have been considered when preparing HSIPR program guidance. Finally, the best practices contained in the guide are applicable to developing cost estimates for a wide variety of programs and projects, whether federally managed or not. The Authority provided a letter summarizing its comments about the report (see app. IV).
In general, the Authority believes that the report highlights its efforts to produce cost estimates that reflect the scope of the project, that methods and models used to develop ridership and revenue forecasts adhered to applicable best practices, and that comprehensively identifying potential economic impacts demonstrates a strong economic case for the project. However, the Authority noted that different components of its program (such as implementation phases and construction packages) are at different stages of development and that it would not be practicable to apply the full complement of tools in the Cost Guide at the program level at this time. This is because the environmental review is under way on a number of sections and alignments and other choices are still to be made. The Authority also stated it plans to improve its cost estimates and ridership forecasts. For example, the Authority stated that the updated ridership model being developed for the 2014 business plan will incorporate many of the changes we suggested and that the Authority will improve the quantification of project risks. We commend the Authority for planning to improve its cost estimates and forecasts. Regarding not applying the Cost Guide at the program level, we note that a program does not need to be in an advanced stage of planning in order to complete a sensitivity or cost risk and uncertainty analysis. In fact, such analyses are most valuable when performed early in a program’s life cycle. Single point estimates are more uncertain at the beginning of a program because less is known about its detailed requirements and the opportunity for change is greater. For example, undefined or unknown technical information, uncertain economic conditions, and political issues are often encountered during a program’s acquisition. For management to make good decisions, the program estimate must reflect the degree of uncertainty, so that a level of confidence can be given about the estimate. Therefore, it is important to conduct a risk and uncertainty analysis at all stages of a project so cost estimates reflect the risk and uncertainty that exist. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Transportation, the Administrator of FRA, and the Director of OMB. The report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This report assesses (1) the reliability of the California High-Speed Rail Authority’s (Authority) estimates of the project’s costs; (2) the reasonableness of the Authority’s passenger rail ridership and revenue forecasts; (3) the risks to the Authority’s plan to fund the project; and (4) the comprehensiveness with which the Authority identified potential economic impacts of the project. Our analysis focused on the Authority’s cost estimates, ridership and revenue forecasts, and economic estimates presented in the April 2012 revised business plan. 
To address these objectives, we reviewed numerous documents, including Federal Railroad Administration (FRA) guidance, Department of Transportation Office of Inspector General (DOT OIG) reports, prior GAO reports, and pertinent legislation. In addition, we obtained documents from and conducted interviews with Authority officials to obtain information about the Authority's process for developing its various cost estimates, ridership and revenue forecasts, and economic impact estimates. Specifically, we reviewed the November 2011 draft business plan and the April 2012 revised business plan, along with the documentation used to develop the analysis presented in those plans. We conducted interviews with the Authority's contractors—Parsons Brinckerhoff and Cambridge Systematics—to obtain additional information about the Authority's processes for developing these estimates and to clarify information in their written documentation. In addition, we conducted interviews with officials from various federal and state agencies, peer review groups, academic experts, advocacy groups, and transit and local government groups to obtain information on, among other things, their role with the California high-speed rail project and their views on the Authority's cost estimates, financing plans, ridership and revenue forecasts, and potential economic impacts. (See table 6 for a list of organizations and individuals we interviewed for this study.) To assess the reliability of the project cost estimates, we analyzed the Authority's cost-estimating approach against best practices in the 2009 GAO Cost Estimating and Assessment Guide (Cost Guide). GAO designed the Cost Guide to be used by federal agencies to assist them in developing reliable cost estimates and as an evaluation tool for existing cost estimates. To develop the Cost Guide, GAO cost experts assessed measures applied by cost-estimating organizations throughout the federal government and industry and considered best practices for the development of reliable cost estimates. We analyzed the cost-estimating practices used by the Authority against these best practices. For our reporting needs, we collapsed these best practices into four general categories representing practices that help ensure that a cost estimate is (1) accurate, (2) well documented, (3) comprehensive, and (4) credible. After a review of all source data, including but not limited to electronic cost models for both the capital investment and operating and maintenance phases, all supporting documentation, personal interviews, and independent research, we assessed the extent to which the Authority met these best practices on a five-point scale:
Not Met—Authority provided no evidence that satisfies any of the criterion.
Minimally Met—Authority provided evidence that satisfies a small portion of the criterion.
Partially Met—Authority provided evidence that satisfies about half of the criterion.
Substantially Met—Authority provided evidence that satisfies a large portion of the criterion.
Fully Met—Authority provided complete evidence that satisfies the entire criterion.
We determined the overall assessment rating by assigning each individual rating a number: Not Met = 1; Minimally Met = 2; Partially Met = 3; Substantially Met = 4; and Fully Met = 5. For the purposes of this assessment, we also included a Not Applicable (N/A) assessment category. We then took the average of the individual assessment ratings to determine the overall rating for each of the four characteristics.
The resulting average determines the Overall Assessment as follows: Not Met = 0 to 1.4; Minimally Met = 1.5 to 2.4; Partially Met = 2.5 to 3.4; Substantially Met = 3.5 to 4.4; and Fully Met = 4.5 to 5.0 (the short sketch later in this appendix illustrates the computation). To assess the reasonableness of the Authority's ridership and revenue forecasts, we analyzed the extent to which the Authority's methods for developing the ridership model and the resulting ridership and revenue forecasts adhered to federal guidance and generally accepted travel demand modeling practices for high-speed rail projects. Unlike GAO's cost-estimating criteria discussed earlier, there is no single industry standard for developing or evaluating intercity passenger high-speed rail ridership forecasts. As such, for the purposes of our assessment, we reviewed a variety of sources that identify generally accepted travel demand modeling practices and developed criteria based on these practices to assess the reasonableness of the approach used to create the ridership and revenue models for the California high-speed rail project. In developing our criteria, we relied primarily on a 2011 report on best practices related to developing high-speed rail ridership and revenue forecasts, prepared for the DOT OIG by the firm Steer Davies Gleave. The report provides a description of current standard practices in high-speed rail ridership and revenue forecasting, the key steps typically involved in completing these forecasts, and a description of the range of data and methods used in the forecasting process. The intent of this guidance is to provide information that will assist reviewers in understanding and evaluating forecasting studies. In addition, we examined other literature on developing rail ridership and revenue forecasts to corroborate information in the Steer Davies Gleave report. Specifically, we reviewed, among other sources, forecasting guidance from FRA and the Federal Highway Administration (FHWA), prior GAO reports, and other ridership and revenue guidance in academic research. (See table 7 for a list of sources used to develop criteria.) From these sources we identified recommended practices related to seven key steps: (1) developing trip tables, (2) determining and applying service characteristics, (3) developing mode choice models, (4) estimating induced travel, (5) estimating expected fare revenue, (6) conducting sensitivity analysis, and (7) conducting validation testing. We compared generally accepted practices for each of these steps to the Authority's process for developing the ridership and revenue forecast as outlined in the April 2012 revised business plan and in supporting technical documentation. We could not evaluate each of the many detailed design decisions, assumptions, and model inputs used by the Authority; rather, we focused on the seven key steps and whether they were implemented in accordance with generally accepted practices. We reviewed documents from and conducted interviews with Authority officials and their contractor—Cambridge Systematics—to obtain information about the Authority's process for developing the ridership and revenue forecasts. Specifically, we examined the Authority's process for developing the models used to produce the various forecasts, the assumptions and data sources used to develop the models, the survey instruments used to collect data, and the Authority's process for model estimation, calibration, and validation.
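To illustrate the scoring scheme described earlier in this appendix, the short Python sketch below maps individual ratings to their numeric values, averages them while excluding Not Applicable entries, and assigns the overall band. The scale and bands are as stated in the text; the example ratings passed in at the end are hypothetical:

    # Sketch of the overall-assessment computation described in this appendix.
    SCORES = {"Not Met": 1, "Minimally Met": 2, "Partially Met": 3,
              "Substantially Met": 4, "Fully Met": 5}
    BANDS = [(1.4, "Not Met"), (2.4, "Minimally Met"), (3.4, "Partially Met"),
             (4.4, "Substantially Met"), (5.0, "Fully Met")]

    def overall_assessment(ratings):
        """Average the numeric ratings (N/A excluded) and map to the overall band."""
        numbers = [SCORES[r] for r in ratings if r != "N/A"]
        average = sum(numbers) / len(numbers)
        return next(label for cutoff, label in BANDS if average <= cutoff)

    # A hypothetical characteristic built from four best-practice ratings:
    print(overall_assessment(["Partially Met", "Substantially Met",
                              "N/A", "Partially Met"]))  # -> Partially Met (3.33)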
We focused our analysis on identifying key steps in developing ridership forecast models for high-speed rail projects, elements affecting the validity and reliability of models, common limitations of models and pitfalls, and recommended approaches for external review. In addition, we interviewed organizations that had conducted reviews of the Authority's ridership and revenue forecasts, such as academic experts from the University of California Berkeley's Institute of Transportation Studies (ITS) and members of the Ridership and Revenue Peer Review Panel and the California High-Speed Rail Peer Review Group. From these interviews, we obtained additional information about (1) generally accepted methods used for projecting ridership and revenue for high-speed rail projects and the elements of these approaches that have the greatest potential risk; (2) general assumptions underlying demand forecast models, elements affecting the validity and reliability of models, and existing data limitations; and (3) the extent to which the Authority's approach follows generally accepted practices for developing valid and reliable ridership and revenue estimates. To assess the Authority's financing plan, we reviewed the plan, conducted interviews with Authority officials and other state and federal officials, and reviewed literature and other information on financing for high-speed rail projects in other countries as well as large transportation projects in the United States. To assess how well the Authority identified economic impacts associated with the high-speed rail project, we reviewed the April 2012 revised business plan as well as the April 2012 benefit-cost analysis and the April 2012 economic impact analysis. In addition, to establish criteria for the various components of economic impact analysis, including user and non-user impacts, we reviewed pertinent legislation, such as the Passenger Rail Investment and Improvement Act of 2008 and the National Environmental Policy Act, and the NOFAs associated with the HSIPR and Transportation Investment Generating Economic Recovery (TIGER) grant programs. FRA officials told us the NOFAs outlined the type of information HSIPR grant applicants were to provide regarding project benefits and costs and how FRA would use this information in reviewing grant applications. We also reviewed reports from the DOT OIG regarding HSIPR project viability assessments. In particular, we reviewed the June 2011 report on best practices related to public benefits assessments of high-speed rail projects, prepared for the DOT OIG by the firm Steer Davies Gleave. The Steer Davies Gleave report identified important components of user and non-user impacts associated with public benefits assessments. To gain a better understanding of economic impact analysis, we reviewed the Economic Impact Analysis Primer prepared by FHWA's Office of Asset Management. The primer identified the basic process of identifying and analyzing economic impacts, including benefit-cost analyses, and identified the similarities and differences between benefit-cost analyses and economic impact analyses. We also interviewed officials from FRA, the Authority, FHWA, the Federal Transit Administration, DOT OIG, and Steer Davies Gleave about economic impact issues.
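Because several of the review steps above, notably developing mode choice models and conducting sensitivity analysis, turn on discrete choice modeling, a minimal multinomial logit sketch may help orient readers. It is a generic textbook formulation, not the Authority's or Cambridge Systematics' model, and the coefficients and level-of-service values are hypothetical placeholders:

    import math

    # Generic multinomial logit mode choice sketch (not the Authority's model).
    # Utility = BETA_TIME * minutes + BETA_COST * dollars; all values hypothetical.
    BETA_TIME, BETA_COST = -0.04, -0.01

    level_of_service = {  # mode: (door-to-door minutes, out-of-pocket dollars)
        "auto": (380, 90.0),
        "air": (200, 160.0),
        "high_speed_rail": (220, 105.0),
    }

    def choice_probabilities(los):
        """Predicted share choosing each mode under a multinomial logit."""
        utilities = {m: BETA_TIME * t + BETA_COST * c for m, (t, c) in los.items()}
        denom = sum(math.exp(u) for u in utilities.values())
        return {m: math.exp(u) / denom for m, u in utilities.items()}

    for mode, share in choice_probabilities(level_of_service).items():
        print(f"{mode}: {share:.2f}")
    # Sensitivity testing amounts to re-running this with varied times or fares,
    # e.g., a slower rail trip or a lower air fare, and comparing the shares.

Nested logit models, which are also discussed in this appendix, extend this structure by grouping similar alternatives so that closely competing modes draw riders from each other more readily than from dissimilar modes.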
To assess issues related to high-speed rail and future travel demand, we reviewed the Authority's revised business plan and April 2012 equivalent capacity study as well as the November 2011 Statewide Transportation System Needs Assessment prepared by the California Transportation Commission (CTC). We also discussed these issues with officials from CTC and the California Department of Transportation, as well as officials with local transportation agencies in California, regarding potential improvement projects they had planned that were associated with the high-speed rail project. The proposed high-speed rail project is a very large public works project, with costs expected to be spread over more than a decade. Depending on how cost figures are presented, different impressions of the magnitude and funding requirements of the program can be given. Whether or not the effects of inflation are included in the estimate is a source of significant differences. Year of expenditure (YOE) dollars include inflation in out-year costs, a convention adopted to facilitate budgeting over time but not necessarily a good representation of the true economic costs of the project. Removing the increase in cost attributable solely to inflation in the price level provides a better picture of the burden on taxpayers and other funders because the tax base, including incomes, property values, and retail sales, would have increased with inflation as well. In the case of the high-speed rail project, the YOE cost total is 25 percent greater than the total when inflation effects are removed. An estimate of the cost in present value terms, which accounts for both inflation and the time value of money (for example, by dividing the cost incurred in year t by (1 + r)^t for a discount rate r), would be smaller still. We conducted this performance audit from February 2012 to March 2013 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Best practice: The cost estimate results are unbiased, not overly conservative or optimistic, and based on an assessment of most likely costs. Individual assessment: Partially Met. While the Authority has attempted to ensure accuracy and eliminate bias in its estimate by conducting sensitivity analysis, parametric checks, and the use of peer review, these have all been applied to subsets of the total program. No risk or sensitivity analysis has been developed at the program level or between the low and high estimates. The alternative high and low estimates do not create a range of estimates but rather point estimates evolving around potential options. In the absence of cost risk and uncertainty analysis, it is not possible to determine whether the estimate is unbiased. Unless the estimate is based on an assessment of the most likely costs and reflects the degree of uncertainty given all of the risks considered, management will not be able to make informed decisions.
Best practice: The estimate has been adjusted properly for inflation. Individual assessment: Substantially Met. Both capital investment and operations and maintenance (O&M) costs are inflated to YOE dollars using sound data and methodologies. Source data used for cost estimating are normalized to appropriate base years, although in some instances the normalizing processes were not clear.
Best practice: The estimate contains few, if any, minor mistakes.
Best practice: The cost estimate is regularly updated to reflect significant changes in the program so that it always reflects current status.
Best practice: Variances between planned and actual costs are documented, explained, and reviewed.
Best practice: The estimate is based on a historical record of cost estimating and actual experiences from other comparable programs. Individual assessment: Substantially Met. The estimate relies on construction cost data from commercial databases, heavily supplemented with local construction bids from analogous construction projects. The Authority collects technical and summary-level cost data on existing and future high-speed trainsets, but there is no documentation that explains how these data were adjusted for use in the cost estimate. The O&M estimate relies on applicable historical data. However, the extent of applicability is unknown because adjustments are not thoroughly documented.
Best practice: The estimating technique for each cost element was used appropriately. Individual assessment: Substantially Met. The estimating techniques are reasonable for those Standard Cost Categories (SCC) elements discretely estimated, where a unit price estimating methodology was employed. The 2012 O&M model is a simplified version of the 2009 model, appropriately suited to the cost model's stated purpose of establishing program viability. However, the simplification results in an unnecessary loss of fidelity in some cost elements.
Best practice: The cost estimate includes all life cycle costs. Individual assessment: Substantially Met. The Authority has included all relevant costs, with the relatively minor exclusion of disposition costs in the capital investment estimate.
Best practice: The cost estimate completely defines the program and is based on a technical baseline description. Individual assessment: Partially Met. The technical baseline description for the capital investment cost estimate resides in multiple documents that collectively comprise the technical baseline of the program. However, there is no distinct technical baseline description for the O&M estimate. Officials stated that later versions of the O&M estimate will align with the Concept of Operations plan, which was approved in February 2012.
Best practice: The cost estimate work breakdown structure (WBS) is product-oriented, traceable to the statement of work/objective, and at an appropriate level of detail to ensure that cost elements are neither omitted nor double-counted. Individual assessment: Partially Met. The program utilized the FRA SCC and associated definitions for the capital investment costs. The cost estimate expands upon this structure to provide detailed identification of infrastructure work but has reduced insight into common support costs. The standardized O&M FRA SCC elements were not used for capturing O&M costs because the O&M estimate was not required to comply with the SCC elements. While the O&M estimate includes common elements for administration and support costs, the O&M WBS is greatly simplified. As a consequence, up to two-thirds of O&M costs are collected in a single cost element.
Best practice: The estimate documents all cost-influencing ground rules and assumptions. Individual assessment: Partially Met. Ground rules and assumptions are embedded in much of the documentation for both the capital investment and O&M estimates as well as in the cost models, but not all assumptions have supporting rationale or sources. As the design for a specific section advances, risks are quantified and assigned to specific WBS elements. At the program level, contingency factors are used to capture less-defined risks. However, O&M risks are not specifically related to O&M WBS elements, and the impact of budget constraints on specific WBS elements has not been clearly defined.
In addition, the impacts of technology maturity on cost are not entirely defensible. Unless ground rules and assumptions are clearly documented, the cost estimate will not have a basis for areas of potential risk to be resolved.
Best practice: The documentation should capture the source data used, the reliability of the data, and how the data were normalized. Individual assessment: Partially Met. The documentation provides some insight into the development of the cost estimates; however, much of our analysis was based on information derived from interviews and discussions with Authority representatives, not from readily available information in the documentation. The O&M model includes relevant data, but sources and variables can only be described as somewhat documented. For the most part, the documentation relates how inputs are adjusted from past O&M models but fails to account for how earlier values were derived. Without sufficient background knowledge about the source and reliability of the data, the cost estimator cannot know with any confidence whether the data collected can be used directly or need to be modified.
Best practice: The documentation describes in sufficient detail the calculations performed and the estimating methodology used to derive each element's cost. Individual assessment: Substantially Met. The documentation provided varying degrees of insight into the estimating methodology. The majority of costs—that is, infrastructure and site work—are described at a detailed level by unit cost, quantities, labor rates, equipment and material costs, and the like. However, some cost elements had little or no supporting documentation.
Best practice: The documentation describes step-by-step how the estimate was developed so that a cost analyst unfamiliar with the program could understand what was done and replicate it. Individual assessment: Partially Met. Details of the estimating process and methodology were provided for the track structure and track and site work elements of the capital investment model, but supporting data and details of how other elements are estimated were not available. No comprehensive document exists that explains the O&M model element by element. Without good documentation, management and oversight will not be convinced that the estimate is credible. In addition, analysts unfamiliar with the program will not be able to replicate the estimate because they will not understand the logic behind it.
Best practice: The documentation discusses the technical baseline description, and the data in the baseline are consistent with the estimate. Individual assessment: Partially Met. The documentation of the capital investment cost model and the technical baseline are consistent with one another. The primary emphasis and underlying data sources are for the infrastructure and site work, but little definition or supporting data are provided for the remaining cost elements. In addition, the O&M cost estimate is not based on an approved technical baseline document, although officials state that later versions will be aligned to the Concept of Operations plan. Because the technical baseline is intended to serve as the basis for developing a cost estimate, it should be discussed in the cost estimate documentation.
Best practice: The documentation provides evidence that the cost estimate was reviewed and accepted by management. Individual assessment: Partially Met. Documents indicate that the Authority's management team was engaged in reviewing the cost estimates, and there are multiple indications that management reviewed pieces of the cost estimate.
However, many of these reviews appear to have been for subsets of the total program, either by construction package or phase, and focused more on the financing than on the detailed estimating methodology or underlying assumptions. While specific subsets of the estimate appear to have been reviewed by or discussed with management, we found no specific instance where the total program estimate, including supporting source data and estimating methodologies, was provided to senior management for review, discussion, and subsequent approval. Because a cost estimate should form the basis for establishing the budget, it is imperative that management understand how the estimate was developed, including the risks associated with source data and estimating methodologies.
Best practice: The cost estimate includes a sensitivity analysis that identifies a range of possible costs based on varying major assumptions, parameters, and data inputs. Individual assessment: Partially Met. A formal sensitivity analysis has been performed for Contract Package 1, the design and construction of the first 26 to 33 miles of trackway infrastructure between the counties of Madera and Fresno. In addition, the Authority conducted limited sensitivity analysis on summary-level variables in the O&M model. However, sensitivity analysis of the entire program estimate has not been done. The capital investment estimate includes low and high cost alternative alignments, and the O&M estimate provides three alternative scenarios driven by ridership options. However, without a complete sensitivity analysis that reveals how the cost estimate is affected by a change in a single assumption, the cost estimator will not fully understand which variable most affects the cost estimate.
Best practice: A risk and uncertainty analysis was conducted that quantified the imperfectly understood risks and identified the effects of changing key cost driver assumptions and factors. Individual assessment: Partially Met. The Authority utilized FRA guidance in developing its estimates, guidance that does not require risk or uncertainty analysis at the program level this early in the design stage. Authority officials stated that more advanced engineering designs are being developed to support the process and that risk and uncertainty analysis has been undertaken on Contract Package 1. Authority officials acknowledge the existence of risk and have tried to accommodate expected risk through the application of contingency factors. While the capital investment and O&M models include a contingency element, the factors used do not appear to be based on historical data or analogous sources. Lacking risk and uncertainty analysis, management cannot determine a defensible level of contingency reserves necessary to cover increased costs resulting from unexpected design complexity, incomplete requirements, technology uncertainty, and other uncertainties.
Best practice: Major cost elements were cross-checked to see whether results were similar. Individual assessment: Partially Met. The Authority recognizes the importance of cross-checks and identified a series of cross-checks to verify and validate the results of the data. Authority officials stated there are several stages of cross-checks and quality control, which are described in their cost-estimating procedures. Yet little documentation has been provided that would allow us to verify that cross-checks and alternative methodologies have been developed.
For example, estimators have crosschecked major cost factors in the O&M model with cost data from foreign systems, but there is no evidence that costs have been estimated using different methodologies. The main purpose of cross-checking is to determine whether alternative methods produce similar results. If so, then confidence in the estimate increases, leading to greater credibility. The Authority has contracted with the International Union of Railways (UIC) for a study that intends to verify and validate the capital investment cost model. Authority officials stated that the UIC panel of experts will also provide a set of international cost comparisons for infrastructure maintenance and rolling stock maintenance. An independent cost estimate was conducted by a group outside the acquiring organization to determine whether other estimating methods produce similar results. Minimally Met An independent cost estimate (ICE) was performed on the Merced-Fresno and Fresno-Bakersfield segments for infrastructure costs. However, while the segments cover 35 percent of the planned system rail length, they make up less than 10 percent of the overall estimated program cost. An ICE should be performed on the entire program, including O&M costs. ICEs can provide decision makers with additional insights into a program’s potential costs because they frequently use different methods and are less burdened with organizational bias. The Authority collected data from a variety of sources including, among others, socioeconomic data from local agencies, U.S. Census Bureau, and the California Department of Finance; travel data from various travel surveys; and highway, air, conventional rail, and urban transit network data from local agencies. The high-speed ridership and revenue model for inter-regional travel was developed utilizing surveys and other statewide travel information. Intra-regional travel models from Metropolitan Planning Organizations (MPOs) in the San Francisco and Los Angeles regions were adapted for use in the high-speed rail ridership and revenue model from the models maintained by the MPOs for those regions. A factoring process was used to estimate ridership in the San Diego region. Base year trip tables were developed from existing California regional models used by local authorities including the Metropolitan Transportation Commission (MTC) and the Southern California Association of Governments as well as interregional trip tables developed from travel survey data. Forecast year trip tables were developed by projecting base year forecast data to forecast year 2030, and then the models were run on 2030 projections. Trip were segmented by long versus short trips (over and under 100 miles), and trip purposes (commute, business, recreation, other). development: The base and forecast year input trip tables are the basis for a study’s ridership estimates and revenue forecasts. Any overestimate or underestimate of the trip tables will translate to high or low forecasts of ridership. Base trip tables generally summarize the current total number of trips by mode for each city pair along the route and are generally prepared by using a variety of sources of data on actual trip making patterns. Growth factors—which determine the rate of increase over time—can then be applied to the base trip tables to develop forecast year trip tables, which contain estimates of future travel on various modes in the absence of a proposed high-speed rail alternative. 
Forecast year trip tables may also be prepared by estimating future-year trips directly. Trip segmentation: Trip tables are generally segmented by mode of travel, trip purpose, and other traveler characteristics. Criteria frequently used in defining market segments include trip purpose, trip length, traveler income, travel party size, and others. Authority’s methods for developing ridership and revenue forecasts The Authority developed a detailed network representation for the entire state to forecast travel between regions. Data were obtained from the existing statewide highway network and details were added using data from local regional models, from the MTC, the Southern California Association of Governments, the San Diego Association of Governments, and data from the Kern County region. LOS characteristics were defined for the four inter-regional travel modes: auto, conventional rail, high-speed rail, and air. LOS characteristics covered three broad categories: costs, times, and reliability, which were summarized in travel skim tables. Several of these characteristics were varied during model application to see how ridership and revenue would be impacted. Characteristics were collected from published or observed data from various sources including, the MTC and the Federal Aviation Administration. The high- speed rail characteristics were based on the initial service plan and fare structure. Network representation can be detailed (i.e., with detailed representations of street and transit networks that include location, alignment, connections, and service characteristics) or can be less explicit and instead focus directly on zone to zone level of service data. A less explicit network representation can be used if the structure of the network is very simple). Preparing skim tables: Skim tables contain data on the time, cost, and other service characteristics of the various modes that are available for a trip. Accurate and realistic representation of the base and forecast year LOS characteristics is of paramount importance for realistic high-speed ridership forecasting. Rail LOS information may be approximately derived from the service plan but may not represent it in complete detail. Authority’s methods for developing ridership and revenue forecasts The Authority developed two choice models: intra- regional urban model (models behavior associated with shorter distance and more frequent trip making) and an inter-regional model (models traveler behavior associated with longer-distance travel). predict the decisions of travelers considering alternative transportation modes. Multinomial logit models and nested logit model are types of choice model that can be used. Diversion choice model: A diversion choice model considers only two modes–the one in use in the base situation and the high-speed rail alternative. Intra-regional (Urban) Models: For both the San Francisco Bay Area and the greater Los Angeles regions, mode choice models were adapted from existing models to include the high-speed rail mode. The updated mode choice models were applied using the MPO trip tables for each region as input. San Diego is the only other region that contains the possibility of intra-regional high-speed rail trips, but the estimate of these riders was very low relative to the other regions. 
Because the level of effort to develop, calibrate, and apply the regional mode choice model was very high, intra-regional ridership for San Diego was developed using a population-based estimate rather than a traditional mode choice model. Inter-regional models: The Authority developed four sets of models which included trip frequency, destination choice, primary mode choice, and access/egress mode choice. The destination choice component predicts the destinations of the trips generated in the trip frequency component based on zonal characteristics and travel impedances. The mode choice components (main mode choice, access mode choice and egress mode choice) predict the modes that the travelers would choose based upon the modal service levels as well as characteristics of the travelers and trips being made. Data were derived from, among other sources, the California Department of Transportation Statewide Model, existing regional mode choice models, and revealed preference and stated preference survey data. The economic and household characteristics were forecast for each zone in the year 2030 based on data and forecasts from state, regional, and local government agencies. The primary main mode choice model relied primarily on stated preference data. Generally accepted travel-demand-modeling practices A high-speed rail project will improve the overall level of service for intercity travel within a given corridor. This improvement will make conditions more favorable for travel. Trips will therefore be taken on high-speed rail that might not otherwise have been made using any of the current modes. The new trips are commonly referred to as induced travel. An upper limit on induced travel of approximately 10 percent of total high-speed rail trips is widely accepted for proposed high-speed rail systems in the U.S. Expected fare revenue is determined by a calculation using the ridership estimates generated by the model and the average fares. The total ridership for the system is generally calculated by adding the diverted trips calculated from the mode choice models and the induced trips to produce the total ridership for the system. All ridership and revenue forecasting studies should incorporate an analysis of the sensitivity of forecast results to key inputs and modeling assumptions including fare, running time, service frequency, station locations, and assumptions about socio-economic and travel growth in forecast years. Sensitivity analysis typically is conducted by varying, more or less systematically, selected forecasting model inputs, parameters, or assumptions (e.g., inflation rate or fuel cost) around their “standard” value, running the model, and examining the variation in outputs. Sensitivity analysis can help determine the reliability associated with the model output forecasts and can help identify the factors that have greatest impact on project ridership and revenue. Model validation is a key component of ridership and revenue forecasting and generally consists of testing the validity of the model using data other than (and usually newer than) the data from which it was estimated, to assess how well the model predicts actual ridership. 
There are two superior (but not often performed) ways of checking model performance: (1) the historical method, in which a prior-year model is used to forecast current travel, which is then compared with actual current travel; and (2) "backcasting," in which a current year model is used to estimate travel for a prior year, which is then compared with actual travel in the prior year. Backcasting is used by 5 percent of all MPOs and 13 percent of large MPOs.

Authority's methods for developing ridership and revenue forecasts: The Authority forecasted 2.05 percent induced travel for the blended Phase 1 low scenario. The Authority calculated fare revenue by multiplying the ridership estimates generated from the ridership model by the average high-speed rail fares forecasted for each region-to-region pair. Several sensitivity tests were done to determine how the model reacted to different sets of assumptions, such as changes to fuel costs, travel times, and fares. In addition, the Authority developed an extreme case scenario to test the sensitivity of the model to a series of downside events, such as increased average rail travel time from Merced to the San Fernando Valley (140 minutes instead of 126 minutes), decreased train frequency (3 trains per hour instead of 4 trains per hour during peak times), lower auto-operating costs, and lower air fares (10 percent below actual 2009 average air fares). The Authority validated the model through tests performed using Amtrak's Acela service in the Northeast Corridor (NEC) as input to the California high-speed rail model and compared the output with 2008 actual ridership and 2030 NEC forecasts. Efforts to validate the model by comparing to the NEC appear reasonable. The NEC is not an ideal test for the model, but it is the only one available in the U.S. Use of foreign systems would raise difficult issues of comparability.

In addition to the individual named above, Paul Aussendorf, Assistant Director; Russell Burnett; Jason Lee; Delwen Jones; Richard Jorgenson; James Manzo (Technomics, Inc.); Maria Mercado; Susan Offutt; Paul Revesz; Max Sawicky; Maria Wallace; and Crystal Wesco made key contributions to this report.

The planned 520-mile California high-speed rail project, which would link San Francisco to Los Angeles, would be designed to operate at speeds up to 220 miles per hour. At an estimated cost of $68.4 billion (in year-of-expenditure dollars), it is expected to be one of the most expensive transportation projects undertaken in the United States. The Authority is responsible for implementing the project, and federal funding is being provided from FRA's High-Speed Intercity Passenger Rail program. GAO reviewed (1) the reliability of project cost estimates, (2) the reasonableness of revenue and passenger rail ridership forecasts, (3) the risks attendant with the project's funding plan, and (4) the comprehensiveness with which the project's economic impacts were identified. GAO obtained documents from and conducted interviews with federal officials and officials from the Authority related to cost, financing, ridership and revenue modeling and estimation, and business plans and analyses related to potential economic impacts. GAO also interviewed state and local officials as well as the project's peer review group members.
The California High-Speed Rail Authority (Authority) met some, but not all, of the best practices in GAO's Cost Estimating and Assessment Guide (Cost Guide) for producing cost estimates that are accurate, comprehensive, well documented, and credible. By not following all best practices, there is increased risk of such things as cost overruns, missed deadlines, and unmet performance targets. The Authority substantially met the criteria for the accurate characteristic because, for example, the cost estimate reflects the current scope of the project. However, the Authority partially met the criteria for the other three characteristics because the operating costs were not sufficiently detailed (comprehensive), the development of some cost elements was not sufficiently explained (well documented), and no systematic assessment of risk was performed (credible). The Federal Railroad Administration (FRA) issued limited guidance for preparing cost estimates, and this guidance did not reflect best practices in the Cost Guide. The Authority plans to improve its cost estimates. GAO found the Authority's ridership and revenue forecasts to be reasonable; however, additional updates are necessary to refine the ridership and revenue model for the 2014 business plan. GAO also found that the travel-demand-modeling process used to generate these forecasts followed generally accepted travel-demand-modeling practices. For example, the Authority revised several assumptions, such as gasoline price forecasts, to reflect changes in current and anticipated future conditions. However, additional updates, such as the development of a new travel survey, will be necessary to further refine these forecasts and improve the model's utility for future decisions. External peer review groups have also recommended additional updates. The project's funding, which relies on both public and private sources, faces uncertainty, especially in a tight federal and state budget environment. Obtaining $38.7 billion in federal funding over the construction period is one of the biggest challenges to completing this project. In the latter stages, the Authority will also rely on $13.1 billion in private-sector financing but will require more reliable operating cost estimates and revenue forecasts to determine whether, or the extent to which, the system will be profitable. The Authority's plan recognizes the uncertainty of the current funding environment and is building the project in phases. The Authority has also identified an alternative funding source; however, that funding source is also uncertain. The Authority did a comprehensive job of identifying the potential economic impacts of the high-speed rail project, including user impacts, such as effects on travel time reliability, and non-user impacts, such as effects on highway congestion. However, the nature of specific economic impacts will depend on a number of factors, including future project decisions. GAO also found limitations in the Authority's benefit-cost analysis of the project that could limit its usefulness to decision makers. Finally, GAO found that construction of the high-speed rail project will not eliminate the need for additional improvements to meet future statewide travel demand, but current statewide transportation assessments and planning have given little consideration to this issue. To produce reliable cost estimates, FRA should improve its guidance so it is in line with the best practices in GAO's Cost Guide.
The Department of Transportation did not agree or disagree with the recommendation but said that, with further analysis, applying the Cost Guide would be feasible. The Authority said it will incorporate many of the report's findings into future cost and ridership estimates.
Federal managers have complained for years about the rigid and elaborate procedures required for federal personnel administration, often expressing the need for more flexibility within a system that has traditionally been based on uniform rules. Reformers have long sought to decentralize the personnel system and simplify the rules, arguing that however well the system may have operated in the past, it is no longer suited to meet the needs of a changing and competitive world. In 1983, for example, NAPA published a report critical of the excessive constraints on federal managers, including constraints on their human resources decisions. As part of the response to these criticisms, OPM decentralized and delegated many personnel decisions to the agencies and has encouraged agencies to use human capital flexibilities to help tailor their personnel approaches to accomplish their unique missions. Our strategic human capital model also advocates that agencies develop a tailored approach to their use of available flexibilities by taking advantage of those flexibilities that are appropriate for their particular organizations and their mission accomplishment. Because of this tailoring, the federal personnel system is becoming more varied, despite its often-cited characterization as a "single employer." The trend toward increased flexibility has manifested itself in a number of ways, including the efforts of some agencies to seek congressional approval to move away from the personnel provisions of Title 5 of the U.S. Code that have traditionally governed much of the federal government's civil service system. As noted by OPM in a 1998 report, federal agencies' status relative to these Title 5 personnel requirements can be better understood by thinking of them on a continuum. On one end of the continuum are federal agencies that generally must follow Title 5 personnel requirements. These agencies do not have the authority, for example, to establish their own pay systems. On the other end of the continuum are federal agencies that have more flexibility in that they are exempt from many Title 5 personnel requirements. For example, Congress provided the Tennessee Valley Authority and the Federal Reserve Board with broad authority to establish their own personnel systems and procedures. The movement in the direction of greater flexibility, in fact, has gained momentum to the extent that about half of federal civilian employees are now exempt from at least some of the personnel-related requirements of Title 5. In addition to receiving congressional authorizations for exemptions from the personnel-related requirements of Title 5, other mechanisms are available to introduce human capital innovations and flexibilities within federal agencies. OPM has the authority to review and make changes to its existing regulations and guidance to provide agencies with additional flexibilities. Additionally, a federal agency can obtain authority from OPM to waive some existing federal human resources laws or regulations through a personnel demonstration project. The goal of these demonstration projects is to encourage experimentation in human resources management by allowing federal agencies to propose, develop, test, and evaluate changes to their own personnel systems. In some cases, Congress has allowed some agencies to adopt alternatives that have been tested and deemed successful.
For example, more flexible pay approaches that were tested within the Department of the Navy’s China Lake (California) demonstration project in the early 1980s were eventually adopted by other federal agencies, such as the Department of Commerce’s National Institute of Standards and Technology. Exemptions from Title 5 personnel requirements within our seven selected agencies help to illustrate the gradations of flexibility. IRS, for example, represents an agency with broad authority related to its human capital management. Efforts to reform IRS led to provisions under the IRS Restructuring and Reform Act of 1998, which gave the Secretary of the Treasury various pay and hiring flexibilities not otherwise available under Title 5, such as the authority to establish new systems for hiring and staffing, compensation, and performance management. State and ITA are examples of organizations in which some employees are not subject to Title 5, while the remainder of the organization is covered. In this case, Foreign Service employees at State and ITA are outside of Title 5. For the remaining four agencies we included in our review, the majority of their employees are covered under the personnel requirements of Title 5, with some limited exemptions. Air Force, for instance, has made use of flexibilities under the demonstration project authority and currently participates in two such demonstration projects, one involving laboratory personnel and another for the civilian acquisition workforce. In addition, several of our selected agencies, such as GSA and VBA, received additional flexibility through legislative authority to offer voluntary separation incentive payments, commonly known as buyouts, to help restructure their workforces. Figure 2 provides background information on the seven agencies along with a summary of some of their related exemptions from Title 5 personnel requirements. Even under current Title 5 personnel provisions and their applicable regulations, efforts to reform and improve the personnel system have provided many human capital flexibilities for agencies to use. Within broad parameters, such as adherence to merit system principles and employee protection from prohibited personnel practices, these flexibilities offer the agencies effective ways to accomplish their missions while maintaining the key values of a centralized system. For example, agencies have many flexibilities available to help them restructure and realign their workforces. Moreover, agencies have numerous compensation flexibilities that authorize them to provide additional direct payments to support their recruitment, relocation, and retention efforts, although some of them may require the approval of OPM or the Office of Management and Budget (OMB). Today, federal agencies are facing many human capital challenges. With the increasing numbers of employees retiring and the numbers of employees who will be eligible to retire in the near future, along with competition from private companies, federal agencies are in a struggle to recruit and retain highly skilled employees. In response to these challenges, agencies need to use the various human capital flexibilities that are available to them in managing their workforces to achieve agency missions and accomplish goals. Our discussions with agency officials and union representatives revealed numerous human capital flexibilities that they deemed effective in managing their workforces. 
These flexibilities encompassed broad areas of personnel-related actions such as recruitment, retention, compensation, position classification, incentive awards and recognition, training and development, performance management and appraisals, realignment and reorganization, and work arrangements and work-life policies. On the basis of these discussions, we identified the flexibilities that were the most frequently cited by agency and union officials as being the most effective for managing their agencies' workforces. These flexibilities include work-life programs, such as alternative work schedules, child care assistance, and transit subsidies; monetary recruitment and retention incentives, including recruitment and relocation bonuses and retention allowances; special hiring authorities, such as student employment and outstanding scholar programs; and incentive awards, which range from performance-based cash awards to time-off awards to symbolic items of nominal value, such as plaques and T-shirts. Table 1 provides a summary of these flexibilities and the cited benefits of implementing them. Agency officials and union representatives cited work-life programs among the most effective flexibilities for recruiting, motivating, and retaining staff. These programs are offered to help employees balance their work and family lives and include alternative work schedules, employee assistance programs, child care centers and assistance, transit subsidies, and telecommuting options. OPM has strongly supported the use of these family-friendly programs, indicating that they can help to attract and retain quality employees, boost morale, and reduce unscheduled leave. Our recent report looking at human capital challenges at the Securities and Exchange Commission revealed how agencies can sometimes overlook the effectiveness of these work-life programs in recruiting, retaining, and motivating staff. The following is additional information about the effectiveness of these work-life flexibilities. Alternative work schedules. Federal agencies generally have the authority to determine the hours of work for their employees to ensure that agencies meet organizational goals. Agencies may establish hours of work and scheduling flexibilities to replace the traditional schedules of 8 hours per day and 40 hours per week, such as full-time and part-time, overtime hours, and flexible work schedules. Scheduling flexibilities, such as alternative work schedules, were among the effective flexibilities most cited by agency managers and supervisors, human resources officials, and union representatives. Although some supervisors told us that such schedules can be a challenge to manage, these supervisors stated that this scheduling flexibility increases employee morale, strongly motivates employees, and allows employees to be more flexible in accomplishing job responsibilities. For example, IRS officials told us that the agency has made use of alternative work schedules since the early 1980s and that this flexibility is attractive to both current and potential employees. Supervisors at the San Francisco Mint said that the use of alternative work schedules reduces the amount of accumulated leave taken because employees can accomplish personal errands and tasks on their days off. According to human resources officials in GSA's San Francisco region, about 1,300 of the region's 1,500 employees make use of alternative work schedules. Employee assistance programs.
Through these programs, agencies can provide a range of free, confidential counseling and referral services to assist employees who may be experiencing personal problems affecting their job performance or personal health. Agency and union officials said that these programs can be valuable in helping employees deal with issues such as work and family pressures. IRS supervisors in Philadelphia told us, for example, that IRS's employee assistance program offers employees and their family members a way to address both work-related and nonwork-related issues and that the employees they had referred to the program had found the services to be quite beneficial. Officials at Langley Air Force Base told us that both civilian and military personnel use the agency's employee assistance programs, which were designed to meet the needs of various employee groups. Child development centers and child care assistance. Many federal agencies provide on-site or near-site child development centers to help employees with child care needs. Civilian federal agencies recently obtained authority through federal statute to use appropriated funds from salaries and expenses to assist their lower income employees with the cost of child care. Agencies can also assist their employees with information about other organizations that can help employees locate quality child care services. At some of the field locations we visited, agencies provided on-site or near-site child care for their employees. Agency and union officials said that this assistance greatly aids employees in focusing on their job responsibilities by providing more reliable child care, and that reliable child care often results in fewer employee absences. A national union representative pointed out that child care subsidies have allowed agencies to retain employees and save money because they do not have to train new staff members. According to OPM, there are approximately 1,000 work-site child care centers sponsored by civilian and military agencies in the federal government. Transportation subsidies. In April 2000, an executive order was signed that required all federal agencies to implement a transportation fringe benefit program for their employees. This transit subsidy program was designed to encourage federal employees to use mass transportation for commuting to and from work to reduce traffic congestion and air pollution. Federal agencies in the national capital region were required to implement a "transit pass" program by providing eligible employees with subsidies in the form of subway farecards. Agencies generally have the flexibility to make this program available to their employees nationwide and can provide employees with transit passes of up to $100 per month for each employee who uses public or vanpool transportation. Many supervisors and union representatives we interviewed said that this transit subsidy is highly valued by employees. Officials in the San Francisco Bay Area made particular note of the benefits of using public transportation given the traffic congestion in the area. While many agency managers and supervisors, human resources officials, and union representatives supported the effectiveness of work-life programs, our discussions of telecommuting with these officials brought about strongly mixed opinions. Telecommuting, also referred to as telework or flexiplace, involves work arrangements that allow an employee to work away from the traditional work site, either at home or at another approved location.
Often-cited potential benefits for agencies to establish telecommuting programs include improved recruiting and retention of employees, increased productivity, and a reduced need for office space. Cited reasons for employees to participate in such programs include the opportunity to reduce commuting time; lowered personal costs in areas such as transportation, parking, food, and wardrobe; and improvement in the quality of work-life and morale because they are able to balance work and family demands. An MSPB survey conducted in 2000 found that 47 percent of federal employees considered telecommuting important to them personally and that 20 percent had it available to them. Several managers and supervisors we interviewed, however, said that telecommuting has not been shown to increase employee productivity, and that it is often complicated to manage an employee who is working "out of sight." According to these agency officials, in many cases it is more difficult to judge the quality of the employee's work in a telecommuting environment, while in other cases the quality of the work can decline if the employee is not mature in using this flexibility. In addition, with telecommuting, the office often loses some sense of teamwork and continuity, and sometimes significant logistical obstacles must be overcome. Further, telecommuting is not practical for all occupations or situations. Yet, other agency managers and numerous union representatives said that telecommuting can be an effective flexibility if used appropriately. Union representatives at GSA in Philadelphia, for example, said that agency managers should focus on employee productivity and results rather than the need to simply observe the employee working. These views mirror those found in our 1997 report reviewing the use of telecommuting (i.e., flexiplace) in the federal government. During that review, agency officials and union representatives we interviewed cited management resistance as the largest barrier to implementing flexiplace programs. Agency officials had informed us that they had had some success in overcoming management resistance by training supervisors or by exposing them to telecommuting arrangements. At the request of the Chairman, Subcommittee on Technology and Procurement Policy, House Committee on Government Reform, we are undertaking an assessment of federal telecommuting policies and programs. Agency and union officials also cited monetary recruitment and retention incentives as highly effective in managing their agencies' workforces. Agencies generally offer these types of monetary incentives to employees based on employee qualifications, special needs of the agencies, or difficulties in filling positions. These flexibilities include the following. Superior/special qualifications appointments. Using this flexibility, agencies can set base pay for newly appointed individuals above step 1 of the various grade levels based on the superior qualifications or highly specialized skills of the candidates or special needs of the agency. Agency officials said that this flexibility was especially effective because it allows agencies more control over entry-level salaries and permits agencies to match the prior salaries of new hires coming from the private sector. For example, IRS supervisors in Oakland told us that this hiring flexibility had helped their office in matching salaries of employees hired from the dot-com industry.
GSA human resources officials in San Francisco said that this appointment authority had greatly assisted their office in hiring about 30 employees over the last 3 years. Officials from the Mint’s headquarters information technology office said this pay incentive had helped in hiring highly skilled information security personnel at the GS-13 and GS-14 levels. Recruitment bonuses. A recruitment bonus is a lump-sum payment of up to 25 percent of basic pay that an agency may pay to an employee newly appointed to a position that would otherwise be difficult to fill. In return, the employee must sign an agreement to fulfill at least 6 months of service with the agency. A senior human resources manager at one department, for example, told us that her department had instituted over 1,000 recruitment bonuses (averaging about $5,000 each) to attract new hires. She said that the department typically hired new employees only at the GS-7 level and thus relied on these recruitment bonuses to augment starting pay, particularly for hard-to-fill scientific and technical positions. Relocation bonuses. A relocation bonus is a lump-sum payment of up to 25 percent of basic pay that an agency may pay to a current employee who must relocate to a position in a different commuting area that would otherwise be difficult to fill. In return, the employee must sign a service agreement with the agency. Another senior human resources manager, for example, told us that his agency uses relocation bonuses to assist certain employees who are required to move every 3 years to limit potential conflicts of interest in their sensitive positions. He said that without the relocation bonus, these employees would often lose money when they move, resulting in significant morale problems. Retention allowances. A retention allowance is a continuing (i.e., biweekly) payment of up to 25 percent of basic pay that an agency may pay to help retain an employee. The agency must determine that (1) the unusually high or unique qualifications of the employee or a special need of the agency for the employee’s services makes it essential to retain the employee and (2) the employee would be likely to leave the federal government in the absence of a retention allowance. In addition, an agency may offer retention allowances to a group or category of employees. Agencies must annually review and certify the allowances, which allows the agencies to terminate the incentive payments when no longer deemed necessary. One senior human resources manager told us, for example, that her department often uses retention allowances to help retain certain specialized employees who are frequently approached by recruiters from private industry and state governments. Although agencies generally use retention allowances to retain highly qualified employees, State also uses this flexibility to build employee competencies. In 1998, when planning for its information technology requirements, State determined that it needed to address the difficulty of attracting and keeping the highly qualified technical workforce necessary to carry out its mission of providing support and coordinating the activities of all U.S. government agencies abroad. As such, it implemented a technology skills development program to attract and retain employees with certain technological skills by granting them retention allowances for obtaining job-related degrees and certifications. 
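As a rough illustration of the percentage-of-basic-pay caps described above for recruitment bonuses, relocation bonuses, and retention allowances, the sketch below computes the maximum payments for a hypothetical employee. The salary figure and the 26-pay-period assumption are placeholders for illustration, not data from this report.

```python
# All three incentives are stated as "up to 25 percent of basic pay."
BONUS_CAP = 0.25           # recruitment and relocation bonuses (lump sum)
ALLOWANCE_CAP = 0.25       # retention allowance (continuing, paid biweekly)
PAY_PERIODS_PER_YEAR = 26  # assumed biweekly pay calendar

def incentive_caps(basic_pay: float) -> dict:
    """Return the maximum payments for an employee with the given annual basic pay."""
    lump_sum_max = basic_pay * BONUS_CAP
    return {
        "max_recruitment_bonus": lump_sum_max,
        "max_relocation_bonus": lump_sum_max,
        "max_retention_allowance_biweekly":
            basic_pay * ALLOWANCE_CAP / PAY_PERIODS_PER_YEAR,
    }

# Hypothetical employee with $60,000 annual basic pay.
for name, amount in incentive_caps(60_000).items():
    print(f"{name}: ${amount:,.2f}")
```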
Under the program, State also paid for training courses leading up to certification but not the examinations to obtain the credentials. According to State, it has granted over $4 million in total retention allowances under this skills development program. The number of information technology employees with degrees or certifications increased from 133 in 1999 to 583 in 2001. As part of its evaluation of the skills development program, State surveyed the participants and supervisors involved in the program. Approximately 61 percent of the employees who participated in the program (335 out of 547) responded to the 2001 survey. The 2001 survey showed that 80 percent of the responding participants agreed that receiving the retention allowance played a substantial role in their decision to work at the department, and 90 percent agreed that receiving the allowance played a substantial role in their decision to remain at the department. Agency and union representatives frequently noted that special hiring authorities available to federal agencies can also be particularly effective in assisting agencies to appoint needed employees. These hiring authorities allow agencies to hire employees without going through the standard federal hiring process, often resulting, according to managers, in shorter hiring times, less onerous paperwork, and more flexibility in selecting the job candidates who managers believe are most qualified. These special hiring flexibilities include the following. Student educational employment program. The student employment program allows agencies to appoint graduate, undergraduate, vocational, technical, associate degree, and professional degree seeking students who are enrolled or have been accepted for enrollment in at least a part-time schedule at an accredited institution. Some of these student employees are eligible to receive tuition assistance and, upon completion of their academic work, may be eligible for conversion to permanent jobs with the agency. A senior human resources manager at one department said that the student employment program allows agencies to develop professional relationships with students while they are still in school, which makes it easier to hire them when they are looking for permanent employment. GSA officials said they had hired 110 students under this program in the last 3 years and noted that the agency has done well at retaining these employees after they completed their academic work. Air Force officials told us that given the agency’s downsizing environment of the past decade, the Air Force had only recently reestablished its student employment program but that the program has been successful in bringing in new employees who, thus far, tend to stay with the agency. Outstanding scholar program. The outstanding scholar program supplements the standard competitive hiring process by allowing agencies to hire outstanding college graduates for certain entry-level occupations at grades GS-5 and GS-7. Agency officials we interviewed said that because agencies using the program are not required to rate and rank candidates for these positions, the hiring process can be shortened. For example, supervisors at GSA in Philadelphia told us that outstanding scholar hiring authority is beneficial because it allows the agency to hire more quickly. 
Although these and other agency officials strongly supported the use of this program, concerns have been raised by some about the degree of discretion this program provides in allowing agencies to circumvent the standard competitive hiring process. For example, in a January 2000 report, MSPB noted that the hiring authority under the outstanding scholar program was originally intended to be used as a short-term supplemental hiring tool. The program was established in 1981 in response to a civil lawsuit challenging the federal government's use of a written test for entry-level professional and administrative jobs because of that test's adverse impact on African Americans and Hispanics. Although the program is aimed at addressing underrepresentation of African Americans and Hispanics, the program has never been restricted to those designated minority groups. In its report, MSPB recommended that this hiring authority be abolished and that merit-based hiring be restored to this group of federal jobs. In its comments on a draft of our report, OPM cautioned that although some agency officials we interviewed may have viewed this program as providing broad authority to use noncompetitive hiring procedures, agencies are to use this program only as a supplement to competitive examining. OPM stressed that agencies must have an established pattern of competitive selection into the covered occupations before agencies can use the program. Veteran-related hiring authorities. During our review, several agency and union officials also noted the benefits of two veteran-focused hiring authorities. Veterans Readjustment Appointment (VRA) authority allows agencies to noncompetitively appoint eligible veterans to otherwise competitive positions at any grade level through GS-11 or equivalent. After the veteran completes 2 years of satisfactory service, the employing agency must then noncompetitively convert this VRA appointee to permanent status in the competitive federal service. Veterans Employment Opportunities Act (VEOA) authority allows agencies to obtain a wider pool of job applicants by permitting agencies to accept job applications from eligible veterans for certain positions that would typically be open only to individuals with competitive status. Veterans who submit job applications under this VEOA authority could then be selected for the positions under standard competitive procedures. Supervisors of wage-grade employees at GSA's Philadelphia region said, for example, that using VRA authority had been effective in assisting the region in quickly hiring highly qualified veterans. GSA human resources officials in San Francisco told us that VEOA had been effective in facilitating the hire of 24 veterans over the last year. Agency and union officials also frequently mentioned the effectiveness of granting incentive awards to employees. The intent of the incentive awards program is to provide appropriate motivation and recognition for excellence in job performance and contributions to an agency's goals. Incentive awards, which can be either monetary or nonmonetary, include the following. Performance awards are lump-sum cash awards that reward employees for fully successful or better job performance as defined by formal performance appraisals. Awards can be up to 10 percent of an employee's basic pay, or up to 20 percent for exceptional job performance.
Special act or service awards are lump-sum cash awards for specific accomplishments that contribute to the efficiency, economy, or other improvement of government operations. Agencies may grant up to $10,000 without external approval, up to $25,000 with OPM approval, and in excess of $25,000 with Presidential approval. Quality step increases (QSI) are permanent pay increases for outstanding performance as shown on formal job performance appraisals. QSIs are granted by providing employees with faster-than-normal progression through the stepped rates of the General Schedule. Time-off awards are awards that grant employees time off from duty without charging their annual leave or requiring that they forgo pay. These awards allow employees to take time off from work when it is most convenient for both the agencies and the employees. Group incentives include cash awards granted to employees based on (1) increases in productivity or decreases in costs (i.e., gainsharing) or (2) achievement of specified goals that enhance the success of the organization's mission (i.e., goalsharing). These incentives are designed to foster teamwork and promote innovation and continuous improvement. Honorary and informal recognition awards are awards such as trophies, plaques, certificates, and other tangible incentives. These awards give supervisors maximum flexibility to be creative in how they recognize employees. Agency and union officials provided us with numerous examples of their use of incentive awards as effective flexibilities. For example: Officials at GSA said that GSA had used its awards program effectively to recognize and motivate employees and that the agency had delegated approval for authorizing awards to appropriate levels within the agency. GSA's fast-track awards program, for example, allows managers and supervisors to log onto GSA's intranet system and complete the administrative work for the award within minutes. At VBA in Philadelphia, supervisors noted that offering movie tickets and restaurant coupons to employees was a good way to show appreciation for employees' performance and contributions. U.S. Mint officials said that they reward and recognize employees through on-the-spot awards, time-off awards, and gainsharing. At the Mint in San Francisco, managers mentioned that they have used employee recognition day to boost morale by providing awards that are of nominal monetary value but that are symbolically significant, such as T-shirts. State's Information Resource Management (IRM) Bureau officials said that their quarterly awards process allows supervisors to recognize and reward employees in a more timely fashion, rather than waiting until the annual job performance appraisal process. IRS managers in Philadelphia mentioned that the agency provides data conversion employees with incentive pay tied to quality and production, noting that this award has helped to motivate these employees to accomplish their job tasks more quickly and accurately. We identified five categories of additional flexibilities that agency officials and union representatives most often cited as potentially helpful in managing their workforces if such flexibilities were authorized for agencies. Specifically, these categories include more flexible pay approaches, greater flexibility to streamline and improve the federal hiring process, increased flexibility in addressing employees' poor job performance, additional workforce restructuring options, and expanded flexibility in acquiring and retaining temporary employees.
These suggestions by agency officials and union representatives provide a starting point for executive branch decision makers and Congress to consider as they seek to reform federal human capital policies and practices. Although we have not analyzed the validity of the suggestions, the categories are consistent with the authorities that we have established at GAO and have been urging for other federal agencies. The GAO Personnel Act of 1980 and our 2000 legislation included some of the proposed additional flexibilities. The most prominent change in human capital management that we implemented as a result of the GAO Personnel Act of 1980 was a broadbanded pay-for-performance system that bases employee compensation primarily on the knowledge, skills, and performance of individual employees. It provides managers flexibility to assign and use employees in a manner that is more suitable to multitasking and the full use of staff. Importantly, careful design and effective implementation are crucial to obtaining the benefits of broadbanding in an equitable and cost-effective manner. Also, as a result of the 1980 Act, the Comptroller General has the authority to hire, on a noncompetitive basis, up to 15 experts and consultants at any level, including senior executives, with renewable terms up to 3 years each. GAO has used this authority in selected cases and found it to be valuable in filling critical time-sensitive positions within the agency. Our October 2000 legislation gave us additional tools to realign our workforce in light of mission needs and overall budgetary constraints; to correct skills imbalances; and to reduce high-grade, managerial, or supervisory positions without reducing the overall number of employees. To address any or all of these three situations, we were given authority to offer voluntary early retirement and voluntary separation incentive payments to our employees until December 31, 2003. This legislation also allowed us to create a technical and scientific career track at a compensation level comparable to senior career executives and to give greater consideration to performance and employee skills and knowledge in any reduction-in-force actions. Aspects of these authorities were also included in the recently enacted Homeland Security Act of 2002, which created the new Department of Homeland Security. In addition to providing the President with additional authority to create new policies for managing the workforce within the new department, the legislation includes provisions that authorize agencies across the federal government to use additional personnel flexibilities. For example, agencies will now be permitted to offer buyouts to their employees without the requirement to reduce their overall number of employees. This change will provide agencies the opportunity to more easily restructure their workforces to correct skills imbalances related to those employees whose jobs have become obsolete or whose skills are no longer needed. The legislation also permits agencies to use a more flexible approach in the rating and ranking of job candidates during the hiring and staffing process. Using this alternative approach can expand the number of qualified candidates that a selecting official could choose from when filling a position, as the sketch following this paragraph illustrates. In addition, under the legislation agencies will need to incorporate workforce planning into their strategic plans and appoint "chief human capital officers" to oversee workforce management.
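The more flexible rating-and-ranking approach just mentioned is commonly known as category rating. Under the traditional approach, a selecting official generally chooses from the three highest-rated candidates; under category rating, every candidate placed in the top quality category is eligible for selection. The candidate names, scores, and cutoff below are invented for illustration only, and the sketch omits details such as veterans' preference.

```python
# Hypothetical candidates with numerical ratings (higher is better).
candidates = {"Avery": 98, "Blake": 95, "Casey": 94, "Drew": 93, "Ellis": 88}

def rule_of_three(candidates):
    """Traditional ranking: referral limited to the three highest-rated candidates."""
    ranked = sorted(candidates, key=candidates.get, reverse=True)
    return ranked[:3]

def category_rating(candidates, best_qualified_cutoff=90):
    """Alternative approach: every candidate in the top quality category is referred."""
    return [name for name, score in candidates.items()
            if score >= best_qualified_cutoff]

print("rule of three:  ", rule_of_three(candidates))    # 3 names
print("category rating:", category_rating(candidates))  # 4 names: the pool expands
```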
Additional analysis may be needed to ensure that any new personnel authorities that are granted and implemented are consistent with a focus on results, merit, and other important federal employment goals. As we have noted in previous reports and testimonies, comprehensive legislative reform of the civil service will likely be necessary to address the federal government’s human capital challenges; however, the consensus necessary to make this a reality has yet to be achieved. Such reform could provide a broader range of federal agencies with a more standard set of human capital tools and flexibilities to manage their workforces. Ultimately, in undertaking any civil service reform, policymakers will likely want to consider the potential needs of individual agencies along with the governmentwide need to manage competition between agencies for skilled employees. Because human capital flexibilities entail greater decentralization and delegation of human capital authorities and fewer rules, the protection of employees’ rights under these conditions can be challenging. The managers and supervisors and human resources officials we interviewed generally believed that additional human capital flexibilities could be authorized and implemented in their agencies while also ensuring protection of employees’ rights. Union representatives we interviewed, on the other hand, had mixed views on the ability of agencies to protect employee rights with the increased discretion that additional flexibilities would give to agency managers. Some union representatives responded positively when asked if agencies could give managers additional flexibilities while protecting employees’ rights. Several union officials, however, said that managers could more easily abuse their authority when implementing these additional flexibilities and that agency leaders often do not take appropriate actions in dealing with abusive managers. According to the agency and union officials we interviewed, one of the most effective ways to ensure protection of employees’ rights when implementing these flexibilities is making certain that supervisors and employees are fully aware of the available flexibilities, the procedures to use them, and the associated rights and responsibilities of both managers and employees when using them. Clear guidelines for consistently applying the flexibilities and straightforward explanations from managers about how and why they made decisions are essential, according to some of the individuals we interviewed. The consensus of agency officials, with some union representatives agreeing, was that putting personnel authority in the hands of agency managers through human capital flexibilities will not affect employee protection as long as managers are held directly accountable for their personnel decisions. In our previous work, we recognized the importance of involving employee unions when agencies propose major changes in the work environment that may be of particular concern to the unions. We found that obtaining union cooperation and support through effective labor-management relations can help achieve consensus on the planned changes, avoid misunderstandings, and more expeditiously resolve problems that occur. When agencies and employee unions maintained an ongoing working relationship in an environment of trust and openness, agencies and unions were able to work cooperatively even in the face of significant change. 
For example, both IRS and National Treasury Employees Union officials credited the excellent working relationship they had developed over the last decade with helping the reorganization of IRS. One IRS official, for example, stated that it is important to involve the union as a part of the discussions about flexibilities because the union is sometimes more effective than agency managers in communicating with employees. Agency managers and supervisors also cited the importance of securing a close working relationship with the agency's human resources officials in the protection of employee rights. Officials commented that human resources officials are often good sources of information about flexibilities and effective monitors of potential problems involving their use. According to several supervisors we interviewed, this assistance and monitoring by human resources officials, along with managers' and union representatives' efforts to keep each other honest, help to ensure that employee protection can coexist with the use of human capital flexibilities. Based on our interviews with human resources directors across the federal government and our related human capital work, we identified six key practices that agencies can implement for effectively using human capital flexibilities. These practices are (1) planning strategically and making targeted investments, (2) ensuring stakeholder input in developing policies and procedures, (3) educating managers and employees on the availability and use of flexibilities, (4) streamlining and improving administrative processes, (5) building transparency and accountability into the system, and (6) changing the organizational culture. We confirmed the importance of these practices in our discussions with managers and supervisors, human resources officials, and local union representatives at the seven agencies we selected for more detailed review. We also identified relevant examples of the use of these key practices from the seven agencies. The following is a more detailed discussion of these practices along with examples we identified. With strong commitment on the part of their leadership, federal agencies need to ensure that the use of human capital flexibilities is part of an overall human capital strategy clearly linked to the program goals of the organization. Agencies need to plan for how they will use and fund these authorities, what results they expect to achieve, and what methods they will use to evaluate actual results. Our review found that a significant reason why managers and supervisors had not made greater and more effective use of existing human capital flexibilities was agencies' weak strategic human capital planning and inadequate funding for using these flexibilities given competing priorities. Such a strategic focus would allow for answering critical questions such as whether current staff and resources are sufficient; whether they are being allocated in a manner best suited to promote mission accomplishment; and, ultimately, whether agencies and Congress may wish to consider a variety of targeted investments or new human capital flexibilities in the future. The following are elements and examples of planning strategically and making targeted investments from the seven agencies we reviewed. Obtain agency leadership commitment. Top leadership commitment is crucial to instilling a common vision across the organization and creating an environment that is receptive to innovation.
In earlier reports and testimonies, we observed that top leadership plays a critical role in creating and sustaining high-performance organizations. We also highlight the importance of top leadership commitment in our recently issued model of strategic human capital management, in which we note that political leaders and senior career executives demonstrate this commitment by personally developing and directing reform, driving continual improvement, and characterizing the agency's mission in reform initiatives. At IRS, for example, Commissioner Rossotti's efforts demonstrated a clear case of leadership's commitment to change. As mandated by Congress in the IRS Restructuring and Reform Act, the Commissioner articulated a new mission for the agency, together with support for strategic goals that balance customer service and compliance with tax laws. The Commissioner led a modernization effort that touched virtually every aspect of IRS, including implementation of IRS's newly authorized personnel system and the additional human capital flexibilities that accompanied it. Determine agency workforce needs using fact-based analysis. Federal agencies often have not gathered and analyzed the data required to effectively assess how well their human capital approaches have supported results. High-performing organizations identify their current and future human capital needs, including the appropriate number of employees; the key competencies for mission accomplishment; and the appropriate deployment of staff across the organization. For example, in 1998 the Air Force Materiel Command (AFMC), the largest employer of civilians in the Air Force, began a two-phased workforce study designed to tailor its human capital to meet future business needs. AFMC's planning efforts, as documented in its April 2000 study called Sustaining the Sword, involved an assessment of the current and projected 2005 workforce by workforce mix, skills, skill levels, and demographics and then a more detailed position-level analysis of workforce data from AFMC locations. AFMC reported that these data and the results of its workforce shaping activities led to a more informed understanding of workforce gaps, for which corrective strategies could then be developed. Develop strategies that employ appropriate flexibilities to meet workforce needs. After identifying current and future workforce needs, agencies ought to develop effective strategies that fill the gaps. In developing these strategies, agencies should assess which human capital flexibilities might work best given current and future needs. For example, in 2000 the Mint created a "human resources flexibilities team" to assess the agency's current and future use of existing human capital flexibilities. This initial assessment, as outlined in a December 2000 report, revealed that the Mint had pursued a number of key flexibilities but had not done so uniformly across its organizational, occupational, and grade-level structures. In its report, the Mint assessed over 80 disparate human capital flexibilities and developed specific plans to use each of the flexibilities that had not been used or that required immediate attention for full use. Make appropriate funding available. After developing strategies, agencies need to assess the associated costs of using any human capital flexibilities as part of these strategies. Such assessments will allow agencies to better plan for the use of these flexibilities and to ensure that appropriate funding is available when needed.
Air Force, for example, developed a comprehensive, multiyear funding plan to implement its Civilian Personnel Management Improvement Strategy (CPMIS), which comprises 28 separate human capital initiatives grouped into the areas of accession planning, workforce development, retention/separation management, and support activities. Under accession planning, for instance, one initiative calls for the Air Force to expand its use of the "3Rs" (recruitment bonuses, relocation bonuses, and retention allowances) to sustain necessary skills in the civilian workforce. Beginning in fiscal year 2004, the Air Force projects offering approximately 1,300 recruitment bonuses annually at an average cost of $11,250, approximately 650 relocation bonuses at an average cost of $10,000, and approximately 650 retention allowances at an average cost of $9,000. (See table 2.) Agency leaders, managers, employees, and employee unions need to work together to identify and effectively implement human capital flexibilities. Engaging all of the stakeholders in developing policies and procedures for the use of flexibilities helps in reaching agreement on the need for change, the direction and scope that change will take, and how progress will be assessed. Stakeholder input should also be used to ensure that the policies surrounding the use of flexibilities are clear and the procedures to implement them are uncomplicated. The following are elements and examples from our seven selected agencies on how they ensured stakeholder input in developing human capital flexibility policies and procedures. Engage the human capital office. Because flexibilities influence the entire human capital system, human capital professionals are needed to supply the energy and expertise in helping to develop policies and procedures on the use of flexibilities. As noted in our model of strategic human capital management, this assistance requires the expansion of the role of human capital professionals from largely paperwork processors to functioning as advisors to and partners with senior leadership and managers. By transforming from focusing largely on transactions to more on total customer service, the role of the human capital office in facilitating the use of flexibilities will become increasingly important. GSA's Philadelphia regional office, for example, established a Human Resources Council, which is composed of the human resources director and representatives of various GSA offices, to discuss human capital policies and practices in the region, such as alternative work arrangements and incentive awards. In another example, State's IRM Bureau directly involved human capital professionals in its working group that crafted its skills development program to provide retention allowances (ranging from 5 to 15 percent) to certain information technology workers who obtain job-related degrees and certifications. Engage agency managers and supervisors. Soliciting the input of managers and supervisors on how best to implement human capital flexibilities is a key component for their successful use. Because managers and supervisors are virtually certain to be negatively affected by unclear policies and procedures, their perspectives on how to make strategic use of flexibilities, while avoiding potential problems caused by poor implementation, are essential.
To address the potential problems of limited input, for example, 160 frontline managers from GSA's central and regional offices convened in four sessions in March 2001 to exchange information about effective workforce-related practices using many of the flexibilities already available to the agency. This effort resulted in a catalog of "best practices" that their offices had implemented in the areas of recruiting and orienting employees, engaging existing employees, and developing leaders. Involve employees and unions. As with any significant change in the workplace, involving employees and unions in decisions to use human capital flexibilities increases employees' understanding and acceptance of the objectives for implementing change, helps to avoid misunderstandings, and can assist in more expeditiously resolving problems that might occur. While frontline employees can help ensure a more operationally oriented perspective on the use of flexibilities, obtaining union cooperation and support through effective labor-management relations can help achieve consensus on the changes accompanying their use. For example, the Mint and VBA made changes to employee work schedules based on input from employees in open forums. At a "town hall" meeting at the Mint's San Francisco coin-making plant, employees (with assistance from the local union) were able to vote on various options for implementing an alternative work schedule for the facility. At a "listening post" session at VBA's regional office in Philadelphia, employees offered input to change the operating hours of the facility's phone operations. Use input to establish clear, documented, and transparent policies and procedures. After obtaining sufficient input from key players, agencies need to develop and implement human capital flexibilities using clear, documented, and transparent policies and procedures. This practice is essential to ensuring that they are used fairly and, at the same time, are not encumbered with so many administrative burdens that they lose their value as flexibilities. Agencies can take various steps to ensure that policies and procedures are clear and uncomplicated. For example, the Mint's Office of Chief Financial Officer hired a writer-editor to assist the agency in writing personnel-related policies and procedures in "plain English." As an example of developing uncomplicated policies and procedures, GSA officials provided us with a merit promotion plan that had been reduced from 75 to 5 pages. Agencies need to ensure that they have an effective campaign not only to inform agency managers and employees of their personnel authorities, but also to explain the situations where the use of those authorities is appropriate. Our work at the seven agencies showed that the lack of awareness and knowledge of human capital flexibilities was one of the most significant reasons why federal managers and supervisors have not made better use of these flexibilities. In some cases, senior managers might not know that such flexibilities were already available to their agencies. In other cases, agency leaders or parent departments might place restrictions on the use of a flexibility (either strategically or haphazardly) and then not communicate the source and reasons for such restrictions to line managers and supervisors within the agency. Educating managers and employees goes a long way in ensuring effective use of these flexibilities across the federal government.
The following are elements and examples of how agencies educated managers and employees on the availability and use of human capital flexibilities. Train human capital staff. Traditionally, what has been called the personnel or human resources function has often been viewed as strictly a support function involved in administering personnel processes and ensuring compliance with rules and regulations. As human capital professionals take a more consultative approach to their jobs, they will need not only knowledge of and expertise in the full range of human capital flexibilities available but also skills in communicating this information to their clients in the agencies they serve. For example, GSA held a conference in September 2000 for its human resources staff members to increase their knowledge of emerging human capital issues and to improve their skills in responding to the needs of clients throughout the agency. According to a senior human resources manager at GSA, the conference included a presentation and discussion of the human capital flexibilities available for use within the agency. Educate agency managers and supervisors on existence and use of flexibilities. Ultimately, the flexibilities within the personnel system are only beneficial if the managers and supervisors who would carry them out are actually aware of their existence and of the best manner in which they could be used. Educating managers and supervisors is key to ensuring that agencies use all of the tools and flexibilities needed to manage their workforces to accomplish agency missions and achieve goals. For example, AFMC developed and distributed a Supervisor’s Guide to Work Force Planning to educate agency managers and supervisors on numerous flexibilities available to attract and retain quality employees. GSA’s Philadelphia office has educated its supervisors on human capital flexibilities with its “Human Resources Solutions Series” training, which includes topics such as employee leave and work schedules, options for dealing with performance and conduct problems, and balancing managerial flexibility and accountability under merit system principles. Inform employees of procedures and rights. In previous work, we have highlighted the importance of informing employees of personnel-related policies and procedures and their rights under them. This communication helps in minimizing employee confusion and apprehension and ensuring that flexibilities are implemented fairly within and across the organization. Agencies can use a variety of methods to communicate this information. For example, GSA’s human resources manager in Philadelphia said that most updates concerning employee rights and procedures are communicated via GSA’s intranet Web site. The office also distributes an employee newsletter with information about related personnel policies and procedures. Agencies also need to streamline and improve administrative processes for using flexibilities and review self-imposed constraints that may be excessively process oriented. Indeed, our interviews with agency managers and supervisors revealed that they viewed burdensome and time-consuming approval processes as a significant reason why they did not make better use of available human capital flexibilities. Although sufficient controls are important to ensure consistency and fairness in using flexibilities, agency officials should look for instances in which processes can be reengineered.
This reengineering of processes for using flexibilities can assist the agencies in increasing efficiencies, decreasing costs, or both. In this effort, agency managers need to bear in mind that they should first determine requirements and design processes before developing any information systems to support the new processes. The following are elements and examples of how the agencies streamlined and improved administrative processes. Ascertain the source of existing requirements. As we have previously reported, some of the barriers to effective strategic human capital management in the federal government do not stem from law or regulation but are self-imposed by agencies. The source of these barriers can sometimes be a lack of understanding of the prerogatives that agencies have. For example, the head of State’s office responsible for overseas building operations asked OMB in May 2001 for a series of increased flexibilities to accomplish various personnel management goals. In its response, OMB noted that the department already had the authority to implement many of these requested changes. In another example, personnel policy at the Mint had required that job vacancy announcements for certain positions be publicly posted for at least 30 calendar days. OPM, however, generally allows agencies the flexibility to post such announcements for as few as 5 business days. Mint officials told us that the Mint’s parent agency, the Department of the Treasury, had initially established this 30-day posting requirement and that the Mint’s original policy had been drafted to conform with Treasury’s. After Mint officials realized that this 30-day requirement flowed from its parent department, the Mint was able to modify the policy to require a minimum of only 5 business days for posting these job announcements. Reevaluate administrative approval processes for greater efficiency. In our interviews at the selected agencies, some managers and supervisors complained about the lack of time to initiate and implement the justification and approval processes that agencies have in place to use existing flexibilities. If senior managers within the agency want supervisors to use these flexibilities, supervisors must view the required initiation and approval processes as worth their time compared to the expected benefit to be gained in using the flexibility. In simplifying processes to provide for greater efficiencies and improved quality and responsiveness, agencies have often turned to automation of paper-based personnel processes and procedures. For example, managers and supervisors at GSA’s Philadelphia office cited the agency’s recently automated processes for granting employees on-the-spot cash awards (ranging from $50 to $2,000). Previously, agency supervisors were required to complete lengthy justifications and send these forms to the personnel office for review. According to the human resources manager, the perceived burdens of the previous administrative process led to very few awards being granted. Now, according to GSA managers and supervisors, by accessing GSA’s intranet Web site, an agency supervisor can complete the award initiation process within minutes and on the next business day receive a certificate to present to the employee that shows what the award is for and when the employee can expect the money in his or her paycheck. Replicate proven successes of others.
When developing processes and procedures for using flexibilities, agencies can potentially learn valuable lessons from other agency components or from other organizations altogether. These lessons learned could be instructive in developing ways to best implement such flexibilities along with determining which flexibilities are most effective. For example, officials at VBA’s Oakland office informed the agency’s Philadelphia office of the success they had in using the student cooperative program to recruit needed staff members for the office. This special hiring authority, called the Student Career Experience Program, allows agencies to appoint students who are enrolled or have been accepted for enrollment at least part-time at accredited institutions. After completing their academic requirements, these employees can then be converted noncompetitively to term or permanent positions within 120 days. To ensure effective use of human capital flexibilities, agencies need to delegate authority to use these flexibilities to appropriate levels within the agency, and then agency managers and supervisors need to be held accountable—both for achieving results and for treating employees fairly. Agency managers and supervisors are more likely to support changes when they have the necessary authority and flexibility—along with commensurate accountability and incentives—to advance the agency’s goals and improve performance. Indeed, devolving decision-making authority to program managers in combination with holding them accountable for results is one of the most powerful incentives for encouraging results-oriented management. However, achieving a proper balance between managerial flexibility and adequate controls to ensure consistency and accountability can be a challenging endeavor. Moreover, agencies that expect their managers and employees to take greater responsibility and be held accountable for results must ensure that the managers and employees have the training and tools they need to fulfill these expectations. The following are elements and examples from the agencies we reviewed of how they built transparency and accountability into their human capital systems. Delegate authority to use flexibilities to appropriate levels within the agency. In a recent report, we found that only about one-third of agency managers we surveyed from 28 agencies believed that they had, to a great or very great extent, the authority they needed to help accomplish agency goals. Providing managers and supervisors with such authority gives those who know the most about an agency’s programs the power to make those programs work. This delegation of authority is equally important when implementing human capital flexibilities. For example, the Department of the Treasury delegated authority to IRS and its other bureaus to establish their own policies on superior qualifications appointments (SQA), a flexibility that allows agencies to hire individuals at advanced rates of pay based on the individuals’ superior qualifications or special needs of the agencies. To expedite timely approval in hiring situations, IRS in turn redelegated this approval authority for SQAs to the human resources officers within each of the agency’s business units. In another example, VBA in Philadelphia delegated authority to immediate supervisors to approve on-the-spot monetary awards for their employees without review by senior managers.
VBA supervisors said that under this delegated authority they simply complete a short form and present it to the employee, who can then proceed to the on-site credit union and receive cash, all within 1 hour. Hold managers and supervisors directly accountable. Agencies must develop clear and transparent guidelines for using flexibilities and then hold managers and supervisors accountable for their fair and effective use. Managers need to be held accountable for their contributions to results and recognized and rewarded for those contributions. Internal and external parties, such as agency human resources offices, offices of inspectors general, and OPM, can help to ensure transparency in the use of flexibilities through appropriate review and oversight. For example, according to the senior human resources official at GSA’s Philadelphia regional office, the human resources office monitors supervisors’ granting of employee awards to ensure that supervisors are effectively using this flexibility. The office can provide a list of award amounts and frequencies (without personal identifiers) to supervisors within the region so that they know how their use of such flexibilities compares with that of other regional supervisors. Apply policies and procedures consistently. While recognizing differences in each individual’s job performance and competencies, supervisors need to make concerted efforts to apply policies and procedures for using flexibilities consistently. Our review at the seven agencies showed that a significant reason why supervisors have not made greater use of flexibilities is their fear that some employees will view the use of various flexibilities as somehow unfair. The consistent application of policies and procedures helps to lessen employee fears because decision-making criteria are well defined, documented, transparent, and applied the same way in similar situations. For example, after newly hired IRS employees expressed concerns about possible inconsistencies, the agency developed guidelines for its managers to use in determining if a job applicant qualifies for a recruitment bonus. According to IRS officials, these guidelines helped to ensure consistent application of recruitment bonuses based on the specific backgrounds of new employees. Organizational culture represents the underlying assumptions, beliefs, values, attitudes, and expectations generally shared by an organization’s members. Because an organization’s beliefs and values affect the behavior of its members, changing the organizational culture related to outdated personnel-related approaches is crucial to effectively using human capital flexibilities. Changing this culture is particularly important in areas related to ensuring the involvement of senior human capital managers in key decision-making processes and decreasing managers’ and supervisors’ resistance to change. Agencies also need to address managers’ and supervisors’ concerns that employees will view the use of flexibilities as inherently unfair, and the belief that all employees must be treated essentially the same regardless of job performance and agency needs. By addressing such organizational culture issues, agencies can better assist managers and staffs in developing creative ways to employ tools and flexibilities to address human capital challenges. The following are elements and examples from the seven agencies we reviewed of practices they implemented to change their organizational cultures.
Ensure involvement of senior human capital managers in key decision-making processes. A fundamental reorientation is required to ensure that human capital leaders take a “seat at the table” as full members of the top management team rather than being isolated to provide after-the-fact support. By expanding the strategic role of human capital officials beyond providing traditional personnel administration services, agencies are in a better position to integrate human capital considerations when identifying the mission, strategic goals, and core values of the organization as well as when designing and implementing policies and procedures. The senior human capital manager at IRS, for instance, has been heavily involved in the agency’s recent restructuring initiative as well as its overall strategic direction. Recognizing the importance of this strategic role, he also recently reorganized the agency’s human resources office into three units; two are strategically focused and the third is transaction focused. Encourage greater acceptance of prudent risk taking and organizational change. Managers and supervisors need to have an appropriate attitude toward risk taking and proceed with new operations after carefully analyzing the risks involved and determining how they may be minimized or mitigated. Managers and supervisors will at times resist making changes because they would have to work in new and unfamiliar ways. Although managers and supervisors can initially be uncomfortable exercising newly delegated authorities, they will often gain confidence as they better understand their importance and become more experienced in exercising them. For example, IRS’s regional office in Oakland hired a consultant to conduct training for managers that promotes creative thinking, empowerment for decision making, and prudent risk taking. The training course is an ongoing process, with managers returning each year to ensure their continued comfort with and use of principles covered in the training. In another example, according to a senior human resources official in State, managers in the department’s Office of Logistics Management were initially hesitant to allow the use of alternative work schedules for employees in that office but finally accepted use of the flexibility when they realized that it would not drastically affect the office’s operations. Recognize differences in individual job performance and competencies. In previous work examining the practices of private sector organizations regularly cited as leaders in the area of human capital, we identified common principles of human capital management, including the importance of recognizing differences in employees’ job performance and competencies. Rather than follow the federal government’s traditional approach of compensating federal employees strictly based on their status at a particular grade level, agencies should look at using performance management systems, including pay and other meaningful incentives, to more clearly recognize individual job performance as well as employee competencies. In an example of recognizing differences in individual job performance, GSA’s Public Buildings Service (PBS) created a performance measurement and incentive awards system for its regional offices and its employees in its “Linking Budget to Performance” initiative. Under this initiative, each of PBS’s 11 regional offices strives to achieve preestablished goals for nine standard performance measures.
On the basis of each region’s performance, monetary incentives can be provided to reflect employees’ contributions to the region’s accomplishments. Furthermore, State’s use of retention allowances for employees who obtain job-related degrees and certifications in the information technology field demonstrates the recognition of differences in employee competencies. The insufficient and ineffective use of flexibilities can significantly hinder the ability of federal agencies to recruit, hire, retain, and manage their human capital. To deal with their human capital challenges, it is important for agencies to assess and determine which human capital flexibilities are the most appropriate and effective for managing their workforces. On the basis of our review at seven selected agencies, the most effective flexibilities cited were work-life policies and programs, monetary recruitment and retention incentives, special hiring authorities, and employee incentive awards. Our review at the seven selected agencies also found several categories of additional flexibilities that agency and union officials cited as being potentially helpful in managing their workforces. If such additional flexibilities are desired, agencies should develop business cases to justify the need for the authority to implement them. Although comprehensive civil service reform will likely be necessary to address the federal government’s human capital challenges, agencies need not wait in seeking additional flexibilities where clear business cases have been established. The appropriate and effective use of flexibilities is essential to ensuring that employees’ rights are protected, agencies adhere to merit system principles, and employees are shielded from prohibited personnel practices. To ensure the most effective use of human capital flexibilities, it is important that agencies (1) plan strategically and make targeted investments, (2) ensure stakeholder input in developing policies and procedures, (3) educate managers and employees on the availability and use of flexibilities, (4) streamline and improve administrative processes, (5) build transparency and accountability into their systems, and (6) change their organizational cultures. By more effectively using flexibilities, agencies would be in a better position to manage their workforces, assure accountability, and transform their cultures to address current and emerging demands. We provided a draft of this report on September 4, 2002, to the Director of OPM, the Secretary of Defense, the Commissioner of IRS, the Director of the U.S. Mint, the Secretary of Veterans Affairs, the Administrator of GSA, the Under Secretary for International Trade, and the Secretary of State. OPM, Defense, IRS, the Mint, VA, GSA, and ITA provided comments on the draft report. These agencies either generally agreed with the information presented or did not express an overall opinion about the report. In some cases these agencies provided written technical comments to clarify specific points regarding the information presented. Where appropriate, we have made changes to this report to reflect these technical comments. State did not provide comments on this report. The following summarizes significant comments provided by the seven agencies. In her written comments (see app.
II), the OPM Director noted that OPM was pleased that our report acknowledges the need for greater personnel flexibilities in cases where existing law constrains OPM in providing policies and programs to assist agencies in accomplishing their missions. In technical comments, OPM raised concerns, however, about our position that individual agencies could be authorized additional legislative flexibilities if they develop sound business cases showing that such flexibilities are needed. OPM stated that its obligation is to review and analyze all agencies’ requests to use additional flexibilities or create additional flexibilities to ensure that they promote the efficiency and effectiveness of the federal government and do not create an unfair competitive advantage for selected agencies. In this regard, OPM commented that it supports the need for a standardized approach to governmentwide flexibilities. As we noted in this report and in previous reports and testimonies, comprehensive legislative reform of the civil service will likely be necessary to address the federal government’s human capital challenges. We believe, however, that agencies need not wait in seeking additional flexibilities where clear business cases have been established for such flexibilities. It is possible that civil service reform could provide a broader range of agencies with a more standard set of human capital tools and flexibilities to manage their workforces. Ultimately, in addressing civil service reform, policymakers will likely want to consider the potential needs of individual agencies along with the governmentwide need to manage competition between agencies for skilled employees. We added a discussion of this issue to the report in the section dealing with agency and union officials’ views on authorizing additional flexibilities. In its technical comments, OPM also emphasized that the outstanding scholar hiring program can only be used as a supplement to competitive examining and should not be viewed as an “alternative” hiring authority. OPM expressed concern that we not recommend that agencies use this program for a purpose other than that for which it was intended. We noted in the draft report, however, that this program was intended to serve as a supplemental hiring tool. Our report states that many agency officials we interviewed viewed this program as effective because the program allows the agency to hire more quickly given that the agency does not have to rank and rate candidates as usually required under the standard competitive hiring process. Although OPM does not include the outstanding scholar program as an alternative hiring or staffing option in its Flexibilities Handbook, many of the agency officials we interviewed viewed this program as an effective flexibility, and the program meets the definition of human capital flexibility that we used in this report. Our report, however, does not recommend that agencies use this program to circumvent the standard examining process. As with many of the flexibilities available to agencies, the outstanding scholar program could be used in inappropriate or inefficient ways. As we note under key factors for effective use of flexibilities, agencies must build transparency and accountability into their human capital systems to ensure that managers and supervisors are held accountable for the fair and effective use of these flexibilities.
In response to OPM’s concerns on this issue, we added language to the report to emphasize that this program is to be used as a supplement to competitive hiring and to note OPM’s statement that agencies must have an established pattern of competitive examining into the covered jobs before agencies can use this program. Defense’s comments, provided by E-mail through its Office of Inspector General, did not express an overall opinion about the report. However, the comments noted that it appeared we were asserting in the report that telecommuting had been clearly shown to increase employee productivity. We noted in the draft report that our discussions with agency and union officials about telecommuting elicited strongly mixed views, including views on its effect on employee productivity and the challenges of managing such a program. Still, we changed the text to clarify that some managers and supervisors told us that telecommuting has not been shown to increase employee productivity and that telecommuting is not practical for all occupations or situations. In written comments (see app. III), the IRS Commissioner stated that he generally agreed with the list of available human capital flexibilities that agency and union officials cited as most helpful for managing their workforces. Nonetheless, he said that these flexibilities may not be as important in the long run as some of the more deep-rooted changes to human capital management policies and practices that agencies like IRS have undertaken recently to improve their workforces’ performance and accountability. He noted that IRS’s recently acquired statutory flexibilities, such as a broadbanding pay system and an expedited and flexible hiring process, were instrumental to achieving the agency’s transformation to a modern, business-like organization. He stressed the importance of providing additional flexibilities to federal agencies so that they can manage their workforces in a manner comparable to the private sector. In written comments (see app. IV), the Mint’s Director stated that the report provides an objective, balanced review and assessment of the issues surrounding the implementation of human capital flexibilities. She commented that the report would serve as a useful tool that policymakers could use to guide federal agencies seeking to employ greater flexibilities to manage their workforces. VA provided comments by E-mail through its GAO liaison. VA agreed with the information presented and had no additional comments on the draft report. GSA’s comments, provided by E-mail from its Office of Human Resources, were largely clarifying and technical in nature and did not express an overall opinion on the report. In a point similar to that made by Defense, GSA commented that our report should more fully draw attention to the drawbacks of telecommuting in our discussion of work-life programs. Again, we added clarifying text indicating that some managers and supervisors told us that telecommuting has not been shown to increase employee productivity and that telecommuting is not practical for all occupations or situations. In written comments from ITA (see app. V), the Under Secretary for International Trade said that the report thoroughly and comprehensively addresses the critical issue of the programs needed to manage the federal workforce. In addition, he emphasized the need for the additional flexibilities mentioned in the report.
We are sending copies of this report to the Chairman and Ranking Minority Member, House Committee on Government Reform, and its Subcommittee on Civil Service, Census and Agency Organization, and other interested congressional parties. We will also send copies to the Director of OPM, the Secretary of Veterans Affairs, the Secretary of State, the Secretary of Commerce, the Secretary of the Air Force, the Secretary of the Treasury, and the Administrator of GSA. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me or Edward Stephenson at (202) 512-6806. Key contributors to this report are listed in appendix VI. The objectives for this study were to provide information on agency officials’ and union representatives’ views on (1) the most effective flexibilities for managing their workforces, (2) additional flexibilities that would be the most helpful in managing their workforces, and (3) whether employee rights could be protected if additional flexibilities were authorized and implemented within agencies. We also sought to identify key practices that agencies should implement for effective use of human capital flexibilities, along with specific examples of such practices from selected agencies. To respond to the objectives of this report, we conducted this work in two phases and gathered information from a variety of sources using several different data collection techniques. During phase one of this review, which was conducted from May to December 2001, we first interviewed representatives from OPM, the federal government’s human resources agency; MSPB, a federal agency that hears and decides civil service cases, reviews OPM regulations, and conducts studies of the federal government’s merit systems; and NAPA, an independent, nonpartisan, nonprofit, congressionally chartered organization that assists federal, state, and local governments in improving their performance. We interviewed representatives of these three organizations to gather background information on the federal government’s experiences with and use of human capital flexibilities and to obtain suggestions about which federal agencies we should consider for a more detailed review during phase two of our study. We also reviewed numerous reports issued by these organizations on governmentwide human capital issues and the use of various human capital flexibilities in federal agencies. In addition, we reviewed previous GAO reports on a broad range of human capital issues. During phase one of this study, we also gathered information for our two objectives by conducting semistructured interviews with (1) the human resources directors of the 24 largest federal departments and agencies and (2) representatives from 4 national organizations representing federal employees and managers—National Treasury Employees Union, American Federation of Government Employees, National Association of Government Employees, and Senior Executives Association. To produce a general summary of the human resources directors’ views, we first reviewed their responses to the open-ended questions we had posed to them. Based on our analysis of those responses, we identified a set of recurring themes and then classified each director’s responses in accord with these recurring themes.
At least two staff reviewers collectively coded the responses from each of the 24 interviews, and the coding was verified when entered into a database we created for our analysis. During phase two of this study, which was conducted from January to May 2002, we conducted semistructured interviews with managers and supervisors, human resources officials, and local union representatives from seven federal agencies we selected for more detailed review—the Air Force, GSA, IRS, ITA, the Mint, State, and VBA. We interviewed over 200 officials at these seven agencies. Our interviews with these agency and union officials focused on their views about the most effective flexibilities, additional flexibilities needed, and protection of employee rights. We also asked these officials to confirm and provide examples of the key practices we had identified on the basis of our interviews with the human resources directors and our related human capital work. To produce a general summary of these agency and union officials’ views, a staff reviewer coded their responses to our questions according to the recurring themes we had developed. A separate reviewer verified the coding when entering the information into the database we created for our analysis. We sought to obtain views from a broad and diverse set of officials who would have relevant knowledge and experience regarding human capital flexibilities. We did not employ random selection in our choice of individuals to interview; thus the responses we obtained should not be viewed as a representative sample of all managers and supervisors, human resources officials, or local union officials at the seven agencies. We selected the seven agencies for various reasons, including their variety of existing human capital challenges and their range in use of available human capital flexibilities. Specifically, we included the Air Force because the Department of Defense, the Air Force’s parent department, historically has represented a large percentage of civilian federal employees and we had previously reported that the Air Force lacked sufficient acquisition and logistic capabilities. We included GSA because it had displayed a high use of monetary incentives compared to other large federal agencies based on our review of data from OPM’s Central Personnel Data File (CPDF). IRS was included based on congressional requesters’ interest in including an agency with a strong union presence, and IRS was frequently cited as an agency that had recently received increased authority to implement a broad range of human capital flexibilities. ITA was included because the Department of Commerce, ITA’s parent department, had shown a high use of monetary incentives, and we had previously reported that ITA lacked an experienced staff to monitor and enforce trade agreements. We selected the Mint because it was originally a candidate to receive performance-based organization (PBO) status in the late 1990s and it continued to seek additional human capital flexibilities after it did not receive this PBO designation. We selected State because our review of CPDF data showed it to be a low user of monetary incentives, and it had recently established an often-cited skills development program for its information technology employees. Lastly, we included VBA because the Department of Veterans Affairs, VBA’s parent department, continued to actively seek authority for increased human capital flexibilities, and we had previously reported that VBA lacked a sufficient workforce of skilled claims processors.
For the Air Force, we focused on work at Wright-Patterson Air Force Base in Dayton, Ohio, and Langley Air Force Base in Hampton, Virginia. For GSA, IRS, VBA, and the Mint, we focused our work at their field offices in the Philadelphia and San Francisco metropolitan areas. At State, we concentrated our work on the IRM Bureau, the Bureau of Administration, and the Bureau of Overseas Buildings Operations in Washington, D.C. At ITA, we focused our work primarily on the headquarters office in Washington, D.C. Our agency selection process was not designed to identify examples that could be considered representative of all the human capital flexibilities used at the seven agencies reviewed or the federal government as a whole. In addition, we collected and analyzed data from CPDF on the extent of use of human capital flexibilities, both governmentwide and for the seven federal agencies we reviewed in more detail. We also collected and analyzed documents from the seven selected agencies on their experiences with and use of human capital flexibilities. We did not attempt to verify the usage data we gathered. We conducted our audit work in accordance with generally accepted government auditing standards. In addition to the persons named above, K. Scott Derrick, Charlesetta Bailey, Tom Beall, Ridge Bowman, Molly K. Gleeson, Judith Kordahl, Sylvia Shanks, Shelby D. Stephan, Gary Stofko, Mike Volpe, Gregory H. Wilmoth, and Scott Zuchorski made key contributions to this report. An essential element to acquiring, developing, and retaining high-quality federal employees is agencies’ effective use of human capital flexibilities. These flexibilities represent the policies and practices that an agency has the authority to implement in managing its workforce. Congressional requesters asked GAO to provide information on agency and union officials’ views about the most effective human capital flexibilities, additional flexibilities needed, and whether additional flexibilities could be implemented while also protecting employees’ rights. GAO was also asked to identify key practices for effective use of flexibilities.
GAO interviewed the human resources directors of the federal government's 24 largest departments and agencies, and representatives of 4 national organizations representing federal employees and managers. GAO further focused its efforts on 7 federal agencies--the Department of the Air Force, General Services Administration, Internal Revenue Service, International Trade Administration, U.S. Mint, State Department, and Veterans Benefits Administration--interviewing more than 200 managers, supervisors, human resources officials, and union representatives in headquarters and field locations. Agency and union officials' views on human capital flexibilities. Most effective flexibilities. Existing flexibilities that are most effective in managing the workforce are work-life programs, such as alternative work schedules, child care assistance, and transit subsidies; monetary recruitment and retention incentives, such as recruitment bonuses and retention allowances; special hiring authorities, such as student employment and outstanding scholar programs; and incentive awards for notable job performance and contributions, such as cash and time-off awards. Additional flexibilities needed. Additional flexibilities that would be helpful in managing the workforce include more flexible pay approaches to compensate federal employees, greater flexibility to streamline and improve the federal hiring process, increased flexibility in addressing employees' poor job performance, additional workforce restructuring options, and expanded flexibility in acquiring and retaining temporary employees. Protection of employee rights. Managers, supervisors, and human resources officials generally believed that additional human capital flexibilities could be implemented in their agencies while also protecting employees' rights. Union representatives, however, gave mixed views ranging from the opinion that additional flexibilities could be implemented while still protecting employee rights to concerns that managers would abuse their authority. Key practices for effective use of human capital flexibilities. GAO identified six key practices for the effective use of human capital flexibilities. These practices are (1) planning strategically and making targeted investments, (2) ensuring stakeholder input in developing policies and procedures, (3) educating managers and employees on the availability and use of flexibilities, (4) streamlining administrative processes, (5) building transparency and accountability into the system, and (6) changing the organizational culture. The insufficient and ineffective use of flexibilities can significantly hinder the ability of federal agencies to recruit, hire, retain, and manage their human capital. Congress is currently debating the extent of personnel flexibilities that should be granted to the new Department of Homeland Security. While this decision is important to how the department will operate, how personnel flexibilities are implemented is equally important.
Noting that congressional and federal managers’ decision-making was often hampered by the lack of good information on the results of federal programs, the Congress passed the Government Performance and Results Act of 1993. By passing the Results Act, the Congress intended to change the focus of federal management and decision-making from the performance of tasks to the results of those tasks. To do this, the act established a system to set goals for programs’ performance and to measure the results of that performance. In part to fulfill the requirements of the Results Act, the Department of Energy (DOE) announced the development of its Strategic Management System in 1996. The system is intended to be a managerial framework for DOE’s interrelated strategic planning, budgeting, performance-based contracting, and program evaluation processes for the Department’s varied missions and numerous organizations. The Congress passed the Results Act to have federal agencies clarify their missions, set their program goals, and measure their performance toward achieving those goals. The Congress had found, among other things, that
• waste and inefficiency in federal programs undermined the confidence of the American people in their government and reduced the government’s ability to address vital public needs adequately;
• federal managers were seriously disadvantaged in their efforts to improve program efficiency and effectiveness because programs’ goals had not been articulated sufficiently and information on programs’ performance was inadequate; and
• congressional policy-making, spending decisions, and program oversight were seriously handicapped by insufficient attention to programs’ performance and results.
The Congress intended the Results Act to improve the effectiveness of federal programs by fundamentally shifting management and decision-making away from a preoccupation with staffing and activity levels to a wider focus on the results of federal programs. The framework the act established for such a shift requires executive agencies to prepare multiyear strategic plans, annual performance plans, and annual performance reports. The Results Act requires executive agencies to develop strategic plans that cover a period of at least 5 years and to update those plans at least every 3 years. Agencies were required to submit their first strategic plans to the Congress by September 30, 1997. Strategic plans are to (1) include agencies’ mission statements; (2) identify long-term general goals and objectives; (3) describe agencies’ plans to achieve those goals through their activities and through their human, capital, information, and other resources; and (4) explain the key external factors that could significantly affect the achievement of those goals. Additionally, the strategic plans are to explain how the agencies’ strategic goals and objectives are related to the performance goals in their annual performance plans. Hence, the strategic plan is the starting point for the agencies’ system of performance management. In January 1998, we reported on our reviews of 24 major agency strategic plans, including the one prepared by DOE. The Results Act requires executive agencies to develop annual performance plans that cover their performance for a single fiscal year. The first annual performance plans were to be submitted to the Congress with the President’s budget in February 1998 and were to cover the agencies’ performance in fiscal year 1999.
The annual performance plan is to contain an agency’s strategic goals and annual performance goals, which the agency is to use to gauge its progress toward accomplishing its strategic goals. The annual performance plan also is to include the measures of performance that the agency will use to gauge its progress toward achieving its annual goals and the resources the agency will need to meet its goals. Finally, the plan is to discuss how the agency will verify the resulting performance data. The Results Act further requires executive agencies to prepare annual reports on program performance for the previous fiscal year. The first annual performance reports will describe agencies’ results for fiscal year 1999 and are due to the Congress and the President no later than March 31, 2000. Subsequent reports are due annually by March 31. In each report, an agency is to review and discuss its performance compared with the performance goals it established in its annual performance plan. The Senate Committee on Governmental Affairs, in its report on the Results Act, explained that it is important that performance measurement not be a major additional cost or paperwork burden imposed on federal programs. In stressing its concerns about the cost of performance measurement, the Committee cited our report on federal agencies’ use and collection of performance data. In that report, we pointed out that a great deal of data collection was already going on in federal programs and that this activity could be redirected and coordinated and the data better reported and used. The Congress recognized the significance of converting a task-oriented government to a performance-oriented government and phased the implementation of the Results Act over a 7-year period. For example, in its report on the Results Act, the Senate Committee on Governmental Affairs recognized that the reforms of the Results Act are a major undertaking and noted that comprehensive program goal-setting and performance measurement and reporting on a governmentwide basis will not be accomplished easily. In 1997, in our review of agencies’ pilot projects under the Results Act, we reported that agencies were confronting a variety of difficult challenges. These challenges included developing strategic plans; generating the results-oriented performance information needed to set goals and assess progress; instilling a results-oriented organizational culture within agencies; and linking performance plans to the budget process. The experiences of pilot agencies and related efforts by other agencies suggest that these challenges will not be quickly or easily resolved. On March 4, 1996, DOE announced its Strategic Management System, which seeks to align planning with strategic intent, ensure that planning drives resource allocation, and provide feedback on performance results. The system provides a general explanation of how strategic planning, annual planning, budget formulation, performance-based contracting, and program evaluation are to be linked. Within the Strategic Management System, DOE’s departmental strategic plan provides the goals and strategies that will shape DOE’s future budgets. DOE’s strategic plan aligns DOE’s work into four business lines—energy resources, national security, environmental quality, and science and technology. To help ensure the success of its business lines, DOE’s strategic plan also includes a section on corporate management, which cuts across the business lines.
Because DOE’s organizational structure does not mirror its business lines, the business lines include crosscutting issues within the agency that require different parts of the Department to work together to achieve the desired results. DOE’s Assistant Secretary for Policy and International Affairs is responsible for coordinating the preparation of the departmental strategic plan. Preparation of the strategic plan is managed by the Assistant Secretary’s Office of Strategic Planning, Budget and Program Evaluation. The strategic plan is to be reflected throughout all DOE organizations as missions, goals, and activities at every level are to be aligned with national energy and security policies. Among its objectives, the Strategic Management System intends to ensure that all DOE plans add value and are consistent with other DOE planning documents. The Strategic Management System states that annual performance plans are to include the results that DOE expects to deliver for the budget being requested and must be closely linked to the goals contained in the departmental strategic plan. The Strategic Management System also notes that DOE’s performance-based management contracts, a form of contract that is used to manage and operate DOE facilities, are a critical force in turning DOE’s annual plan and commitments into actions and results. Performance goals for these contracts are to be consistent with the commitments made in DOE’s annual performance plan. To work effectively, the Strategic Management System will need to integrate DOE’s complex mission structure and organization. DOE was created in 1977 from several diverse functions, including those of the Federal Energy Administration, the Energy Research and Development Administration, and the Federal Power Commission. Moreover, DOE’s missions have changed focus over the years. For example, whereas DOE was once geared toward the production of nuclear weapons, it is now focused on restoring the environment at the facilities contaminated by that nuclear production. DOE’s diverse missions include
• environmental restoration of its facilities and the management of hazardous wastes created during the nuclear research and production process;
• management of the nation’s nuclear weapons complex;
• nuclear arms control;
• development of energy policy;
• research and development on both energy and basic science;
• management of five power marketing administrations, such as the Bonneville Power Administration; and
• development and operation of a civilian nuclear waste repository.
These changing missions have had a significant impact on DOE’s various programs. For example, in 1996, we reported that DOE undertook 80 major system acquisitions from 1980 to 1996 and that only 15 of them were ever completed. Thirty-one of the major systems were terminated prior to completion. One of the causes of this poor performance was DOE’s unclear or changing missions. DOE uses management and operating (M&O) contractors to carry out the bulk of its statutory responsibilities at its facilities. In fiscal year 1997, about 70 percent ($13.8 billion) of the Department’s total fiscal year obligations were for M&O contractors. These contractors employ about 107,000 employees, compared with the approximately 11,000 federal workers employed by DOE. The Chairman of the House Committee on Commerce, noting the importance of the Results Act, requested that we review DOE’s early efforts to implement the act.
Specifically, we evaluated how well (1) DOE’s program and field units linked their subordinate plans to the departmental strategic plan and (2) DOE linked the goals of its strategic plan to its annual performance plan and the goals for its performance-based management and operating contracts. We conducted our review at DOE program offices, field organizations, and facilities managed by M&O contractors. The program offices we chose for our review—the Office of Defense Programs, the Office of Energy Research, and the Office of Environmental Management—are the largest in DOE’s budget. The field organizations we reviewed were at DOE’s offices in Albuquerque, New Mexico; Argonne, Illinois; Oak Ridge, Tennessee; Rocky Flats, Colorado; and Savannah River, South Carolina. The facilities managed by M&O contractors we reviewed were at the Argonne National Laboratory in Illinois; the Oak Ridge Reservation in Tennessee; the Rocky Flats Environmental Technology Site in Colorado; the Sandia National Laboratories in New Mexico; and the Savannah River Site in South Carolina. At the field offices and individual facilities, we focused on programmatic activities and not on the operational activities of those organizations. To evaluate how DOE linked its various organizations’ strategic plans to DOE’s departmental strategic plan, we requested strategic and/or multiyear plans from DOE’s Albuquerque, Chicago, Rocky Flats, Oak Ridge, and Savannah River offices. We analyzed the plans and attempted to link the programmatic work supporting Defense Programs, Energy Research, and Environmental Management that was specified in those plans to DOE’s departmental strategic plan. We then asked DOE personnel at these offices to separately identify linkages between their plans and DOE’s departmental strategic plan. We discussed the planning activities of these offices with staff from their planning and budget offices. Additionally, we discussed DOE’s strategic planning activities with staff of the Office of Policy and International Affairs, the Office of Defense Programs, the Office of Energy Research, and the Office of Environmental Management. We discussed DOE’s strategic planning with staff of the Office of Management and Budget (OMB). Finally, we reviewed the Results Act; the Senate Committee on Governmental Affairs’ report on the Results Act; OMB’s guidance on the Results Act; and DOE’s guidance on its Strategic Management System. To evaluate the linkage between DOE’s annual performance plan for fiscal year 1999 and its departmental strategic plan, we compared the 1999 annual performance plan with the departmental strategic plan to determine if we could identify links among goals, objectives, and measures. We also analyzed the 1999 annual performance plan to determine if the levels of performance identified in it could be linked to the budgetary resources requested for those levels of performance. However, we did not evaluate the extent to which individual performance goals and measures will enable DOE to effectively achieve its goals and objectives. To evaluate the links between DOE’s strategic goals and the work of its M&O contractors, we discussed the performance goals and incentive fees for the contracts for the Argonne National Laboratory and the Rocky Flats Environmental Technology Site with officials from DOE’s Chicago and Rocky Flats offices. However, as of February 1998, performance goals and incentive fees for fiscal year 1998 had not been completed and made a part of the contracts. 
As a result, we could not evaluate the linkage of the contracts to the departmental strategic plan. We also discussed these contracts with staff of the Office of Procurement and Assistance Management and requested information to determine if the late inclusion of performance goals and fees was a systemic problem within DOE. We performed our review from June 1997 through March 1998 in accordance with generally accepted government auditing standards. DOE’s strategic plan focuses on its broad missions and not on the Department’s programs or organizational structure. As a result, DOE’s programs, field offices, and contractors have often prepared their own subordinate plans. However, we found that it was difficult to link the subordinate plans of DOE’s programs, field offices, and contractors to the Department’s strategic goals, objectives, and strategies. DOE has not provided specific guidance on the nature or extent of these subordinate plans. As a result, organizations throughout DOE have developed subordinate plans even though some of the plans appear to be duplicative. DOE’s strategic plan is structured according to business lines, but DOE’s organizational structure is quite different. As we pointed out in June 1996, organizations’ activities should be aligned to support their mission and outmoded organizational structures should be changed. DOE’s strategic plan includes a mission statement that is short and overarching, but the substance of its missions is described in four business lines: energy resources, national security, environmental quality, and science and technology. The strategic plan also includes a functional section on corporate management that cuts across the four business lines. But DOE itself is not organized into four business lines. It has three large headquarters program offices, many smaller individual headquarters offices, operations and field offices, and a number of contractors that manage and operate DOE facilities across the country. Because the business lines in DOE’s strategic plan are not aligned with the organizational structure, more than one DOE organization contributes to the same business line. For example, DOE’s three main headquarters program offices are Defense Programs, Energy Research, and Environmental Management. Both the Office of Defense Programs and the Office of Environmental Management contribute to the environmental quality business line as do two smaller headquarters offices. Similarly, the Office of Defense Programs and five other headquarters offices contribute to the national security business line. The relationships among programs and business lines become more complicated when the field-level structure is considered. DOE’s field structure includes 10 major operations and field offices and several area offices. Each of these offices may contribute to the business lines through various headquarters offices. For example, the Oak Ridge Operations Office performs work for each of the three main headquarters program offices as well as for other headquarters offices and, in doing so, performs work for all four business lines. Because Oak Ridge conducts its work through the contractors that manage its facilities, the performance of the work for the business lines is broken down further and accomplished at organizations below the operations office level. In the end, the business lines are simply a summation of the basic types of work that DOE is to accomplish through the various parts of its complex organization. 
In addition to DOE's strategic plan, various programs and offices prepare subordinate plans that are defined as being strategic or multiyear or both. These include plans at different levels within DOE's programs, field offices, and M&O contractors that are developed to meet various requirements, including those of federal laws, DOE orders, and total quality management initiatives. However, little guidance exists within the Department to define the need for planning below the departmental level. Appendix I provides a list of more than 2 dozen strategic and multiyear plans that we identified and an explanation of these plans. DOE's Environmental Management, Defense Programs, and Energy Research programs are developing their own strategic plans. The planning processes for Environmental Management and Defense Programs are aimed at integrating their program goals with the goals at the field and operations office levels. According to Energy Research officials, they are revising the program's previous strategic plan and believe that each subprogram office also should have separate plans to guide the facilities that perform work within the program. The Environmental Management Program, in June 1997, issued a discussion draft of its planning effort called Accelerating Cleanup: Focus on 2006. After receiving feedback from the program and field levels on this draft, Environmental Management officials plan to develop a draft national plan by early 1998. Although the goals of the plan are not final, the basic direction is to (1) clean up as many DOE sites with environmental problems as possible by the year 2006, acknowledging that cleanup at some sites will not be completed by then; (2) reduce costs and increase productivity during the cleanup process; and (3) comply with all regulatory requirements. These program goals, together with program performance measures, are being used at the field level to develop site plans and individual project plans at the sites. In 1996, DOE's Office of Defense Programs released the Stockpile Stewardship and Management Plan, also referred to as the Green Book. This plan, which Office of Defense Programs officials considered a strategic plan but also referred to as an implementation plan, outlines the program's strategic goals and implementation objectives for current and future years. Defense Programs officials told us that this plan minimizes the need for subordinate strategic plans. However, the officials acknowledged that offices may still develop strategic or multiyear plans to provide more detail or direction, or to market or publicize specific programs and projects. Currently, at least one subprogram in Defense Programs has a multiyear plan—the Accelerated Strategic Computing Initiative—but other subprograms are developing their own multiyear plans to meet their specific needs. In 1995, DOE's Office of Energy Research published its first strategic plan to guide DOE facilities that perform work within the program. It is now working on an updated strategic plan. Office of Energy Research officials told us that DOE's departmental strategic plan is too broad to serve as a meaningful "road map" for the Energy Research programs. The Energy Research subprograms will be expected to develop subordinate plans linked to the Energy Research strategic plan. These plans will then be used to guide facilities that perform work for the program.
Additionally, according to an official, the Office of Fusion Energy Sciences has prepared strategic plans in response to congressional interest. For example, the Strategic Plan for the Restructured Fusion Energy Sciences Program was done in response to the Conference Report accompanying the Energy and Water Appropriations Act of 1996. Each field office we visited either had or was updating some form of a strategic or multiyear plan. The Albuquerque and Savannah River operations offices, each of which participated in more than one program, had their own field-level strategic plans, while the Oak Ridge Operations Office intended to develop a site-level strategic plan. At the Savannah River Site, we found a site program office with its own strategic plan. Multiyear and strategic plans also were prepared for individual laboratories by the contractors that operated them and for the contractors’ own organizations. In addition, because all of the field offices we visited performed work for the Environmental Management program, they had prepared their own site-specific plans under the Accelerating Cleanup: Focus on 2006 program. At Savannah River, the operations office had prepared a strategic plan that included a goal and several objectives for environmental management. The operations office’s Environmental Restoration Office had also prepared a strategic plan with goals and objectives for the environmental restoration program. This was in addition to the site-level Accelerating Cleanup: Focus on 2006 plan that included the program’s goals. The Albuquerque Operations Office, which works with the Office of Defense Programs on the Stockpile Stewardship and Management Program, has its own strategic plan that provides a corporate vision and focus areas that include Albuquerque’s contributions to the U.S. nuclear weapons program. However, defense planning officials in Albuquerque told us that they rely on the Green Book, and not on the Albuquerque strategic plan, to develop the weapons program at the Albuquerque Operations Office. Although Albuquerque planning officials acknowledged that their strategic plan added little to the programmatic aspects of planning, they said that it does provide a corporate vision and defines the mission of the Albuquerque Operations Office. In contrast to the field offices at Albuquerque and Savannah River, the Chicago Operations Office’s strategic planning focused solely on the office’s administrative and oversight functions. Finance and planning officials there explained that their office does not address the programmatic work being done at the Argonne National Laboratory because programmatic guidance for the laboratory’s basic research activities is provided directly by the Office of Energy Research and its subprograms. Finally, in addition to the plans prepared by DOE, we found that multiyear and strategic plans also were prepared for DOE laboratories and some other facilities by the contractors who operated them. For example, each of the laboratories in our review prepared an institutional plan. DOE guidance explains that institutional plans provide a means to consider each laboratory as an institution rather than as a collection of programs and to review its mission, its health as an institution, and its plans for the future. Although the Sandia National Laboratories had an institutional plan, it also had prepared a separate strategic plan. We found that the contractors at both Savannah River and Oak Ridge also prepared strategic plans for their own operations. 
The Oak Ridge Reservation contractor had prepared an internal strategic plan for its operations that included a vision, business strategy, and objectives. The Savannah River Site contractor had developed a strategic plan in 1994 to prepare itself for the new direction and strategy defined by the Secretary of Energy. While extensive planning is going on at different levels of DOE, it is not clear how all of the plans prepared by the programs, field offices, and contractors are linked to DOE's departmental strategic plan. According to DOE, the departmental strategic plan "is the highest level tier of planning for the Department." The plan itself notes that performance is the common link that ties the planning system together throughout the Department. However, DOE's Strategic Management System does not provide specific directions on how to link the goals of subordinate strategic and multiyear plans to the goals, objectives, and strategies of the departmental strategic plan. For example, the environmental quality goal in the departmental strategic plan is to "[a]ggressively clean up the environmental legacy of nuclear weapons and civilian nuclear research and development programs, minimize future waste generation, safely manage nuclear materials, and permanently dispose of the Nation's radioactive wastes." But the goal in the Savannah River site-level strategic plan is to "demonstrate excellence in environmental stewardship." This goal is supported by several site objectives. In addition to the site-level strategic plan, the site's Environmental Restoration Program has its own strategic plan with five goals and multiple objectives. The goals of this plan include
• demonstrate safety excellence,
• meet or expedite regulatory requirements,
• maximize deployment of innovative technologies, and
• demonstrate cost-effectiveness.
While one can argue that the two different Savannah River plans have goals that can be encompassed in the environmental quality business line goal, neither of the subordinate strategic plans explains whether its goals and objectives are to fulfill a departmental strategic goal, objective, or strategy. As a result, we believe the significance of Savannah River's contribution to the DOE strategic plan, as expressed in its own various strategic plans, is not clear. While the various plans did not show clear linkage to DOE's strategic plan, planning officials from headquarters program and field offices told us that they believed that the goals in their plans were linked to the goals in the departmental strategic plan. In some cases, however, they noted that it was easy to show a linkage because the DOE strategic plan's goals were so vague. For example, budget and program officials in DOE's Albuquerque Operations Office said they found it relatively easy to show the linkages among the Sandia National Laboratories' contract documents, the Albuquerque Operations Office's strategic plan, DOE's departmental strategic plan, and the Office of Defense Programs' Green Book even though we had difficulty discerning the linkage. They said the goals of DOE's strategic plan and the Green Book were sufficiently broad for them to easily show a linkage. One of the features of the Environmental Management program's new Accelerating Cleanup: Focus on 2006 strategic plan is that it is clearly linked with the field-level plans.
While the Environmental Management program's strategic plan was not yet complete, planning staff from the program office told us that they were seeking to integrate the program's goals with the goals of the field offices by requiring these offices to develop their plans based on the program's goals. The June 1997 draft plan provides several program goals that, when final, should provide clear and direct links between the goals of the program and the field offices. In addition to the program goals, the plan lists program performance measures to be achieved at the field level. Because the field offices are required to use the program's goals and performance measures in the development of their site plans, the site-level goals and measures should be linked to the program's goals and measures. We discussed our difficulty in linking subordinate plans to the departmental strategic plan with the Acting Director of Strategic Planning, Budget and Program Evaluation, who explained that many of the linkages are more implicit than explicit for two important reasons. First, the DOE strategic plan required by the Results Act was only published in September 1997. Hence, there has not been enough time for all levels of DOE to fully adjust their plans to show explicit linkage. Second, DOE's Strategic Management System has not yet matured and will need several planning cycles to produce the desired results. While we do not disagree with the Acting Director's comments, we also found that subordinate plans were not clearly linked with DOE's 1994 strategic plan. DOE's Strategic Management System requires that only a minimum number of plans be published. However, it does not state which plans should be prepared or whether they should be strategic or operational plans. According to DOE guidance, strategic plans are to address what is to be done and operational plans are to address how it is to be done. According to the Strategic Management System,
• only a minimum number of plans are to be published;
• plans should be consolidated and redundancies eliminated wherever possible; and
• only plans that are required by laws or directives, or that contribute to effective management, should be published.
DOE's guidance further explains that published plans should identify their purpose and their relationship to the Strategic Management System, which seeks to mesh the Department's interrelated strategic planning. Determining which plans are unnecessary and which represent a "minimum" of plans is a difficult task. However, in some cases, it was apparent that planning staff were not sure if all plans were used or needed, as the following examples show:
• The Albuquerque Operations Office included a section for Defense Programs in its strategic plan, but officials there told us that they do not use it in the development of the Sandia National Laboratories' defense projects.
• Planning officials at the Office of Defense Programs told us that the Green Book is the primary plan for the office's program; however, as discussed above, at least one subprogram within the office has produced its own plan to meet its specific needs and others are being developed on a case-by-case basis.
• The Sandia National Laboratories had both an institutional plan, which included a strategic plan, and a second, separate strategic plan.
• The Savannah River Operations Office's strategic plan includes a section for environmental management; the site's Environmental Restoration Program has its own strategic plan; and the site also prepared a site-level Focus on 2006 plan for the Environmental Management program.
We discussed the proliferation of strategic and multiyear plans with the Acting Director of Strategic Planning, Budget and Program Evaluation, who explained that his office does not dictate the number of strategic plans that are appropriate or what the plans should look like. Currently, the number and appearance of strategic plans are left up to the individual headquarters program offices. We believe DOE has an opportunity to increase the integration and cohesiveness of its programs by aligning its organization with its business lines and providing specific direction on how plans should be linked, from the lowest-level strategic and multiyear plans to the departmental strategic plan. Currently, DOE's strategic plan focuses the Department's activities in four business lines, but DOE itself is organized more traditionally with multiple programs and related headquarters and field offices that in turn are supported by contractors that operate DOE facilities. We recognize that changing DOE's organizational structure may not be easy. However, as DOE becomes more outcome oriented, it may find that its organizational structure is outmoded and must be changed to better fulfill its strategic missions and goals. Furthermore, DOE's strategic planning is not being done with the benefit of a well-defined road map. The Strategic Management System lays out a program, but it does not provide sufficient detail to make strategic planning work efficiently or effectively. As a result, different ideas about strategic planning are emerging in DOE's program and field offices. To develop a systematic, cohesive, and comprehensive strategic planning process, DOE needs to provide its offices with clear direction on strategic planning. Such direction should lay out which plans are needed and whether they should be strategic or operational plans. We recommend that the Secretary of Energy take the following actions:
• Review the Department's organizational structure and seek opportunities to better align the organization with its strategic plan's business lines.
• Direct the Office of Strategic Planning, Budget and Program Evaluation to develop specific procedures that state how subordinate strategic and multiyear plans are to relate to the departmental strategic plan. In developing these procedures, the office should consider whether the goals and objectives of the subordinate plans should be linked to the departmental strategic goals, objectives, or strategies.
• Direct the Office of Strategic Planning, Budget and Program Evaluation to review DOE's requirements for subordinate strategic and multiyear plans and modify or eliminate those requirements that produce superfluous strategic and multiyear plans.
DOE generally agreed with our findings and recommendations and provided comments to clarify its position. DOE pointed out that its Strategic Management System was designed as a framework for implementing the Results Act and was not meant to be a prescriptive directive that would replace basic, good management. DOE also reiterated that it may take several planning cycles to perfect its strategic planning process. Finally, DOE stated that its ongoing efforts seek to implement our recommendations.
The Results Act envisions that federal agencies will achieve their strategic goals by meeting the performance goals in their annual performance plans. In addition, agencies' plans are to serve as a means of showing how budgetary resources will be used to achieve annual performance goals. Although DOE's annual performance plan for fiscal year 1999 broadly conforms to this vision, it does not allocate the requested budgetary resources to its annual performance goals. Such a linkage would show the Congress the budgetary resources DOE intends to apply to provide the level of performance indicated by the performance goals and measures in the agency's annual performance plan. In another matter related to performance, DOE was late in providing annual contract goals and incentive fees for its performance-based management contracts for the fiscal year that began October 1, 1997. As a result, the contractors managing DOE's facilities began their work before approved goals and incentive fees were made a part of their contracts. The performance goals in DOE's 1999 annual performance plan are linked to the agency's strategic goals and objectives. Although this linkage of goals and objectives meets the Results Act's requirement that annual performance plans be consistent with strategic plans, the annual performance plan does not directly link requested budgetary resources to the level of performance that is to be achieved during the fiscal year. An initiative undertaken by the Office of Environmental Management holds promise as a way to systematically link required budgetary resources to performance levels throughout DOE. In its first annual performance plan under the Results Act, DOE met the act's requirement that an agency's performance plan be consistent with the agency's strategic plan by aligning the performance plan's goals and measures with those in its strategic plan. However, DOE could improve its annual performance plan and address another of the Results Act's expectations by identifying the budgetary resources required to meet its annual performance goals. For example, one of the goals in DOE's strategic plan states: "The Department of Energy and its partners promote secure, competitive, and environmentally responsible energy systems that serve the needs of the public." An annual performance goal supporting it commits DOE to "demonstrating four advanced production enhancement technologies that could ultimately add 190 million barrels of domestic reserves, including 30 million barrels during fiscal year 1999." If DOE's annual performance plan presented the requested budgetary resources with this specific annual performance goal, the annual performance plan could be used to evaluate the anticipated performance in light of the funds requested to support it. According to DOE's Acting Director of Strategic Planning, Budget and Program Evaluation, the annual performance plan does not provide this level of specificity because DOE's budget request is performance-based. The Acting Director explained that the annual performance plan links DOE's programs and their requested budget amounts to the strategic objectives to which they contribute. The Acting Director further explained that by reviewing the budget request for those programs, it was possible to identify the annual performance goals and the funds requested to achieve those goals. We attempted to link DOE's annual performance goals and measures from the annual performance plan to the budget request to see if it was possible.
The budget request for the Office of Defense Programs did include a matrix that listed the performance goals and measures from the annual performance plan and identified the funds being requested to achieve those goals and measures. However, for the budgets of the offices of Energy Research and Environmental Management, and several smaller offices that we looked at, the same clear linkage was not present. For example, for several performance goals and measures from the annual performance plan, it was possible to find the same or similar goals listed in various sections of the budget request. However, the funds to achieve these goals were not identified specifically with the individual goals. Furthermore, while the Office of Defense Programs' budget request included a matrix with the goals and the funds requested in one place, the other programs listed goals throughout the various sections of the budget request but did not include, in all cases, the associated resources. DOE can make the performance goals in its performance-based budget more useful by clearly linking them to the annual performance goals and measures in the annual performance plan. A key feature of the Office of Environmental Management's planning and budgeting process described in its draft plan, Accelerating Cleanup: Focus on 2006, is an electronic management system. This system is intended to tie budgetary resources to the expected level of performance for the program, field offices, sites, and individual projects. The system covers several hundred individual environmental management projects—each documented in a project baseline summary. The project baseline summaries provide information such as the overall scope of work, schedule, estimated cost, and performance measures. These project baseline summaries will be used in the formulation of the annual budget and in the identification of the proposed levels of performance that are to meet the overall Environmental Management program's performance goals and measures. Finally, the project baseline summaries are used to track actual performance. As a result, the Office of Environmental Management should have the information it will need to prepare future budget requests identifying the budgetary resources needed to achieve specific performance goals and measures in its annual performance plans. Because this information is based on the project baseline summaries, the office's budget requests will link the amount of resources needed with the expected level of performance and the related measures of that performance by the program, field office, site, and individual project involved. The office also expects to evaluate performance at these same organizational levels. Although this planned system may succeed in providing specific links between the program's performance goals and required resources, we do not know if it is directly transferable to DOE's other programs. As it is currently constructed, the system is designed to work for site-specific activities but may require some modification for programs that carry out single missions at several sites. For example, while the Environmental Management program is primarily measuring the individual performance of various sites, the Office of Defense Programs' planning officials explained that the Stockpile Stewardship and Management Program requires measuring the performance of several sites working together to accomplish the program's goals.
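To illustrate the kind of linkage the electronic management system is designed to provide, the following sketch models a project baseline summary and rolls project-level costs and performance measures up to a program-level view. It is a minimal illustration only: the record fields (scope of work, schedule, estimated cost, and performance measures) come from the description above, while the Python names, the sample projects, and the dollar figures are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class ProjectBaselineSummary:
        # Fields mirror the information each summary is said to contain:
        # overall scope of work, schedule, estimated cost, and performance measures.
        project: str
        site: str
        scope: str
        completion_year: int
        estimated_cost: float          # requested budgetary resources, in dollars
        performance_measures: dict     # measure name -> planned annual level

    def roll_up(summaries):
        """Aggregate project-level costs and measures to the program level, so a
        budget request can show the funds tied to each level of performance."""
        total_cost = sum(s.estimated_cost for s in summaries)
        measures = {}
        for s in summaries:
            for name, level in s.performance_measures.items():
                measures[name] = measures.get(name, 0) + level
        return total_cost, measures

    # Hypothetical projects at two sites.
    projects = [
        ProjectBaselineSummary("Tank closure", "Savannah River", "Close waste tanks",
                               2006, 42_000_000, {"tanks closed": 2}),
        ProjectBaselineSummary("Soil remediation", "Oak Ridge", "Remediate soil units",
                               2005, 18_500_000, {"release sites remediated": 6}),
    ]
    cost, levels = roll_up(projects)
    print(f"Requested: ${cost:,.0f} for planned levels {levels}")

Because the same records track actual performance, the rollup that supports the budget request can later be compared against results at the program, site, and project levels.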
In 1994, DOE adopted performance-based management contracts as part of its contract reform effort for the companies and universities that manage its facilities. While performance-based contracts can help DOE implement the Results Act by translating annual program performance goals into goals specific to particular contractors, the Department did not reach closure with its contractors on their annual goals and incentive fees before they began work under their contracts for fiscal year 1998. For example, a review of one contract found that "for two of the measures, the Department's requirements had not been established in advance of contractor performance. . . . As the fiscal year progressed, milestones were established that identified the work to be performed for Bechtel to earn its incentive fee. However, many of the milestones were added after the work had already been accomplished by Bechtel." The review concluded that "performance milestones established after the fact do not incentivize future contractor performance. This practice created a retroactive, artificial basis to support the payment of contractor fees and was incompatible with the basic principles of performance-based contracting." In light of the implementation and other problems identified by us, DOE's Inspector General, and DOE's Office of Procurement and Assistance Management, the Office of Procurement and Assistance Management, on August 28, 1997, required all performance objectives and associated incentive fees to be submitted to it for review and approval prior to the start of negotiations with the contractor. DOE, in its performance-based management contracts, seeks to have performance goals and incentive fees incorporated in the contracts by the start of the fiscal year. However, for 16 of the 20 contractors, annual performance goal and incentive fee agreements were not approved until after the fiscal year began on October 1, 1997. Of these 16, 6 were approved in November 1997, 3 were approved in December 1997, 6 were approved in January 1998, and 1 was approved in March 1998. Several of the goal and incentive fee agreements were resubmitted after the beginning of the fiscal year because of budget uncertainties. Table 3.1 lists the dates on which the Office of Procurement and Assistance Management received the goals and incentive fees for review from the contracting offices and the dates on which it approved the plans so that negotiations could begin with the contractors. Once the performance goals and incentive fees are approved, additional time may be required to negotiate the final agreements with the contractors. For example, the fiscal year 1998 performance goals and incentive fees for the Argonne National Laboratory contract were approved on January 15, 1998. However, the DOE contracting office in Chicago did not plan to begin fee negotiations with the contractor until late February 1998. Similarly, the fiscal year 1998 performance goals and incentive fees for the Rocky Flats Environmental Technology Site were approved on November 24, 1997. But after considerable negotiation, the Rocky Flats field office and the contractor had not, as of February 5, 1998, reached final agreement on the amount of incentive fees to be allocated to each individual goal. As a result of these delays, contractors were performing fiscal year 1998 work before DOE finalized the approved contractors' fiscal year 1998 annual performance goals and incentives.
DOE officials at the Chicago and Rocky Flats offices told us that the Office of Procurement and Assistance Management's new review process had contributed to the delays. A Rocky Flats planning official also noted that delays were attributable to his office's own efforts to perfect the performance goals and incentive fees before submitting them to headquarters for review and to the difficult negotiations his office had with the contractor. However, the Rocky Flats official acknowledged that his office could improve its own planning process and develop draft performance goals further in advance of receiving its final appropriations. The Congress, in passing the Results Act, intended to improve the information that it receives from federal agencies for its policy- and decision-making. One of the Congress's goals was to get information on the level of agency performance to be expected for the amount of funds requested in the agency's budget request. For fiscal year 1999, DOE's annual performance plan did associate the funds requested with its broad strategic goals. However, if DOE explicitly identified its requested budgetary resources with the performance goals and measures in the annual performance plan, the Department would provide the Congress with an enhanced understanding of the budget requested to meet planned program results. Although the Office of Environmental Management's system for planning, budgeting, and reporting is not yet in final form, we believe that it shows promise in directly relating the required budgetary resources to expected levels of performance. This is an important and necessary feature of any annual planning system. Moreover, incorporating performance goals and incentive fees in performance-based management contracts after contractors have already begun the work reduces the effectiveness of these contracts. Because these incentive fees are provided to enhance the contractors' efforts to meet the specified goals, adding the goals and incentive fees to contracts after work starts is contrary to the concepts of performance-based contracting. We recommend that the Secretary of Energy take the following actions:
• Direct the Office of Strategic Planning, Budget and Program Evaluation to work with DOE's various programs to develop integrated management systems that directly link required budgetary resources to the level of performance that is identified in the annual performance plans.
• Modify the agency's contracting process to ensure adequate time is available to incorporate performance goals and fees in contracts for the start of the fiscal year's work.
DOE generally agreed with our findings and recommendations and provided comments to clarify its position. DOE explained that it is currently working on a "mapping" effort that will better show the linkage of its annual performance plan to its budget request. Additionally, DOE stated that its ongoing efforts seek to implement our recommendations.
Pursuant to a congressional request, GAO reviewed certain aspects of the Department of Energy's (DOE) implementation of the Government Performance and Results Act of 1993, focusing on the Department's strategic and annual planning.
GAO noted that: (1) subordinate strategic and multiyear plans prepared by DOE's programs, field offices, and contractors are not clearly linked to the goals, objectives, and strategies of the Department's strategic plan; (2) although DOE's Strategic Management System guidance provides a basic outline of the planning process, it does not provide clear directions on how these subordinate plans should be linked to DOE's strategic plan; (3) additionally, DOE formed its Strategic Management System around its business lines and its organizations are not aligned with the business lines; (4) for example, DOE has three main program offices--Defense Programs, Energy Research, and Environmental Management--whose work is done through various field organizations and management and operating contractors; (5) as a result, these different program offices and their supporting organizations often contribute to the fulfillment of the same business lines through a variety of different, complex, crosscutting relationships; (6) DOE, in its first annual performance plan under the Results Act, links the annual performance plan's goals and measures to those in the strategic plan; (7) DOE also provides a description of how budgetary resources are linked to its strategic goals; (8) however, the annual performance plan could be more useful if it described how the requested budgetary resources are linked to the annual performance goals in the plan; (9) in addition, DOE did not incorporate the approved performance goals and incentive fees in its performance-based management and operating contracts--accounting for 70 percent of DOE's obligations--until after the start of the current fiscal year and after the contractors had already begun their work; (10) the goals and incentive fees agreed to in these contracts are intended to guide and enhance the contractors' performance; and (11) not incorporating the goals and incentive fees until after the contractors begin work reduces the usefulness of performance-based contracting.
Title XI of the Merchant Marine Act of 1936, as amended, authorizes the Secretary of Transportation to guarantee debt issued for the purpose of financing or refinancing the construction, reconstruction, or reconditioning of U.S.-flag vessels or eligible export vessels built in U.S. shipyards and the construction of advanced and modern shipbuilding technology of general shipyard facilities located in the United States. Title XI guarantees are backed by the full faith and credit of the United States. Title XI was created to help promote growth and modernization of the U.S. merchant marine and U.S. shipyards by enabling owners of eligible vessels and shipyards to obtain long-term financing on terms and conditions that might not otherwise be available. Under the program, MARAD guarantees the payment of principal and interest to purchasers of bonds issued by vessel and shipyard owners. These owners may obtain guaranteed financing for up to 87.5 percent of the total cost of constructing a vessel or modernizing a shipyard. Borrowers obtain funding for guaranteed debt obligations in the private sector, primarily from banks, pension funds, life insurance companies, and the general public. MARAD loan guarantees represent about 10 percent of the U.S.-flagged maritime financing market, according to MARAD officials. However, MARAD plays a greater role in certain segments of the maritime finance market. For example, according to a private-sector maritime lender, MARAD guarantees financing on about 15 percent of the country’s inland barge market. Over the last 10 years, MARAD experienced defaults in amounts that totaled $489 million. One borrower, AMCV, defaulted on five loan guarantee projects in amounts totaling $330 million, 67 percent of the total defaulted amounts. Figure 1 shows the nine defaults experienced by MARAD over the past 10 years, five of which were associated with AMCV and which are shown in gray. Once an applicant submits a Title XI application to MARAD, and prior to execution of a guarantee, MARAD must determine the economic soundness of the project, as well as the applicant’s capability to construct or operate the ship or shipyard. For example, the shipowner or shipyard must have sufficient operating experience and the ability to operate the vessels or employ the technology on an economically sound basis. The shipowner or shipyard must also meet certain financial requirements with respect to working capital and net worth. The amount of the obligations that MARAD may guarantee for a project is based on the ship or shipyard costs. Title XI permits guarantees not exceeding 87.5 percent of the actual cost of the ship or shipyard, with certain projects limited to 75 percent financing. The interest rate of the guaranteed obligations is determined by the private sector. MARAD also levies certain fees associated with the Title XI program. For example, applicants must pay a nonrefundable filing fee of $5,000. In addition, prior to issuance of the commitment letter, the applicant must pay an investigation fee against which the filing fee is then credited. Participants must also pay a guarantee fee, which is calculated by determining the amount of obligations expected to be outstanding and disbursed to the shipowner or shipyard during each year of financing. 
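As a rough illustration of the financing ceiling described above, the sketch below computes the maximum guaranteed obligation and the minimum owner-funded share for a project at the 87.5 and 75 percent limits. The percentages come from the program description; the function name and the vessel cost are hypothetical.

    def title_xi_limits(actual_cost, ceiling=0.875):
        """Return (maximum guaranteed obligation, minimum owner share).
        Title XI permits guarantees of up to 87.5 percent of actual cost,
        with certain projects limited to 75 percent financing."""
        max_guarantee = ceiling * actual_cost
        return max_guarantee, actual_cost - max_guarantee

    cost = 100_000_000  # hypothetical vessel cost
    for ceiling in (0.875, 0.75):
        guarantee, owner = title_xi_limits(cost, ceiling)
        print(f"{ceiling:.1%} ceiling: guarantee up to ${guarantee:,.0f}, "
              f"owner finances at least ${owner:,.0f}")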
The Title XI program is also subject to the Federal Credit Reform Act (FCRA) of 1990, which was enacted to require that agency budgets reflect a more accurate measurement of the government’s subsidy costs for direct loans and loan guarantees. FCRA is intended to provide better cost comparisons both among credit programs and between credit and noncredit programs. The credit subsidy cost is the government’s estimated net cost, in present value terms, of direct or guaranteed loans over the entire period the loans are outstanding. Credit reform was intended to ensure that the full cost of credit programs would be reflected in the budget so that the executive branch and Congress might consider these costs when making budget decisions. Each year, as part of the President’s Budget, agencies prepare estimates of the expected subsidy costs of new lending activity for the upcoming year. Unless OMB approves an alternative proposal, agencies are also required to reestimate this cost annually. OMB has oversight responsibility for federal loan program compliance with FCRA requirements and has responsibility for approving subsidy estimates and reestimates. All credit programs automatically receive any additional budget authority that may be needed to fund reestimates. For discretionary programs this means there is a difference in the budget treatment of the original subsidy cost estimates and of subsidy cost reestimates. The original estimated subsidy cost must be appropriated as part of the annual appropriation process and is counted under any existing discretionary funding caps. However, any additional appropriation for upward reestimates of subsidy cost is not constrained by any budget caps. This design could result in a tendency to underestimate the initial subsidy costs of a discretionary program. Portraying a loan program as less costly than it really is when competing for funds means more or larger loans or loan guarantees could be made with a given appropriation because the program then could rely on a permanent appropriation for subsequent reestimates to cover any shortfalls. This built-in incentive is one reason to monitor subsidy reestimates. Monitoring reestimates is a key control over tendencies to underestimate costs as well as a barometer of the quality of agencies’ estimation processes. When credit reform was enacted, it generally was recognized that agencies did not have the capacity to implement fully the needed changes in their accounting systems in the short-term and that the transition to budgeting and accounting on a present-value basis would be difficult. However, policy makers expected that once agencies established a systematic approach to subsidy estimation based on auditable assumptions, present value-based budgeting for credit would provide them with significantly better information. MARAD has not fully complied with some key Title XI program requirements. We found that MARAD generally complied with requirements to assess an applicant’s economic soundness before issuing loan guarantees. MARAD used waivers or modifications, which, although permitted by MARAD regulations, allowed MARAD to approve some applications even though borrowers had not met all financial requirements. MARAD did not fully comply with regulations and established practices pertaining to project monitoring and fund disbursement. 
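The credit subsidy cost described above under FCRA is, in essence, the present value of the government's expected cash outflows (such as default payouts) less inflows (such as fees and recoveries) over the life of a guarantee. The sketch below computes that figure for an invented cash flow schedule; the 5 percent discount rate and the amounts are hypothetical, and actual FCRA estimates follow OMB-prescribed methodology and Treasury-based discount rates.

    def subsidy_cost(net_outflows, rate):
        """Present value of expected net government cash flows for a guarantee.
        net_outflows[t] is the expected outflow in year t+1 (payouts minus fees
        and recoveries); a positive result is a net cost to the government."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(net_outflows, start=1))

    # Hypothetical 5-year guarantee: fee income in early years (negative outflows),
    # an expected default payout in year 4, and a small recovery in year 5.
    flows = [-500_000, -500_000, -500_000, 6_000_000, -250_000]
    print(f"Estimated subsidy cost: ${subsidy_cost(flows, 0.05):,.0f}")

An upward reestimate in a later year is simply this computation redone with updated expectations, which is why reestimates serve as a barometer of the quality of the original assumptions.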
Finally, while MARAD has guidance governing the disposition of defaulted assets, adherence to this guidance is not mandatory, and MARAD did not always follow it in the defaulted cases we reviewed. We looked at five MARAD-financed projects (see table 1). MARAD regulations do not permit MARAD to guarantee a loan unless the project is determined to be economically sound. MARAD generally complied with requirements to assess an applicant's economic soundness before approving loan guarantees; specifically, we were able to find documentation addressing supply and demand projections and other economic soundness criteria for the projects included in our review. In 2002, MARAD's Office of Statistical and Economic Analysis found a lack of a standardized approach for conducting market analyses. Because of this concern, in November 2002, it issued guidance for conducting market research on marine transportation services. However, adherence to these guidelines is not required. According to the Department of Transportation (DOT) Assistant Secretary for Administration, the market research guidelines developed by the Office of Statistical and Economic Analysis were neither requested nor approved by Title XI program management. Finally, while MARAD may not waive economic soundness criteria, officials from the Office of Statistical and Economic Analysis, which is responsible for providing independent assessments of the market impact on economic soundness, expressed concern that their findings regarding economic soundness might not always be fully considered when MARAD approved loan guarantees. They cited a recent instance where they questioned the economic soundness of a project that was later approved without their concerns being addressed. According to the Associate Administrator for Shipbuilding, all concerns, including economic soundness concerns, are considered by the MARAD Administrator. Shipowners and shipyard owners are also required to meet certain financial requirements during the loan approval process. However, MARAD used waivers or modifications, which, although permitted by Title XI regulations, allowed MARAD to approve some applications even though borrowers had not met all financial requirements that pertained to working capital, long-term debt, net worth, and owner-invested equity. For example, AMCV's Project America, Inc., did not meet the qualifying requirements for working capital, among other things. Although MARAD typically requires companies to have positive working capital, an excess of current assets over current liabilities, the accounting requirements for unearned passenger payments significantly affect this calculation because this deferred revenue is treated as a liability until earned. Because a cruise operator would maintain large balances of current liabilities, MARAD believed it would be virtually impossible for AMCV to meet a positive working capital requirement if sound cash management practices were followed. Subsequently, MARAD used cash flow tests for Project America, Inc., in lieu of working capital requirements for purposes of liquidity testing. According to the Assistant Secretary for Administration, one of the major cruise lines uses cash flow tests as a measure of its liquidity.
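The working capital issue here is simple arithmetic: because unearned passenger payments sit on the balance sheet as a current liability until the voyage is taken, a cruise operator can show negative working capital even while generating sound cash flow. The sketch below illustrates the effect; all balance sheet figures are invented for the example.

    def working_capital(current_assets, current_liabilities):
        """Working capital is the excess of current assets over current liabilities."""
        return current_assets - current_liabilities

    current_assets = 80_000_000              # hypothetical cash, receivables, etc.
    other_current_liabilities = 50_000_000   # hypothetical payables and accruals
    deferred_passenger_revenue = 60_000_000  # ticket payments not yet earned

    wc = working_capital(current_assets,
                         other_current_liabilities + deferred_passenger_revenue)
    print(f"Working capital: ${wc:,.0f}")    # negative solely because of deferrals

This is the distortion that led MARAD to substitute cash flow tests for the working capital requirement in the Project America case.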
According to MARAD officials, waivers or modifications help them meet the congressional intent of the Title XI program, which is to promote the growth and modernization of the U.S. merchant marine industry. Further, they told us that the uniqueness of the Title XI projects and marine financing lends itself to the use of waivers and modifications. However, by waiving or modifying financial requirements, MARAD officials may be taking on greater risk in the loans they are guaranteeing. Consequently, the use of waivers or modifications could contribute to the number or severity of loan guarantee defaults and subsequent federal payouts. In a recent review, the Department of Transportation Inspector General (IG) noted that the use of modifications increases the risk of the loan guarantee to the government and expressed concern about MARAD undertaking such modifications without taking steps to mitigate those risks. The IG recommended that MARAD require a rigorous analysis of the risks from modifying any loan approval criteria and impose compensating requirements on borrowers to mitigate these risks. MARAD did not fully comply with requirements and its own established practices pertaining to project monitoring and fund disbursement. Program requirements specify periodic financial reporting, controls over the disbursement of loan funds, and documentation of amendments to loan agreements. MARAD could not always demonstrate that it had complied with financial reporting requirements. In addition, MARAD could not always demonstrate that it had determined that projects had made progress prior to disbursing loan funds. Also, MARAD broke with its own established practices for determining the amount of equity a shipowner must invest prior to MARAD making disbursements from the escrow fund. MARAD did so without documenting this change in the loan agreement. Ultimately, weaknesses in MARAD's monitoring practices could increase the risk of loss to the federal government. MARAD regulations specify that the financial statements of a company in receipt of a loan guarantee shall be audited at least annually by an independent certified public accountant. In addition, MARAD regulations require companies to provide semiannual financial statements. However, MARAD could not demonstrate that it had received required annual and semiannual statements. For example, MARAD could not locate several annual or semiannual financial statements for the Massachusetts Heavy Industries (MHI) project. Also, MARAD could not find the 1999 and 2000 semiannual financial reports for AMCV. The AMCV financial statements were later restated, as a result of a Securities and Exchange Commission (SEC) finding that AMCV had not complied with generally accepted accounting principles in preparing its financial statements. In addition, several financial statements were missing from MARAD records for Hvide Van Ommeran Tankers (HVIDE) and Global Industries Ltd. When MARAD could provide records of financial statements, it was unclear how the information was used. Further, the DOT IG, in its review of the Title XI program, found that MARAD had no established procedures or policies incorporating periodic reviews of a company's financial well-being once a loan guarantee was approved. An analysis of financial statements might have alerted MARAD to companies' financial problems and given it a better chance to minimize losses from defaults.
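A periodic review of this kind need not be elaborate; even a simple comparison of expense growth against revenue growth would flag a deteriorating borrower. The sketch below performs that comparison on a hypothetical multiyear series whose overall direction loosely mirrors the AMCV figures discussed next (revenue rising about 25 percent while combined costs rise much faster); the year-by-year numbers are invented.

    def pct_change(first, last):
        """Total growth between the first and last reporting periods."""
        return (last - first) / first

    # Hypothetical annual figures, in millions of dollars.
    revenue = [176, 190, 205, 220]
    operating_costs = [111, 122, 133, 143]
    sga_costs = [41, 50, 62, 75]

    rev_growth = pct_change(revenue[0], revenue[-1])
    cost_growth = pct_change(operating_costs[0] + sga_costs[0],
                             operating_costs[-1] + sga_costs[-1])
    if cost_growth > rev_growth:
        print(f"Flag for review: expenses up {cost_growth:.0%}, "
              f"revenue up only {rev_growth:.0%}")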
AMCV's statements illustrate the point. Between 1993 and 2000, AMCV had net income in only 3 years and lost a total of $33.3 million. Our analysis showed a significant decline in financial performance since 1997. Specifically, AMCV showed a net income of $2.4 million in 1997, with losses for the next 3 years, and losses reaching $10.1 million in 2000. Although AMCV's revenue increased steadily during this period by a total of 25 percent, or nearly $44 million, expenses far outpaced revenue during this period. For example, the cost of operations increased 29 percent, or $32.3 million, while sales and general and administrative costs increased over 82 percent, or $33.7 million. During this same period, AMCV's debt also increased over 300 percent. This scenario, combined with the decline in tourism after September 11, 2001, caused AMCV to file for bankruptcy. On May 22, 2001, Litton Ingalls Shipbuilding notified AMCV that it was in default of its contract due to nonpayment. Between May 22 and August 23, 2001, MARAD received at least four letters from Ingalls, the shipbuilder, citing its concern about the shipowner's ability to pay construction costs. However, it was not until August 23 that MARAD prepared a financial analysis to help determine the likelihood of AMCV or its subsidiaries facing bankruptcy or another catastrophic event. MARAD could not always demonstrate that it had linked disbursement of funds to progress in ship construction, as MARAD requires. We were not always able to determine from available documents the extent of progress made on the projects included in our review. For example, a number of Project America, Inc., disbursement requests did not include documentation that identified the extent of progress made on the project. Also, while MARAD requires periodic on-site visits to verify the progress on ship construction or shipyard refurbishment, we did not find evidence of systematic site visits and inspections. For Project America, Inc., MARAD did not have a construction representative committed on-site at Ingalls Shipyard, Inc., until May 2001, 2 months after MARAD's Office of Ship Design and Engineering Services recommended a MARAD representative be located on-site. For the Searex Title XI loan guarantee, site visits were infrequent until MARAD became aware that Ingalls had cut the vessels into pieces to make room for other projects. For two projects rated low-risk, Hvide Van Ommeran Tankers and Global Industries, Ltd., we found MARAD conducted site visits semiannually and annually, respectively. We reviewed MHI's shipyard modernization project, which was assigned the highest risk rating, and found evidence that construction representatives conducted monthly site visits. However, in most instances, we found that a project's risk was not routinely linked to the extent of project monitoring. Further, without a systematic approach to on-site visits, MARAD relied principally on the shipowner's certification and documentation of money spent in making decisions to approve disbursements from the escrow fund. We also found that, in a break with its own established practice, MARAD permitted a shipowner to define total costs in a way that permitted earlier disbursement of loan funds from the escrow fund. MARAD regulations require that shipowners expend from their own funds at least 12.5 percent or 25 percent, depending on the type of vessel or technology, of the actual cost of a vessel or shipyard project prior to receiving MARAD-guaranteed loan funds.
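The equity test established by this regulation is a one-line calculation, which makes the absence of any system flag notable. The sketch below computes the owner funds required under MARAD's established practice and shows how excluding costs from the estimated total defers the requirement, the practice described next; the 12.5 percent rate comes from the regulation, while the dollar figures are invented (chosen so the deferral matches the $18 million discussed below).

    def required_equity(estimated_total_cost, equity_fraction=0.125):
        """Owner funds that must be expended before guaranteed funds are disbursed.
        MARAD regulations set the fraction at 12.5 or 25 percent, depending on the
        type of vessel or technology."""
        return equity_fraction * estimated_total_cost

    full_cost = 440_000_000   # hypothetical estimated total project cost
    excluded = 144_000_000    # hypothetical costs excluded from the estimate

    base = required_equity(full_cost)
    reduced = required_equity(full_cost - excluded)
    print(f"Equity required on the full cost basis:    ${base:,.0f}")
    print(f"Equity required on the reduced cost basis: ${reduced:,.0f}")
    print(f"Owner equity deferred by the exclusion:    ${base - reduced:,.0f}")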
In practice, MARAD has used the estimated total cost of the project to determine how much equity the shipowner should provide. In the case of Project America, Inc., the single largest loan guarantee in the history of the program, we found that MARAD permitted the shipowner to exclude certain costs in determining the estimated total costs of the ship at various points in time, thereby deferring owner-provided funding while receiving MARAD-guaranteed loan funds. This was the first time MARAD used this method of determining equity payments, and MARAD did not document this agreement with the shipowner as required by its policy. In September 2001, MARAD amended the loan commitment for this project, permitting the owner to further delay the payment of equity. By then, MARAD had disbursed $179 million in loan funds. Had MARAD followed its established practice for determining equity payments, the shipowner would have been required to provide an additional $18 million. Because MARAD had not documented its agreements with AMCV, the amount of equity the owner should have provided was not apparent during this period. Further, MARAD systems do not flag when the shipowner has provided the required equity payment for any of the projects it finances. MARAD officials cited several reasons for the agency's limited monitoring of Title XI projects, including insufficient staff resources, travel budget restrictions, and limited enforcement tools. For example, officials of MARAD's Office of Ship Construction, which is responsible for inspection of vessels and shipyards, told us that they had only two persons available to conduct inspections, and that the office's travel budget was limited. The MARAD official with overall responsibility for the Title XI program told us that, at a minimum, the Title XI program needs three additional staff. The Office of Ship Financing needs two additional persons to enable a more thorough review of company financial statements and more comprehensive preparation of credit reform materials. Also, the official said that the Office of the Chief Counsel needs to fill a long-standing vacancy to enable more timely legal review. With regard to documenting the analysis of financial statements, MARAD officials said that, while they do require shipowners and shipyard owners to provide financial statements, they do not require MARAD staff to prepare a written analysis of the financial condition of the Title XI borrower. The DOT Assistant Secretary for Administration noted that if financial documents were not submitted after a request for missing documents was made, MARAD's only legal recourse was to call the loan in default, pay off the Title XI debt, and then seek recovery against the borrower. He said that MARAD tries to avoid taking these steps. We found no evidence that MARAD routinely requested missing financial statements or did any analysis. Also, the IG report on the Title XI program released in March 2003 noted that MARAD does not closely monitor the financial health of its borrowers over the term of their loans. We recognize that MARAD has limited enforcement resources; however, for publicly traded companies such as AMCV, financial statements filed with the Securities and Exchange Commission could be used. We found no evidence that MARAD attempted to use SEC filings. Inconsistent monitoring of a borrower's financial condition limits MARAD's ability to protect the federal government's financial interests.
For example, without such monitoring, MARAD would not know that a borrower's financial condition had changed and could not take timely action to avoid defaults or minimize losses. Further, MARAD's practices for assessing project progress limit its ability to link disbursement of funds to progress made by shipowners or shipyard owners. This could result in MARAD disbursing funds without a vessel or shipyard owner making sufficient progress in completing projects. Likewise, permitting project owners to minimize their investment in MARAD-financed projects increases the risk of loss to the federal government. MARAD has guidance governing the disposition of defaulted assets. However, MARAD is not required to follow this guidance, and we found that MARAD does not always adhere to it. MARAD guidelines state that an independent, competent marine surveyor or MARAD surveyor shall survey all vessels, except barges, as soon as practicable after the assets are taken into custody. In the case of filed or expected bankruptcy, an independent marine surveyor should be used. In the case of Searex, MARAD conducted on-site inspections after the default. However, these inspections were not conducted in time to properly assess the condition of the assets. With funds no longer coming in from the project, Ingalls cut the vessels into pieces to make it easier to move the vessels from active work-in-process areas to other storage areas within the property. The Searex lift boat and hulls were cut before MARAD inspections were made. According to a MARAD official, the cutting of one Searex vessel and parts of the other two Searex vessels under construction reduced the value of the defaulted assets. The IG report on the Title XI program released in March 2003 noted that site visits were conducted on guaranteed vessels or property only in response to problems or notices of potential problems from third parties or from borrowers. The guidelines also state that sales and custodial activities shall be conducted in such a fashion as to maximize MARAD's overall recovery with respect to the asset and debtor. Market appraisals (valuations) of the assets shall be performed by an independent appraiser, as deemed appropriate, to assist in the marketing of the asset. MARAD did not have a market appraisal for the defaulted Project America assets. Also, MARAD relied on an interested party to determine the cost of making Project America I seaworthy. An appraisal of Project America assets immediately after default would have assisted MARAD in preparing a strategy for offering the hull of Project America I and the parts of Project America II for sale. According to MARAD officials, as of March 2003, MARAD had received $2 million from the sale of the Project America I and II vessels. Without a market appraisal, it is unclear whether this was the maximum recovery MARAD could have received. MARAD hired the Defense Contract Audit Agency (DCAA) to verify the costs incurred by Northrop Grumman Ship Systems, Inc., since January 1, 2002, for preparing and delivering Project America I in a weather-tight condition suitable for ocean towing in international waters. A MARAD official said that the DCAA audit would allow MARAD to identify any unsupported costs and recover these amounts from the shipyard. The DCAA review was used to verify costs incurred, but not to make a judgment as to the reasonableness of the costs. DCAA verified costs of approximately $17 million.
MARAD officials cite the uniqueness of the vessels and projects as the reason for using guidelines instead of requirements for handling defaulted assets. However, certain practices for handling defaulted assets can be helpful regardless of the uniqueness of a project. Among these are steps to immediately assess the value of the defaulted asset. Without a definitive strategy and clear requirements, defaulted assets may not always be secured, assessed, and disposed of in a manner that maximizes MARAD's recoveries—resulting in unnecessary costs and financial losses to the federal government. Private-sector maritime lenders we interviewed told us that it is imperative for lenders to manage the financial risk of maritime lending portfolios. In contrast to MARAD, they indicated that to manage financial risk, among other things, they (1) establish a clear separation of duties for carrying out different lending functions; (2) adhere to key lending standards with few, if any, exceptions; (3) use a more systematic approach to monitoring the progress of projects; and (4) primarily employ independent parties to survey and appraise defaulted projects. The lenders try to be very selective when originating loans for the shipping industry. While MARAD does not operate for profit, it could benefit from the internal control practices employed by the private sector to use its limited resources more effectively and to enhance its ability to accomplish its mission. Table 2 describes the key differences in private-sector and MARAD maritime lending practices used during the application, monitoring, and default and disposition phases. Private-sector lenders manage financial risk by establishing a separation of duties to provide a system of checks and balances for important maritime lending functions. Two private-sector lenders indicated that there is a separation of duties for approving loans, monitoring projects financed, and disposing of assets in the event of default. For example, marketing executives from two private-sector maritime lending institutions stated that they do not have lending authority. Also, separate individuals are responsible for accepting applications and processing transactions for loan underwriting. In contrast, we found that the same office that promotes and markets the MARAD Title XI program also has influence and authority over the office that approves and monitors Title XI loans. In February 1998, MARAD created the Office of Statistical and Economic Analysis in an attempt to obtain independent market analyses and initial recommendations on the impact of market factors on the economic soundness of projects. Today, this office reports to the Associate Administrator for Policy and International Trade rather than the Associate Administrator for Shipbuilding. However, the Associate Administrator for Shipbuilding is primarily responsible for overseeing the underwriting and approving of loan guarantees. Title XI program management is primarily handled by offices that report to the Associate Administrator for Shipbuilding. In addition, the same Associate Administrator controls, in collaboration with the Chief of the Division of Ship Financing Contracts within the Office of the Chief Counsel, the disposition of assets after a loan has defaulted. Most recently, MARAD has taken steps to consolidate responsibilities related to loan disbursements.
In August 2002, the Maritime Administrator gave the Associate Administrator for Shipbuilding sole responsibility for reviewing and approving the disbursement of escrow funds. According to a senior official, prior to August 2002 this responsibility was shared with the Office of Financial and Rate Approvals under the supervision of the Associate Administrator for Financial Approvals and Cargo Preference. As a result of the consolidation, the same Associate Administrator who is responsible for underwriting and approving loan guarantees and disposing of defaulted assets is also responsible for approving loan disbursements and monitoring financial condition. MARAD undertook this consolidation in an effort to improve the performance of analyses related to the calculation of shipowners' equity contributions and the monitoring of changes in financial condition. However, as mentioned earlier, MARAD does not have controls for clearly identifying the shipowner's required equity contribution. The consolidation of responsibilities for approving loan disbursements does not address these weaknesses and precludes any potential benefit from separation of duties. The private-sector lenders we interviewed said they apply rigorous financial tests for underwriting maritime loans. They analyze financial statements such as balance sheets, income statements, and cash flow statements, and they use certain financial ratios, such as liquidity and leverage ratios, that indicate the borrower's ability to repay. Private-sector maritime lenders told us they rarely grant waivers, or exceptions, to underwriting requirements or approve applications when borrowers do not meet key minimum requirements. Each lender we interviewed said any approved applicants were expected to demonstrate stability in terms of cash on hand, financial strength, and collateral. One lender told us that on the rare occasions when exceptions to the underwriting standards were granted, an audit committee had to approve any exception or waiver to the standards after reviewing the applicant's circumstances. However, according to one MARAD official, waivers are often made without a deliberative process. Nonetheless, MARAD points to its concurrence system as a deliberative process for key agency officials to concur on loan guarantees and major waivers and modifications. However, as mentioned earlier, the official responsible for performing a macro analysis of the market is not always included in the concurrence process. We found in the cases we reviewed that MARAD often permits waivers or modifications of key financial requirements. Also, a recent IG report found that MARAD routinely modified financial requirements in order to qualify applicants for loan guarantees. Further, the IG noted that MARAD reviewed applications for loan guarantees primarily with in-house staff and recommended that MARAD formally establish an external review process as a check on MARAD's internal loan application review. A MARAD official told us that MARAD is currently developing procedures for an external review of waivers and modifications. These private-sector lenders also indicated that preparing an economic analysis or an independent feasibility study assists in determining whether to approve funding, based on review and discussion of the marketplace, competition, and project costs. Each private-sector lender we interviewed agreed that performance in the shipping industry was cyclical and that the timing of projects was important.
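For illustration, the liquidity and leverage tests these lenders described might be applied along the lines of the following sketch. The specific ratios and threshold values are assumptions chosen for the example, not standards reported by any lender we interviewed or by MARAD.

```python
# Illustrative underwriting tests. The ratio choices and thresholds are
# assumptions for this sketch, not figures from any lender or from MARAD.

def passes_underwriting_tests(current_assets, current_liabilities,
                              total_debt, total_equity,
                              min_current_ratio=1.2, max_debt_to_equity=2.0):
    """Apply simple liquidity and leverage tests to an applicant's balance sheet."""
    current_ratio = current_assets / current_liabilities  # liquidity test
    debt_to_equity = total_debt / total_equity            # leverage test
    return current_ratio >= min_current_ratio and debt_to_equity <= max_debt_to_equity

# Example: $12 million in current assets, $8 million in current liabilities,
# $30 million in total debt, and $20 million in equity.
print(passes_underwriting_tests(12e6, 8e6, 30e6, 20e6))  # True: ratios of 1.5 and 1.5
```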
In addition, reviewing historical data provided information on future prospects for a project. For example, one lender uses these economic analyses to evaluate how important the project will be to the overall growth of the shipping industry. Another lender uses the economic analyses and historical data to facilitate the sale of a financed vessel. In the area of economic soundness analysis, MARAD requirements appear closer to those of the private-sector lenders, in that external market studies are also used to help determine the overall economic soundness of a project. However, assessments of economic soundness prepared by the Office of Statistical and Economic Analysis may not be fully considered when MARAD approves loan guarantees. Private-sector lenders minimized financial risk by establishing loan monitoring and control mechanisms such as analyzing financial statements and assigning risk ratings. Each private-sector lender we interviewed said that conducting periodic reviews of a borrower's financial statements helped to identify adverse changes in the financial condition of the borrower. For example, two lenders stated that they annually analyzed financial statements such as income statements and balance sheets. The third lender evaluated financial statements quarterly. Based on the results of these financial statement reviews, private-sector lenders then reviewed and evaluated the risk ratings that had been assigned at the time of approval. Two lenders commented that higher risk ratings indicated a need for closer supervision, and they then might require the borrower to submit monthly or quarterly financial statements. In addition, a borrower might be required to increase cash reserves or collateral to mitigate the risk of a loan. Further, the lender might accelerate the maturity date of the loan. MARAD notes that in certain cases, such as a loan guarantee to a subsidiary of Enron, it already uses such requirements. The DOT IG noted that MARAD should place covenants in its loan guarantees concerning the required financial performance and condition of its borrowers, as well as measures to which MARAD is entitled should these provisions be violated. However, the IG expressed concern that MARAD's minimal monitoring approach would not provide financial information in a timely and sufficient manner. Private-sector lenders use risk ratings in monitoring overall risk, which in turn helps them maintain a balanced maritime portfolio. At MARAD, we found no evidence that staff routinely analyzed or evaluated financial statements or changed risk categories after a loan was approved. For example, we found in our review that for at least two financial statement reporting periods, MARAD was unable to provide financial statements for the borrower, and, in one case, a financial statement was submitted only after MARAD had committed to guarantee funds. Our review of the selected Title XI projects indicated that risk categories were primarily assigned for purposes of estimating credit subsidy costs at the time of application, not for use in monitoring the project. Further, we found no evidence that MARAD changed a borrower's risk category when its financial condition changed. In addition, neither the support office that was initially responsible for reviewing and analyzing financial statements nor the office currently responsible maintained a centralized record of the financial statements they had received.
Further, while one MARAD official stated that financial analyses were performed by staff and communicated verbally to top-level agency officials, MARAD did not prepare and maintain a record of these analyses. Private-sector lenders also manage financial risk by linking the disbursement of loan funds to the progress of the project. All the lenders we interviewed varied project monitoring based on financial and technical risk, familiarity with the shipyard, and the uniqueness of the project. Two lenders thought that on-site monitoring was very important in determining the status of projects. Specifically, one lender hires an independent marine surveyor to visit the shipyard to monitor construction progress. This lender requires signatures on loan disbursement requests from the shipowner, shipbuilder, and loan officer before disbursing any loan funds, and it relies on technical managers and classification society representatives who frequently visit the shipyard to monitor progress. Depending on project size and complexity, the lender's shipping executives make weekly, and often daily, calls to shipowners to further monitor the project, and shipowners must provide monthly progress reports. MARAD also relied on site visits to verify construction progress. However, the linkage between the progress of the project and the disbursement of loan funds was not always clear. MARAD tried to adjust the number of site visits based on the amount of the loan guarantee, the uniqueness of the project (for example, whether the ship is the first of its kind for the shipowner), the degree of technical and engineering risk, and familiarity with the shipyard. However, the frequency of site visits was often dependent upon the availability of travel funds, according to a MARAD official. Private-sector maritime lenders said they regularly use independent marine surveyors and technical managers to appraise and conduct technical inspections of defaulted assets. For example, two lenders hire independent marine surveyors who are knowledgeable about the shipbuilding industry and have commercial lending expertise to inspect the visible details of all accessible areas of the vessel, as well as its marine and electrical systems. In contrast, we found that MARAD did not always use independent surveyors. For example, we found that for Project America, the shipbuilder was allowed to survey and oversee the disposition of the defaulted asset. As mentioned earlier, MARAD hired DCAA to verify the costs incurred by the shipbuilder to make the defaulted asset ready for sale; however, MARAD did not verify whether the costs incurred were reasonable or necessary. For Searex, construction representatives and officials from the Offices of the Associate Administrator of Shipbuilding and the Chief of the Division of Ship Financing Contracts were actively involved in the disposition of the assets. According to top-level MARAD officials, the chief reason for the difference between private-sector and MARAD techniques for approving loans, monitoring project progress, and disposing of assets is the public purpose of the Title XI program, which is to promote the growth and modernization of the U.S. merchant marine and U.S. shipyards. That is, MARAD's program purposefully provides for greater flexibility in underwriting in order to meet the financing needs of shipowners and shipyards that otherwise might not be able to obtain financing.
MARAD is also more likely to work with borrowers that are experiencing financial difficulties once a project is under way. MARAD officials also cited limited resources in explaining the limited nature of project monitoring. While program flexibility in financial and economic soundness standards may be necessary to help MARAD meet its mission objectives, the strict use of internal controls and management processes is also important. Otherwise, resources that could have been used to further the program might be wasted. To aid agencies in improving internal controls, we have recommended that they identify the risks that could impede their ability to efficiently and effectively meet their goals and objectives. Private-sector lenders employ internal controls such as a systematic review of waivers during the application phase and risk ratings of projects during the monitoring phase. However, MARAD does neither. Without a more systematic review of underwriting waivers, MARAD might not be giving sufficient consideration to the additional risk such decisions represent. Likewise, without a systematic process for assessing changes in payment risk, MARAD cannot use its limited monitoring resources most efficiently. Further, by relying on interested parties to estimate the value of defaulted loan assets, MARAD might not maximize the recovery on those assets. Overall, by not employing the limited internal controls it does possess, and not taking advantage of basic internal controls such as those private-sector lenders employ, MARAD cannot ensure it is effectively using its limited administrative resources or the government's limited financial resources. MARAD uses a relatively simplistic cash flow model that is based on outdated assumptions, which lack supporting documentation, to prepare its estimates of defaults and recoveries. These estimates differ significantly from recent actual experience. Specifically, we found that in comparison with recent actual experience, MARAD's default estimates have significantly understated defaults, and its recovery estimates have significantly overstated recoveries. If the pattern of recent experience were to continue, MARAD would have significantly underestimated the costs of the program. Agencies should use sufficient, reliable historical data to estimate credit subsidies and update, or reestimate, these estimates annually based on an analysis of actual program experience. While the nature and characteristics of the Title XI program make it difficult to estimate subsidy costs, MARAD has never performed the basic analyses necessary to determine whether its default and recovery assumptions are reasonable. Finally, OMB has provided little oversight of MARAD's subsidy cost estimate and reestimate calculations. FCRA was enacted, in part, to require that the federal budget reflect a more accurate measurement of the government's subsidy costs for loan guarantees. To determine the expected cost of a credit program, agencies are required to predict or estimate the future performance of the program. For loan guarantees, this cost, known as the subsidy cost, is the present value of the estimated cash outflows from the government, primarily payments for loan defaults, minus the present value of the estimated cash inflows, chiefly loan guarantee fees and recoveries. Agency management is responsible for accumulating relevant, sufficient, and reliable data on which to base the estimate and for establishing and using reliable records of historical credit performance.
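In present-value terms, the subsidy cost just described can be written as follows. The notation is ours, added for illustration: $D_t$, $F_t$, and $R_t$ are the estimated default payments, guarantee fees, and recoveries in year $t$ of a $T$-year guarantee, and $r$ is the discount rate.

$$\text{subsidy cost} = \sum_{t=0}^{T} \frac{D_t - F_t - R_t}{(1+r)^{t}}$$

A positive result is an expected cost to the government; a negative result, in which fees and recoveries are expected to exceed default payments, is the "negative subsidy" discussed later in this report.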
In addition, agencies are supposed to use a systematic methodology to project expected cash flows into the future. To accomplish this task, agencies are instructed to develop a cash flow model, using historical information and various assumptions including defaults, prepayments, recoveries, and the timing of these events, to estimate future loan performance. MARAD uses a relatively simplistic cash flow model, which contains five assumptions (default amount, timing of defaults, recovery amount, timing of recoveries, and fees), to estimate the cost of the Title XI loan guarantee program. We found that relatively minor changes in these assumptions can significantly affect the estimated cost of the program and that, thus far, three of the five assumptions (default amounts, recovery amounts, and the timing of defaults) have differed significantly from recent actual historical experience. According to MARAD officials, these assumptions were developed in 1995 based on actual loan guarantee experience of the previous 10 years and have not been evaluated or updated since. MARAD could not provide us with supporting documentation to validate its estimates, and we found no evidence of any basis to support the assumptions used to calculate these estimates. MARAD also uses separate default and recovery assumptions for each of seven risk categories to differentiate between levels of risk and costs for different loan guarantee projects. We attempted to analyze the reliability of the data supporting MARAD's key assumptions, but we were unable to do so because MARAD could not provide us with any supporting documentation for how the default and recovery assumptions were developed. Therefore, we believe MARAD's subsidy cost estimates to be questionable. Because MARAD has not evaluated its default and recovery rate assumptions since they were developed in 1995, the agency does not know whether its cash flow model is reasonably predicting borrower behavior and whether its estimates of loan program costs are reasonable. The nature and characteristics of the Title XI program make it difficult to estimate subsidy costs. Specifically, MARAD approves a small number of guarantees each year, leaving it with relatively little experience on which to base estimates for the future. In addition, each guarantee is for a large dollar amount, and projects have unique characteristics and cover several sectors of the market. Further, when defaults occur, they are usually for large dollar amounts and may not take place during easily predicted time frames. Recoveries may be equally difficult to predict and may be affected by the condition of the underlying collateral. This leaves MARAD with relatively limited information upon which to base its credit subsidy estimates. Also, MARAD may not have the resources to properly implement credit reform. MARAD officials expressed frustration that they do not have, and therefore cannot devote, the necessary time and resources to adequately carry out their credit reform responsibilities. Notwithstanding these challenges, MARAD has not performed the basic analyses necessary to assess and improve its estimates. According to MARAD officials, they have not analyzed the default and recovery rates because most of their loan guarantees are in about year 7 of the 25-year term of the guarantee, and it is too early to assess the reasonableness of the estimates.
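The following is a minimal sketch of how a cash flow model built on these five assumptions could produce a subsidy cost estimate, and of how sensitive that estimate is to a change in a single assumption. Every figure in the sketch is invented for illustration; none is an actual MARAD assumption.

```python
# Minimal five-assumption cash flow model: default amount, timing of defaults,
# recovery amount, timing of recoveries, and fees. All inputs below are
# invented for illustration; they are not MARAD's actual assumptions.

def subsidy_cost(guarantee, default_rate, default_year,
                 recovery_rate, recovery_lag, upfront_fee_rate,
                 discount_rate=0.05):
    """Present value of estimated outflows (default claims) minus inflows
    (fees and recoveries), in the simplified form described above."""
    pv = lambda amount, year: amount / (1 + discount_rate) ** year
    claim = guarantee * default_rate
    return (pv(claim, default_year)                                   # outflow: claim paid
            - pv(claim * recovery_rate, default_year + recovery_lag)  # inflow: recovery
            - guarantee * upfront_fee_rate)                           # inflow: fee at year 0

# Changing only the timing of defaults (year 14 versus year 7) flips the sign
# of the estimate on a hypothetical $100 million guarantee.
late = subsidy_cost(100e6, 0.10, 14, 0.50, 2, 0.03)
early = subsidy_cost(100e6, 0.10, 7, 0.50, 2, 0.03)
print(f"defaults in year 14: ${late/1e6:+.1f}M; in year 7: ${early/1e6:+.1f}M")
```

Even in this toy version, moving the assumed default timing alone turns an estimated gain to the government into an estimated cost, which is why undocumented and unevaluated assumptions are a serious weakness.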
We disagree with MARAD's assessment that it is too early and believe that an analysis of the past 5 years of actual default and recovery experience is meaningful and could provide management with valuable insight into how well its cash flow models are predicting borrower behavior and how well its estimates are predicting the loan guarantee program's costs. We further believe that, while difficult, an analysis of its risk category system is meaningful for ensuring that MARAD appropriately classifies loan guarantee projects into risk category subdivisions that are relatively homogeneous in cost. Of loans originated in the past 10 years, nine have defaulted, totaling $489.5 million in defaulted amounts. Eight of these nine defaults, totaling $487.7 million, occurred since MARAD implemented its risk category system in 1996. Because these eight defaults represent the vast majority (99.6 percent) of MARAD's default experience, we compared the performance of all loans guaranteed between 1996 and 2002 with MARAD's estimates of loan performance for this period. We found that actual loan performance has differed significantly from agency estimates. For example, when defaults occurred, they took place much sooner than estimated. On average, defaults occurred 4 years after loan origination, while MARAD had estimated that, depending on the risk category, peak defaults would occur in years 10–18. Also, actual default costs thus far have been much greater than estimated. Using MARAD data and assumptions, we calculated that MARAD would have expected $45.5 million in defaults to date on loans originated since 1996. However, as illustrated by figure 2, MARAD has consistently underestimated the amount of defaults the Title XI program would experience. In total, $487.7 million has actually defaulted during this period, more than 10 times the estimated amount. Even when we excluded AMCV, which represents about 68 percent of the defaulted amounts, from our analysis, we found that the amount of defaults MARAD experienced exceeded its estimate by $114.6 million (over 260 percent). In addition, MARAD's estimated recovery rate of 50 percent of defaulted amounts within 2 years of default is greater than the actual recovery rate experienced since 1996, as can be seen in figure 3. Although actual recoveries on defaulted amounts since 1996 have taken place within 1–3 years of default, most of these recoveries were substantially less than estimated, and two defaulted loans have had no recoveries to date. For the actual defaults that have taken place since 1996, MARAD would have estimated, using the 50 percent recovery rate assumption, that it would recover approximately $185.3 million. However, MARAD has recovered only $94.9 million, or about 51 percent, of its estimated recovery amount. When we excluded AMCV from our analysis, we found that MARAD has more accurately estimated the amount it would recover on defaulted loans and, in fact, has underestimated the actual amount by about $10 million (or about 15 percent). If the overall pattern of recent default and recovery experiences were to continue, MARAD would have significantly underestimated the costs of the program. We also attempted to analyze the process MARAD uses to designate risk categories for projects, but we were unable to do so because the agency could not provide us with any documentation about how the risk categories and MARAD's related numerical weighting system originally were developed.
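Before turning to the risk category system, the scale of the variances reported above can be checked directly from the figures in this section:

```python
# Figures as reported above, in millions of dollars (loans originated 1996-2002).
estimated_defaults = 45.5          # defaults expected to date under MARAD's assumptions
actual_defaults = 487.7            # defaults actually experienced
print(actual_defaults / estimated_defaults)        # about 10.7: more than 10 times

# Excluding AMCV (about 68 percent of defaulted amounts), actual defaults
# still exceeded the estimate by $114.6 million.
actual_excl_amcv = actual_defaults * (1 - 0.68)    # about 156
implied_estimate = actual_excl_amcv - 114.6        # about 41
print(114.6 / implied_estimate)                    # roughly 2.7, i.e., over 260 percent
                                                   # (rounding of the 68 percent figure
                                                   # accounts for the small spread)

estimated_recoveries = 185.3       # 50 percent assumption applied to actual defaults
actual_recoveries = 94.9
print(actual_recoveries / estimated_recoveries)    # about 0.51
```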
According to OMB guidance, risk categories are subdivisions of a group of loans that are relatively homogeneous in cost, given the facts known at the time of designation. Risk categories combine all loan guarantees within these groups that share characteristics that are statistically predictive of defaults and other costs. OMB guidance states that agencies should develop statistical evidence, based on historical analysis, concerning the likely costs of expected defaults for loans in a given risk category. MARAD has not analyzed its risk category system since implementing it in 1996 to determine whether loans in a given risk category share characteristics that are predictive of defaults and other costs, as the guidance contemplates. In addition, according to a MARAD official, MARAD's risk category system is partially based on outdated MARAD regulations and has not been updated to reflect changes to these regulations. Further, MARAD's risk category system is flawed because it does not consider concentrations of credit risk. To assess the impact of concentration risk on MARAD's loss experience, we analyzed the defaults for loans originated since 1996 and found that five of the eight defaults, totaling $330 million, or 68 percent of total defaults, involved loan guarantees that had been made to one particular borrower, AMCV. Assessing concentration of credit risk is a standard practice in private-sector lending. According to the Federal Reserve Board's Commercial Bank Examination Manual, the limits imposed by various state and federal legal lending restrictions are intended to prevent an individual or a relatively small group from borrowing an undue amount of a bank's resources and to safeguard the bank's depositors by spreading loans among a relatively large number of people engaged in different businesses. Had MARAD factored concentration of credit into its risk category system, it would likely have produced higher estimated losses for these loans. After the end of each fiscal year, OMB generally requires agencies to update, or "reestimate," loan program costs to reflect differences between estimated loan performance and related costs and the actual program costs recorded in accounting records, as well as expected changes in future economic performance. The reestimates are to include all aspects of the original cost estimate, such as prepayments, defaults, delinquencies, recoveries, and interest. Reestimates allow agency management to compare original budget estimates with actual costs to identify variances from the original estimates, assess the reasonableness of the original estimates, and adjust future program estimates, as appropriate. When significant differences between estimated and actual costs are identified, the agency should investigate to determine the reasons behind the differences and adjust its assumptions, as necessary, for future estimates and reestimates. We attempted to analyze MARAD's reestimate process, but we were unable to do so because the agency could not provide us with adequate supporting data on how it determined whether a loan should have an upward or downward reestimate. According to agency management, each loan guarantee is reestimated separately based on several factors, including the borrower's financial condition, a market analysis, and the remaining balance of the outstanding loans. However, without conducting our own independent analysis of these and other factors, we were unable to determine whether any of MARAD's reestimates were reasonable.
Further, MARAD has reestimated the loans that were disbursed in fiscal years 1993, 1994, and 1995 downward so that they now have negative subsidy costs, indicating that MARAD expects these loans to be profitable. However, according to the default assumptions MARAD uses to calculate its subsidy cost estimates, these loans have not yet been through the period of peak default, which would occur in years 10–18, depending on the risk category. MARAD officials told us that several of these loans were paid off early and that the risk of loss in the remaining loans is less than the estimated fees paid by the borrowers. However, MARAD officials were unable to provide us with adequate supporting information on the agency's assessment of the borrowers' financial condition or on how it determined the estimated default and recovery amounts, so we could not assess the reasonableness of these reestimates. Our analysis of MARAD's defaults and recoveries demonstrates that, when defaults occur, they occur sooner and are for far greater amounts than estimated, and that recoveries are smaller than estimated. As a result, we question the reasonableness of the negative subsidies for the loans that were disbursed in fiscal years 1993, 1994, and 1995. MARAD's ability to calculate reasonable reestimates is seriously impaired by the same outdated assumptions it uses to calculate cost estimates, as well as by the fact that it has not compared these estimates with actual default and recovery experience. As discussed earlier, our analysis shows that, since 1996, MARAD has significantly underestimated defaults and overestimated recoveries to date. Without performing this basic analysis, MARAD cannot determine whether its reestimates are reasonable, it is unable to improve these reestimate calculations over time, and it cannot provide Congress with reliable cost information for key funding decisions. In addition, as discussed earlier, MARAD has not devoted sufficient resources to credit reform, which appears to limit its ability to carry out these responsibilities adequately. Based on our analysis, we believe that OMB provided little review and oversight of MARAD's estimates and reestimates. OMB has final authority for approving estimates in consultation with agencies; OMB approved each MARAD estimate and reestimate, explaining to us that it delegates authority to agencies to calculate estimates and reestimates. However, MARAD has little expertise in the credit reform area and has not devoted sufficient resources to developing this expertise. FCRA assigns responsibility to OMB for coordinating credit subsidy estimates, developing estimation guidelines and regulations, and improving cost estimates, including coordinating the development of more accurate historical data and annually reviewing the performance of loan programs to improve cost estimates. Had OMB provided greater review and oversight of MARAD's estimates and reestimates, it would have realized that MARAD did not have adequate support for the default and recovery assumptions it uses to calculate subsidy cost estimates. MARAD does not operate the Title XI loan guarantee program in a businesslike fashion that minimizes the federal government's fiscal exposure.
MARAD does not (1) fully comply with its own requirements and guidelines, (2) have a clear separation of duties for handling loan approval and fund disbursement functions, (3) exercise diligence in considering and approving modifications and waivers, (4) adequately secure and assess the value of defaulted assets, and (5) know what the program costs. Because of these shortcomings, MARAD lacks assurance that it is effectively promoting the growth and modernization of the U.S. merchant marine and U.S. shipyards or minimizing the risk of financial loss to the federal government. Consequently, the Title XI program could be vulnerable to waste, fraud, abuse, and mismanagement. Finally, MARAD's questionable subsidy cost estimates do not give Congress a basis for knowing the true costs of the Title XI program, and Congress cannot make well-informed policy decisions when providing budget authority. If the pattern of recent experience were to continue, MARAD would have significantly underestimated the costs of the program. We recommend that Congress consider discontinuing future appropriations for new loan guarantees under the Title XI program until adequate internal controls have been instituted to manage risks associated with the program and MARAD has updated its default and recovery assumptions to more accurately reflect the actual costs associated with the program, and that Congress consider rescinding the unobligated balances in MARAD's program account. We also recommend that Congress consider clarifying borrower equity contribution requirements. Specifically, we recommend that Congress consider legislation requiring that the entire equity down payment, based on the total cost of the project including total guarantee fees currently expected to be paid over the life of the project, be paid by the borrower before the proceeds of the guaranteed obligation are made available. Further, we recommend that Congress consider legislation that requires MARAD to consider, in its risk category system, the risk associated with approving projects from a single borrower that would represent a large percentage of MARAD's portfolio. We recommend that the Secretary of Transportation direct the Administrator of the Maritime Administration to take immediate action to improve the management of the Title XI loan guarantee program.
Specifically, to better comply with Title XI loan guarantee program requirements and manage financial risk, MARAD should
- establish a clear separation of duties among the loan application, project monitoring, and default management functions;
- establish a systematic process that ensures independent judgments of the technical, economic, and financial soundness of projects during loan guarantee approval;
- establish a systematic process that ensures the findings of each contributing office are considered and resolved before approval of loan guarantee applications involving waivers of and exceptions to program requirements;
- systematically monitor and document the financial condition of borrowers and link the level of monitoring to the level of project risk;
- base the borrower's equity down payment requirement on a reasonable estimate of the total cost of the project, including total guarantee fees expected to be incurred over the life of the project;
- make apparent the amount of equity funds a shipowner or shipyard owner is required to provide;
- establish a system of controls, including automated controls, to ensure that disbursements of loan funds are not made before a shipowner or shipyard owner meets the equity fund requirement;
- create a transparent, independent, and risk-based process for verifying and documenting the progress of projects under construction prior to disbursing guaranteed loan funds;
- review risk ratings of loan guarantee projects at least annually; and
- establish minimum requirements for the management and disposition of defaulted assets, including a requirement for an independent evaluation of asset value.

To better implement federal credit reform, MARAD should
- establish and implement a process to annually compare estimated to actual defaults and recoveries by risk category, investigate any material differences that are identified, and incorporate the results of these analyses in its estimates and reestimates;
- establish and implement a process to document the basis for each key cash flow assumption, such as defaults, recoveries, and fees, and retain this documentation in accordance with applicable records retention requirements;
- establish and implement a process to document the basis for each reestimate, including an analysis of a borrower's financial condition and a market analysis;
- review its risk category system to ensure that it appropriately classifies projects into subdivisions that are relatively homogeneous in cost, given the facts known at the time of designation, and that risks and changes to risks are reflected in annual reestimates; and
- consider, in its risk category system, the risk associated with approving projects from a single borrower that would represent a large percentage of MARAD's portfolio.

To ensure that the reformed Title XI program is carried out effectively and in conformity with program and statutory requirements, MARAD should conduct a comprehensive assessment of its human capital and other resource needs. Such an analysis should also consider the human capital needed to improve and strengthen credit reform data collection and analyses. To assist and ensure that MARAD better implements credit reform, and given the questionable nature of MARAD's estimates and reestimates, we also recommend that the Director of OMB provide greater review and oversight of MARAD's subsidy cost estimates and reestimates. We provided a draft of this report to DOT for its review and comment.
We received comments from the department's Assistant Secretary for Administration, who noted that MARAD has already begun to take steps to improve the operations of the Title XI program consistent with several of our recommendations. The department disagreed with the manner in which we characterized some report findings and provided additional information and data that we have incorporated into our analyses and report as appropriate. We also provided a copy of the draft report to OMB for its review and comment. We received comments from OMB's Program Associate Director for General Government Programs and its Assistant Director for Budget, who agreed that recent recovery expectations should be incorporated into future reestimates but disagreed that OMB had provided little or no oversight of the program's subsidy cost estimates. The department noted that its Office of Inspector General recently identified a number of issues raised in our report and that MARAD is already addressing these issues. MARAD recognized that aspects of the program's operation need improvement and said it is working to fine-tune program operations and create additional safeguards. Specifically, MARAD has agreed to improve procedures for financial review, seek authorization for outside assistance in cases of unusual complexity, and expand, within resource constraints, its processes for monitoring company financial condition and the condition of assets. The department pointed out that MARAD is permitted, under Title XI regulations, to modify or waive financial criteria for loan guarantees. DOT reported that, before issuing waivers in the future, MARAD will identify any needed compensatory measures to mitigate associated risks. MARAD also agreed to consider using outside financial advisors to review uniquely complicated cases. In addition, DOT reported that MARAD is working to improve its financial monitoring processes by developing procedures to better document its regular assessments of each company's financial health. The department stated that MARAD plans to highlight the results of these assessments to top agency management for any Title XI companies experiencing financial difficulties. The department also reported that MARAD is developing a system that leverages limited staff resources to provide more extensive monitoring of Title XI vessel condition. In this regard, DOT said MARAD is establishing a documentation process for each vessel that would include improved recordkeeping of annual certificates from the U.S. Coast Guard, vessel classification societies, and insurance underwriters. MARAD hopes to use this system, together with company financial condition assessments, to determine whether additional inspections are necessary. In addition, DOT indicated that MARAD has begun an analysis of the program's results covering the full 10-year period since FCRA was implemented to improve the accuracy of subsidy cost estimates. We agree that MARAD should conduct this analysis as part of its annual reestimate process to determine whether estimated loan performance is reasonably close to actual performance, and we are encouraged that MARAD has been able to obtain the historical data to conduct such an analysis. We had attempted to perform a similar analysis to assess the basis MARAD used for its default and recovery assumptions, but MARAD was unable to provide us with the data.
The department believes that our analysis may provide results that do not accurately reflect the management of the program as a whole and that the results we report are affected by our sample selection. It points out that the report is based on an analysis of only 5 projects, 3 of which are defaulted projects, representing a minute segment of the Title XI program's universe, even though the program experienced only 9 defaults out of 104 projects financed over the last 10 years. We do not contend that this sample is representative of all of the projects MARAD finances. However, we do believe that these case studies uncover policies that permeate the program and do not provide for adequate controls or for the most effective methods for protecting the government's interest. In addition, our conclusions also draw on the work of a recent IG review, which looked at 42 Title XI projects, as well as a comparison with practices of selected private-sector lenders and our own experience in analyzing loan guarantee programs throughout the federal government. The department also believes that, as a result of our emphasis on projects involving construction financing, a significant portion of the report is directed at issues associated solely with that type of financing, which accounts for only about 30 percent of Title XI projects since 1993. The department believes it is important for us to recognize that most projects (70 percent) have been for mortgage period financing, for which no disbursements are made from an escrow fund and there is virtually no need for agency monitoring of the construction process, because the shipowner does not receive any Title XI funds until the vessel has been delivered and certified by the regulatory authorities as seaworthy. We believe that projects involving construction financing are at greater risk of fraud, waste, abuse, and mismanagement, and therefore require a greater level of oversight compared with projects involving only mortgage period financing. Again, as mentioned above, our overall conclusions are based on more than the cases we reviewed. DOT asserts that the report inaccurately portrays the assessment of the defaulted Searex assets and the verification of the costs for completing Project America I. In the case of Searex, the department believes that we implied that had the program officials rigorously adhered to program guidelines, the vessels would not have been dismantled. We believe that, while the use of rigorous program guidelines may not have prevented Ingalls from dismantling the vessels, adherence to existing program guidelines would have provided evidence of the value and condition of the assets at the time of default. This documentary evidence would have been advantageous had legal action occurred. In the case of Project America, DOT believes that the report incorrectly asserts that MARAD relied on an interested party, Ingalls Shipbuilding, Inc., to determine the value of the Project America I assets. The department believes that MARAD relied on the shipbuilder only to provide an estimate of the cost of making Project America seaworthy. We revised the report to reflect that MARAD did not obtain a market appraisal of the assets and that it relied on Ingalls to estimate the cost of making the vessel seaworthy.
We believe that in order to market the Project America assets, MARAD needs to know the costs of the available options, including the cost of making the hull seaworthy. The department also believes that the report does not convey a clear understanding of DCAA's role in the handling of Project America assets after default. We disagree with this assertion and believe that the report appropriately reflects DCAA's role as outlined in its report, Application of Agreed-Upon Procedures Incurred on Project America. DOT believes that the report uses a number of examples to show that granting waivers or "other occurrences" related to program guidelines somehow contributed to the three defaults among the cases studied, and it expresses concern that the report concludes that weak program oversight contributed to the defaults examined in the draft. First, the report correctly notes that MARAD is permitted to approve waivers under certain circumstances. Nonetheless, waiving financial requirements increases the risk borne by the federal government. MARAD is now recognizing this by agreeing to implement the IG recommendations calling for compensating provisions to mitigate risk when approving waivers. Second, the program's vulnerability to fraud, waste, abuse, and mismanagement stems not only from MARAD's noncompliance with program requirements but also from its lack of requirements for the management of defaulted assets, its failure to use basic internal control practices, such as separation of duties, and its inability to reasonably estimate the program's cost. With regard to the private-sector comparison, DOT does not agree that MARAD lacks a deliberative process for loan approvals. The department believes that, in each written loan guarantee analysis, MARAD discusses the basis for granting major modifications or waivers. Also, DOT believes MARAD has a deliberative process through its written concurrence system, whereby key agency offices have to concur on actions authorizing waivers or modifications. We revised the report to reflect the differing opinions of MARAD officials regarding the process for approving loan guarantees and waivers or modifications. We believe that it is not clear that MARAD uses a deliberative process, and our review of the project files showed that key agency offices were not always included in the concurrence process. DOT believes that the report should acknowledge that MARAD maintains separation of duties for disbursement. The report correctly notes that the ultimate decision to disburse funds is made by the same office that approves and monitors the Title XI loans; we added to the report the name of the office that is then instructed to disburse the funds. DOT noted that certain lenders consolidate rather than separate approval and monitoring functions in order to improve efficiencies. The lenders we spoke to, who are major marine lenders, do not combine these functions. They also separate approval and monitoring functions from marketing and disposition functions. Further, we do not believe that efficiencies achieved through consolidating these functions outweigh the greater vulnerability to fraud, waste, abuse, and mismanagement associated with consolidation. The department believes that MARAD's determination of subsidy costs is in accordance with OMB guidance.
While we did not assess MARAD's compliance with OMB guidance, MARAD did not comply with other applicable, more specific guidance, which states that estimated cash flows should be compared with actual cash flows and that estimates should be based on the best available data. The guidance is in the Accounting and Auditing Policy Committee's Technical Release 3, Preparing and Auditing Direct Loan and Loan Guarantee Subsidies Under the Federal Credit Reform Act. This guidance was developed by an interagency group, including members from OMB, Treasury, GAO, and various credit agencies, to provide detailed implementation guidance on how to prepare reasonable credit subsidies. Regardless of whether MARAD complied with all applicable guidance, because MARAD did not conduct this fundamental analysis to assess whether its cash flow model was reasonably predicting borrower behavior, it did not know that for the past 5 years defaults were occurring at a much higher rate and costing significantly more than estimated and that recoveries were significantly less than expected. In addition, MARAD did not appropriately incorporate these higher default rates and lower recovery rates into its cash flow models. The department also stated that the report should recognize that, as a result of its full compliance with FCRA, MARAD set aside adequate funds for all defaults to date. While MARAD may have complied with some of the broad requirements of FCRA in preparing estimates and reestimates, these estimates were based on outdated assumptions, and MARAD could not demonstrate that the estimates were based on historical data or other meaningful analyses. Further, DOT's response does not recognize that the appropriated funds are to cover expected losses over the life of the loan guarantee program. Because actual losses for the last 5 years have been significantly greater, and recoveries significantly less, than expected, future losses will have to be significantly less, and recoveries significantly greater, than estimated for MARAD not to require additional funding. In addition, DOT believes that our analysis of MARAD's subsidy estimates was inaccurate and based on incomplete or incorrect data and that we underreported actual recoveries from one of the defaulted projects (MHI). We disagree and believe our analysis was accurate, based on the information MARAD had provided. In its comments, the department provided new information on recoveries for the MHI project. We have now incorporated these new data, as appropriate, into our analysis. We did not include the data provided on guarantee fees because these fees are paid up front and should not be included in estimates of recoveries. The department also provided technical comments, which we have incorporated as appropriate. The department's comments appear in appendix II. OMB agreed that recent recovery expectations on certain defaulted guarantees cited in our report should be incorporated into future reestimates and plans to ensure that these expectations are reflected in next year's budget. Further, OMB plans to work with MARAD to review recovery expectations for other similar loan guarantees. In addition, OMB has been working with DOT and MARAD staff to implement recommendations contained in the IG report and expects that the resulting changes will also address many of the concerns raised in our report.
OMB disagreed with our finding that it provided little review and oversight of MARAD's subsidy cost estimates and reestimates and pointed to the substantial amount of staff time it devotes to working with agencies on subsidy cost estimates. OMB claims that the data used in our report do not seem to support our assertion of a lack of OMB oversight and disagrees with our implication that the overall subsidy rates would be higher if it had provided oversight. We clarified our report to convey the message that if OMB had provided greater oversight, it would have realized that MARAD did not have adequate support for the default and recovery assumptions it uses to calculate subsidy cost estimates. While OMB asserts that the number of default claims made between 1992 and 1999 is substantially in line with the assumptions underlying the estimated subsidy costs, we could not verify the magnitude and timing of defaults prior to the period included in our review (1996–2002) because MARAD could not provide data on historical default experience. Because MARAD could not provide adequate support for its default and recovery assumptions, we question the basis for the estimates and whether OMB had provided sufficient oversight. We continue to believe that MARAD's recent actual experience was significantly different from what MARAD had estimated and OMB had approved. Even when we excluded all of the AMCV projects, as well as the MHI project, from our analysis, we found that the amount of defaults MARAD experienced exceeded what MARAD estimated it would experience by $63.3 million (or about 177 percent). Should the program receive new funding in the future, the subsidy rate estimates should be calculated using updated default and recovery assumptions that incorporate recent actual experience. OMB also took issue with our use of data on the eight defaults, particularly those involving AMCV and MHI, in questioning MARAD's most recent reestimates of the costs of loans guaranteed between 1992 and 1995. However, we continue to question the reasonableness of the negative subsidies for the loans that were disbursed in fiscal years 1993, 1994, and 1995. First, the loans in these cohorts have not been through what MARAD considers the period of peak default, years 10–18 depending on the risk category. Second, MARAD was unable to provide us with adequate supporting information on how it determined the estimated default and recovery amounts. OMB agrees that recent experience should be used to calculate reestimates; it states in its comments that it generally requires agencies to use all historical data as a benchmark for future cost estimates and agreed that recent recovery experience should be incorporated into future reestimates. OMB's comments appear in appendix III. We are sending copies of this report to the Secretary of Transportation. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me or Mathew Scirè at (202) 512-6794. Major contributors to this report are listed in appendix IV. To determine whether MARAD complied with key Title XI program requirements, we identified key program requirements and reviewed how these were applied to the management of five loan guarantee projects. We judgmentally selected these five projects from a universe of 83 projects approved between 1996 and 2002.
The selected projects represent active and defaulted loans and five of the six risk categories assigned during the 1996–2002 period. The projects selected include barges, lift boats, cruise ships, and tankers. (See table 3.) Two of the selected shipowners had multiple Title XI loan guarantees during 1996–2002 (HVIDE had five guarantees, and AMCV, the parent company of Project America, Inc., had five). We interviewed agency officials and reviewed provisions of existing federal regulations set forth in Title 46, Part 298 of the Code of Federal Regulations to identify the key program requirements that influence the approval or denial of a Title XI loan guarantee. We reviewed internal correspondence and other documentation related to compliance with program requirements for the approval of the loan guarantee, ongoing monitoring of the project, and disposition of assets for loans resulting in default. We interviewed agency officials and staff members from the Title XI support offices that contribute to the approval and monitoring of loans and the disposal of a loan resulting in default. Also, we interviewed a retired MARAD employee involved in one of the projects. In addition, we interviewed officials who represented AMCV/Project America, Inc., including the former Vice President and General Counsel and former outside counsel. To determine how MARAD's practices for managing financial risk compare with those of selected private-sector maritime lenders, we interviewed two leading worldwide maritime lenders and one leading maritime lender in the Gulf Coast region. We interviewed these lenders to become familiar with private-sector lending policies, procedures, and practices in the shipping industry. Among the individuals we interviewed were those responsible for portfolio management and asset disposition. We did not verify that the lenders followed the practices described to us. To assess MARAD's implementation of credit reform, we analyzed MARAD's subsidy cost estimation and reestimation processes and examined how the assumptions MARAD uses to calculate subsidy cost estimates compare with MARAD's actual program experience. We first identified the key cash flow assumptions MARAD uses to calculate its subsidy cost estimates. Once we identified these assumptions, we determined whether MARAD had a reliable basis (that is, whether MARAD had gathered sufficient, relevant, and reliable supporting data) for its estimates of program cost and its estimates of loan performance. We compared estimated program performance with actual program performance to determine whether variances between the estimates and actual performance existed. Further, we interviewed the MARAD officials who are responsible for implementing credit reform and compared the practices MARAD uses to implement credit reform with the practices identified in OMB and other applicable credit reform implementation guidance. We performed our work in Washington, D.C., and New York, N.Y., between September 2002 and April 2003 in accordance with generally accepted government auditing standards. In addition to those individuals named above, Kord Basnight, Daniel Blair, Rachel DeMarcus, Eric Diamant, Donald Fulwider, Grace Haskins, Rachelle Hunt, Carolyn Litsinger, Marc Molino, and Barbara Roesmann made key contributions to this report.

Title XI of the Merchant Marine Act of 1936, as amended, is intended to help promote growth and modernization of the U.S. merchant marine and U.S.
shipyards by enabling owners of eligible vessels and shipyards to obtain financing at attractive terms. The program has committed to guarantee more than $5.6 billion in ship construction and shipyard modernization costs since 1993, but it has experienced several large-scale defaults over the past few years. Because of concerns about the scale of recent defaults, GAO was asked to (1) determine whether MARAD complied with key program requirements, (2) describe how MARAD's practices for managing financial risk compare with those of selected private-sector maritime lenders, and (3) assess MARAD's implementation of credit reform. The Maritime Administration (MARAD) has not fully complied with some key Title XI program requirements. While MARAD generally complied with requirements to assess an applicant's economic soundness before issuing loan guarantees, MARAD did not ensure that shipowners and shipyard owners provided required financial statements, and it disbursed funds without sufficient documentation of project progress. Overall, MARAD did not employ procedures that would help it adequately manage the financial risk of the program. MARAD could benefit from following the practices of selected private-sector maritime lenders. These lenders separate key lending functions, offer less flexibility on key lending standards, use a more systematic approach to loan monitoring, and rely on experts to estimate the value of defaulted assets. With regard to credit reform implementation, MARAD uses a simplistic cash flow model to calculate cost estimates, which have not reflected recent experience. If this pattern of recent experience were to continue, MARAD would have significantly underestimated the cost of the program. MARAD does not operate the program in a businesslike fashion. Consequently, MARAD cannot maximize the use of its limited resources to achieve its mission, and the program is vulnerable to fraud, waste, abuse, and mismanagement. Also, because MARAD's subsidy estimates are questionable, Congress cannot know the true costs of the program.
While the District of Columbia's Child and Family Services Agency (CFSA) is responsible for protecting thousands of foster care children, many children in CFSA's care languished for extended periods of time because of managerial shortcomings and long-standing organizational divisiveness in the District of Columbia. As a result of these deficiencies, the U.S. District Court for the District of Columbia issued a remedial order in 1991 to improve the performance of the agency. Under a modified final order established by the court in 1993, CFSA was directed to comply with many requirements. In 1995, because sufficient evidence of program improvement was lacking, the agency was removed from the District's Department of Human Services and placed in receivership. Among its efforts to improve agency performance, CFSA established an automated system, FACES, to manage its caseload. The District Court issued a consent order in 2000 establishing a process by which the agency's receivership could be ended. The order also established a probationary period, which began when the receivership ended, and identified performance standards CFSA had to meet in order to end that period. The court-appointed monitor, the Center for the Study of Social Policy, was to assess CFSA's performance and had discretion to modify the performance standards. In April 2001, CFSA became a cabinet-level agency within the government of the District of Columbia. In June 2001, the court removed CFSA from the receivership and its probationary period began. In October 2001, responsibility for child abuse investigations was transferred to CFSA from the District's Metropolitan Police Department. CFSA's probationary period ended in January 2003. However, in September 2002, the court-appointed monitor reported that a 7-year-old boy was abused by two children in a group home that CFSA had licensed to provide care for 9- to 21-year-olds. The report also identified several actions CFSA took or failed to take and concluded that the child was not adequately protected or served by CFSA. For example, contrary to its policies, CFSA did not place the child with his sibling, and there was no evidence that CFSA assessed his social, emotional, or behavioral needs. According to the court-appointed monitor, these events indicated that CFSA's operations and policies may still need improvement. CFSA operates in a complex child welfare system. Several federal laws, local laws, and regulations have established goals and processes under which CFSA must operate. The Adoption and Safe Families Act of 1997 (ASFA), one of whose goals is to place children in permanent homes in a timelier manner, placed new responsibilities on all child welfare agencies nationwide. ASFA introduced new time periods for moving children toward permanent, stable care arrangements and established penalties for noncompliance. For example, ASFA requires child welfare agencies to hold a permanency planning hearing, during which the court determines the future plans for a child, such as whether the state should continue to pursue reunification with the child's family or some other permanency goal, not later than 12 months after the child enters foster care. The District of Columbia Family Court Act of 2001 established the District's Family Court and placed several requirements on the District's Mayor and various District government agencies, including CFSA and OCC.
The District of Columbia Family Court Act requires the Mayor, in consultation with the Chief Judge of the Superior Court, to ensure that CFSA and other District government agencies coordinate the provision of social services and other related services to individuals served by the Family Court. CFSA relies on services provided by other District government agencies. For example, both the Fire Department and the Health Department inspect facilities where children are placed, and D.C. Public Schools prepare individual education plans for some foster care children. CFSA also works with agencies in Maryland, Virginia, and other states to arrange for placements of District children and works with private agencies to place children in foster and adoptive homes. In addition, CFSA is responsible for licensing and monitoring organizations with which it contracts, including group homes that house foster care children. The management of foster care cases involves several critical steps required by CFSA policy. (See fig. 1.) Typically, these cases begin with an allegation of abuse or neglect reported to the CFSA child abuse hot line. CFSA staff are required to investigate the allegations through direct contact with the reported victim. If necessary, the child may be removed from his or her home, necessitating various court proceedings handled by the District’s Family Court. CFSA caseworkers are responsible for managing foster care cases by developing case plans; visiting the children; participating in administrative review hearings involving CFSA officials, children, parents, and other officials; attending court hearings; and working with other District government agencies. CFSA caseworkers are also responsible for documenting the steps taken and decisions made related to a child’s safety, well-being, and proper foster care placement, as well as those related to developing the most appropriate goal for permanency. Depending on their circumstances, children leave foster care and achieve permanency through reunification with their birth or legal parents, adoption, legal guardianship with a relative, or independence. As of September 2002, a child’s length of stay in the District’s foster care program averaged 2.8 years. HHS is responsible for setting standards and monitoring the nation’s child welfare programs. In fiscal year 2001, about $6.2 billion in federal funds were appropriated to HHS for foster care and related child welfare services. HHS’s monitoring efforts include periodic reviews of the operations, known as Child and Family Services Reviews (CFSR), and of the automated systems, known as Statewide Automated Child Welfare Information System (SACWIS) Reviews, in the states and the District of Columbia. HHS last reviewed CFSA’s child welfare information system in 2000 and its overall program in 2001. CFSA undertook actions to implement six of the nine ASFA requirements we reviewed and met or exceeded four of the eight performance criteria included in our study, but as of March 2003, its performance improvement plans did not address all of the ASFA requirements that remained unimplemented or all of the unmet performance criteria. With regard to implementing ASFA requirements, for example, CFSA signed a border agreement to achieve more timely placement of District children in Maryland, which addresses the ASFA requirement to use cross-jurisdictional resources to facilitate timely adoptive or permanent placements for waiting children.
Table 1 summarizes CFSA’s progress in implementing the nine ASFA requirements that we reviewed. HHS’s review of CFSA found that the agency did not meet three requirements. CFSA did not consistently petition the Family Court to terminate parental rights when returning the child to his or her family had been deemed inappropriate and the child had been in foster care for 15 of the last 22 months. Based on its review of 50 foster care cases, HHS reported that 54 percent of the children who were in care longer than 15 months did not have hearings initiated for the termination of parental rights, and reasons for not initiating such hearings were not documented in the case plan or court order. HHS also found that not all cases had hearings to review a child’s permanency goal within the time frame prescribed by ASFA. In addition, foster parents, relative caretakers, and pre-adoptive parents were not consistently notified of reviews or hearings held on behalf of the foster child. Specifically, HHS found that caregivers and prospective caregivers were not always notified of the time and place of hearings, and in some cases received no notification at all. We also analyzed automated data from FACES related to eight foster care performance criteria and found that CFSA met or exceeded four of them. For example, one criterion requires 60 percent of children in foster care to be placed with one or more of their siblings; we found that as of November 30, 2002, 63 percent of children were placed with one or more siblings. The areas in which CFSA’s performance fell short were the criteria related to (1) caseworker visitation with children in foster care, (2) placement of children in foster homes with valid licenses, (3) progress toward permanency for children in foster care, and (4) parental visits with children in foster care who had a goal of returning home. For example, none of the 144 children placed in foster care during the 2-month period prior to November 30, 2002, received required weekly visits by a CFSA caseworker. Table 2 summarizes our analysis of the selected foster care performance criteria. CFSA’s Program Improvement Plan, which HHS requires to address areas found deficient in a CFSR, identifies how it will address two of the unmet ASFA requirements: (1) to initiate or join proceedings to terminate parental rights (TPR) of certain children in foster care and (2) to ensure that children have a permanency hearing every 12 months after entering foster care. For example, CFSA has outlined steps to improve its filings of TPR petitions with the Family Court. To help facilitate this process, CFSA hired additional attorneys to expedite the TPR proceedings. The new attorneys have been trained in ASFA requirements and in the process for referring these cases to the Family Court. CFSA is also developing a methodology for identifying and prioritizing cases requiring TPR petitions. In another plan, the April 2003 Implementation Plan, CFSA states that it will redesign its administrative review process to improve, among other things, notification and attendance of relevant parties and to provide for a comprehensive review of case progress, permanency goals, and adequacy of services. However, this plan does not make it clear whether all applicable hearings and proceedings will be included, such as permanency hearings. Another CFSA plan, the Interim Implementation Plan, includes measures that were developed to show the agency’s plans for meeting the requirements of the modified final order issued by the U.S.
District Court for the District of Columbia. This plan includes actions to address three of the four performance criteria the agency did not meet: visits between children in foster care and their parents, social worker visitation with children in foster care, and placement of children in foster homes with current and valid licenses. The plan states that, for new contracts, CFSA will require its contractors to identify community sites for parental visits to help facilitate visits between children in foster care and their parents. The plan also indicates that CFSA will concentrate on the recruitment and retention of caseworkers. According to CFSA officials, caseworkers would have more time for quality casework, including visitation with children, parents, and caregivers, once the agency hires more caseworkers. Additionally, the plan established a goal to have 398 unlicensed foster homes in Maryland licensed by December 31, 2002. According to an agency official, 104 of these foster homes remained unlicensed as of May 14, 2003. However, CFSA does not have written plans that address the performance criterion to reduce the number of children in foster care who, for 18 months or more, have had a permanency goal to return home. Without complete plans for improving performance on all measures, CFSA may find it difficult to comply with the ASFA requirements and meet the selected performance criteria. Furthermore, unless these requirements and criteria are met, the time a child spends in foster care may be prolonged, or the best decisions regarding a child’s future well-being may not be reached. CFSA officials cited several factors that hindered their ability to fully implement the ASFA requirements and meet the selected performance criteria, including court-imposed requirements, staffing shortages, and high caseloads. For example, program managers and supervisors said that the new court-imposed mediation process intended to address family issues without formal court hearings places considerable demands on caseworkers’ time. The time spent in court for mediation proceedings, which can be as much as 1 day, reduces the time available for caseworkers to respond to other case management duties, such as visiting with children in foster care. Furthermore, managers and supervisors reported that staffing shortages have contributed to delays in performing critical case management activities, such as identifying cases for which attorneys need to file TPR petitions. However, staffing shortages are not a problem unique to CFSA. We recently reported that caseworkers in other states said that staffing shortages and high caseloads had detrimental effects on their abilities to make well-supported and timely decisions regarding children’s safety. We also reported that as a result of these shortages, caseworkers have less time to establish relationships with children and their families, conduct frequent and meaningful home visits, and make thoughtful and well-supported decisions regarding safe and stable permanent placements. CFSA has established many foster care policies, but caseworkers did not consistently implement the six we selected. These policies covered the range of activities involved in a foster care case, but did not duplicate those examined in our review of the ASFA requirements or the selected foster care performance criteria. In addition, CFSA’s automated system lacked data on four of the six policies we examined for at least 70 percent of its active foster care cases.
Without information on all cases, caseworkers do not have a readily available summary of the child’s history needed to make decisions about a child’s care, and managers do not have information needed to assess and improve program operations. While we previously reported in 2000 that CFSA lacked some important child protection and foster care placement policies, CFSA has now established many such policies and most are comparable to those recommended by organizations that develop standards applicable to child welfare programs. For example, CFSA has policies for investigating allegations of child abuse, developing case plans, and establishing permanency goals for foster children. In addition, one policy is more rigorous than suggested standards. Specifically, CFSA’s policy requires an initial face-to-face meeting with children within 24 hours of reported abuse or neglect, while the suggested standard is 24 to 48 hours or longer, depending on the level of risk to the child’s safety and well-being. However, CFSA does not have some recommended policies, namely those addressing (1) written time frames for arranging needed services for children and families (e.g., tutoring for children and drug treatment for family members); (2) limits on the number of cases assigned to a caseworker, based on case complexity and worker experience; and (3) procedures for providing advance notice to each person involved in a case about the benefits and risks of services planned for a child and alternatives to those services. CFSA managers said that the agency had not established these policies because agency executives gave priority to complying with court-ordered requirements. CFSA did not consistently implement the policies we examined. We selected six policies that did not duplicate those examined in our review of the ASFA requirements or the selected foster care performance criteria in order to cover most of the case management duties and responsibilities. CFSA could not provide automated data regarding the implementation of one policy requiring administrative review hearings every 6 months. As for the remaining five policies, data in FACES indicate that caseworkers’ implementation of them varied considerably. Table 3 summarizes these five policies and the percentage of cases for which the data indicated the policy was implemented. The policies related to initiating face-to-face investigations and completing safety assessments are particularly critical to ensuring children’s safety. CFSA’s policy requires caseworkers to initiate an investigation of alleged child abuse or neglect within 24 hours of the call to CFSA’s hot line through face-to-face contact with the child. Also, caseworkers are required to complete a safety assessment within 24 hours of the face-to-face contact with the child. Although in some cases caseworkers took considerably longer than the time specified in the policy to take these actions, CFSA’s performance has improved. CFSA has reduced the average time it takes to make contacts and complete the assessments. In 2000, it took caseworkers an average of 18 days to initiate a face-to-face investigation, whereas in 2002 the average was 2 days. Similarly, caseworkers took an average of 30 days to complete safety assessments in 2000, whereas the average time declined to 6 days in 2002. Although there were cases that took much longer than the 24-hour limits, there were fewer in 2002 than in 2000.
CFSA caseworkers took 5 or more days to initiate a face-to-face investigation for 61 cases in 2000, and for 16 cases in 2002. Table 4 summarizes the number of cases for which caseworkers took 5 or more days to initiate investigations and complete safety assessments from 2000 through 2002. We also reviewed case files and examined related data from FACES for 30 foster care cases to assess compliance with policies requiring timely case planning, periodic administrative review hearings, and arrangements for needed services. The case files we reviewed were often voluminous and inconsistently organized, and they contained information that was not always traceable to data entered in FACES. Our review found that case plans were not routinely completed within 30 days, as required by CFSA policy. The FACES data provided subsequent to our case file review supported this assessment. We also found that for almost half of the cases we examined, administrative review hearings, which are held to ensure that key stakeholders are involved in decisions about a child’s permanent placement, were rescheduled, resulting in their being held less frequently than CFSA policy requires. CFSA policy requires that these hearings be held every 6 months, and FACES automatically schedules them to occur 6 months after the most recent hearing. However, CFSA officials are unable to track how frequently they are rescheduled or the length of time between hearings because the system overrides the dates of prior hearings. Agency officials explained that changes have been made to FACES to enable them to track how many times an administrative review is rescheduled. Long delays between administrative review hearings could mean delays in getting children into permanent placement. As for arranging needed services, we could not determine from case files or FACES whether services recommended by caseworkers were approved by supervisors or if all needed services were provided. The FACES data indicate that at least one service was provided for 83 percent of the cases, but do not include a complete record of all services caseworkers determine to be needed, nor do they indicate whether the services were provided on a timely basis. Officials said that several factors affected the implementation of some of the policies we reviewed. Caseworkers’ supervisors and managers explained that, generally, the policies were not always implemented because of limited staff and competing demands, and implementation was not documented because some caseworkers did not find FACES to be user friendly. Agency officials explained that, in part, the data on the implementation of the initial investigations and safety assessments reflected a change in who was responsible for the initial investigation of child abuse cases. Until October 2001, the District’s Metropolitan Police Department had this responsibility, and data on initial investigations were not entered into FACES. CFSA now has responsibility for both child abuse and neglect investigations. Further, program managers and supervisors said that several factors contributed to the time frames required to initiate face-to-face investigations, including difficulty in finding the child’s correct home address, in contacting the child when the family tries to hide the child from investigators, and even in obtaining vehicles to get to the location.
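Timeliness figures of the kind reported above are straightforward to compute once the relevant dates are recorded. The following Python sketch is purely illustrative and is not part of FACES; the record layout and field names are invented for the example. It derives, from a set of case records, the average number of days between the hotline call and the face-to-face contact, the number of cases at or above a reporting threshold, and the number of cases lacking the data entirely.

```python
from datetime import date

def timeliness_summary(cases, threshold_days=5):
    """Summarize days from hotline call to face-to-face contact:
    the average, the count at or above the threshold, and the count
    of cases with no contact date recorded."""
    elapsed = [
        (c["face_to_face"] - c["hotline_call"]).days
        for c in cases
        if c.get("face_to_face") is not None
    ]
    return {
        "cases_with_data": len(elapsed),
        "cases_missing_data": len(cases) - len(elapsed),
        "average_days": sum(elapsed) / len(elapsed) if elapsed else None,
        "cases_at_or_over_threshold":
            sum(1 for d in elapsed if d >= threshold_days),
    }

# Hypothetical records: one timely contact, one slow contact, and one
# case with the contact date never entered into the system.
cases = [
    {"hotline_call": date(2002, 3, 1), "face_to_face": date(2002, 3, 2)},
    {"hotline_call": date(2002, 5, 10), "face_to_face": date(2002, 5, 17)},
    {"hotline_call": date(2002, 7, 4), "face_to_face": None},
]
print(timeliness_summary(cases))
# {'cases_with_data': 2, 'cases_missing_data': 1, 'average_days': 4.0,
#  'cases_at_or_over_threshold': 1}
```

A computation of this kind depends entirely on the dates being entered in the first place, which is why missing data, discussed below, undermines both casework and oversight.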
Regarding administrative review hearings, the records indicate that they were rescheduled for a variety of reasons, such as the caseworker needing to appear at a hearing for another case or the attorney not being able to attend the hearing. Managers also said that the data on service delivery were not always entered into FACES because caseworkers sometimes arranged services by telephone and did not record them in the system afterward. CFSA officials said that they recently made changes to help improve the implementation of some of the policies we reviewed. They said that CFSA has focused on reducing the number of cases for which a risk assessment had not been completed and has reduced the number of these investigations open more than 30 days from 807 in May 2001 to 263 in May 2002. CFSA officials said that they also anticipate a reduction in the number of administrative review hearings that are rescheduled. They said the responsibility for notifying administrative review hearing participants about a scheduled hearing was transferred from caseworkers to staff in CFSA’s administrative review unit, and they intend to provide notification well in advance of the hearings. Additionally, another official said that CFSA has begun testing a process to ensure that all needed services are provided within 45 days. Such improvements are needed because without consistently implementing policies for timely investigations and safety and risk assessments, a child may be subject to continued abuse and neglect. Delays in case plan preparation and in holding administrative review hearings delay efforts to place children in permanent homes or reunite them with their families. Further, without knowing whether children or families received needed services, CFSA cannot determine whether steps have been taken to resolve problems or improve conditions for children in its care, which also delays moving children toward their permanency goals. In addition to its policies for managing cases, CFSA has policies for licensing and monitoring group homes, plans for training staff in group homes, and a goal to reduce the number of young children in group homes. CFSA’s policies for group homes are based primarily on District regulations that went into effect July 1, 2002. For example, the regulations prohibited CFSA from placing children in an unlicensed group home as of January 1, 2003. According to CFSA officials, as of March 2003, all but one of CFSA’s group homes were licensed, and CFSA was in the process of removing children from that home. CFSA plans to monitor group homes by assessing their compliance with contractual provisions and licensing requirements. CFSA also plans to provide training to group home staff to make it clear that, as District regulations require, any staff member who observes or receives information indicating that a child in the group home has been abused must report it. Further, CFSA has a goal to reduce the number of children under 13 who are placed in group homes. According to agency officials, CFSA has reduced the number of children under 13 in group homes from 128 in August 2002 to 70 as of February 2003 and has plans to reduce that number even further by requiring providers of group home care to link with agencies that seek foster care and adoptive families. In our efforts to assess CFSA’s implementation of the selected foster care policies related to the safety and well-being of children, as shown in table 3, we determined that FACES lacked data on many active foster care cases.
In December 2000, we reported that FACES lacked complete case information, and caseworkers had not fully used it in conducting their daily casework. During our most recent review, we determined that FACES lacked data on four of six foster care policies for at least 70 percent of its active foster care cases. Of the 2,510 foster care cases at least 6 months old as of November 30, 2002, data were not available for 1,763. CFSA officials explained that all of these cases predated FACES, and the previous system was used primarily to capture information for accounting and payroll purposes, not for case management. Top agency managers said that transferring information from the paper files of cases that predated FACES into the system is not an agency priority. Additionally, FACES reports show that data were not available on many of the cases that entered the foster care system after FACES came on line. For example, complete data on the initiation of investigations and completion of safety assessments were not available for about half of the 943 cases that entered the foster care system after FACES came on line. CFSA officials explained that they intend to focus on improving a few data elements at a time for current and future events. Having systems that provide complete and accurate data is an important aspect of effective child welfare programs. HHS requires all states and D.C. to have an automated child welfare information system. These systems, known as SACWIS, must be able to record data related to key child welfare functions, such as intake management, case management, and resource management. In its review of FACES, HHS found CFSA’s system was in compliance with most of the requirements and identified several that needed improvement, including the requirements to prepare and document service/case plans and to conduct and record the results of case reviews. According to a CFSA official, D.C. responded to the HHS report and made changes to address most of the findings. He said that the changes included redesigning the FACES screens documenting service/case plans and the results of case reviews. These changes were made in collaboration with caseworkers to help improve usability. In addition to the standards and requirements established by HHS for all child welfare systems, the modified final order requirements established by the U.S. District Court for the District of Columbia direct CFSA to produce management data and many reports on its operations. For example, the modified final order requires that CFSA be able to produce a variety of data, such as the number of children (1) for whom a case plan was not developed within 30 days, (2) with a permanency goal of returning home for 12 months or more, and (3) placed in a foster home or facility who have been visited at specified intervals. Complete, accurate, and timely case management data enable caseworkers to quickly learn about new cases, supervisors to know the extent to which caseworkers are completing their tasks, and managers to know whether any aspects of the agency’s operations are in need of improvement. Child welfare automated systems need to have complete case data to help ensure effective management of child welfare programs. A child welfare expert said that there is a great need to transfer information from old case records to new automated systems. For example, the expert said that records of older teens have been lost, and, with them, valuable information such as the identity of a child’s father.
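A completeness audit of the kind described above, determining for each policy the share of active cases with any recorded data, can be sketched briefly. The Python fragment below is a hypothetical illustration; the policy field names are invented and do not reflect the actual FACES schema.

```python
# Hypothetical completeness audit; the policy field names below are
# invented for illustration and are not the actual FACES schema.
POLICY_FIELDS = [
    "investigation_initiated",
    "safety_assessment_completed",
    "case_plan_date",
    "services_arranged",
]

def completeness_by_policy(cases):
    """For each policy-related field, return the share of active cases
    with any recorded data for that field."""
    total = len(cases)
    return {
        field: sum(1 for c in cases if c.get(field) is not None) / total
        for field in POLICY_FIELDS
    } if total else {}

cases = [
    {"investigation_initiated": "2002-03-02", "case_plan_date": None},
    {"investigation_initiated": None, "safety_assessment_completed": None},
]
print(completeness_by_policy(cases))
# {'investigation_initiated': 0.5, 'safety_assessment_completed': 0.0,
#  'case_plan_date': 0.0, 'services_arranged': 0.0}
```

Rates like these make visible which data elements need attention first, consistent with officials' stated intention to improve a few data elements at a time.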
Without data in FACES, CFSA’s caseworkers will have to look for paper records in the case files, some of which are voluminous. This file review is much more time-consuming than reviewing an automated report; as a result, when cases are transferred, new caseworkers need more time to become familiar with them. CFSA has enhanced its working relationship with the D.C. Family Court by working more collaboratively, but several factors have hindered this relationship. By participating in committees and training sessions, collocating OCC attorneys with caseworkers, and communicating frequently, CFSA has enhanced its working relationship with the Family Court. CFSA participates in various planning committees with the Family Court, such as the Implementation Planning Committee, a committee to help implement the District of Columbia Family Court Act of 2001. CFSA caseworkers have participated in training sessions that include OCC attorneys and Family Court judges. These sessions provide all parties with information about case management responsibilities and various court proceedings, with the intent of improving mutual understanding of key issues. Additionally, CFSA assigned a liaison representative to the Family Court who is responsible for working with other District agency liaison representatives to assist social workers and case managers in identifying and accessing court-ordered services for children and their families at the Family Court. Also, since 2002, OCC attorneys have been located at CFSA and work closely with caseworkers. This arrangement has improved the working relationship between CFSA and the Family Court because the caseworkers and the attorneys are better prepared for court appearances. Furthermore, senior managers at CFSA and the Family Court communicated frequently about day-to-day operations as well as long-range plans involving foster care case management and related court priorities, and on several occasions expressed their commitment to improving working relationships. However, CFSA officials and Family Court judges also noted several hindrances that constrain their working relationship. These hindrances include the need for caseworkers to balance court appearances with other case management duties, an insufficient number of caseworkers, caseworkers who are unfamiliar with cases that have been transferred to them, and differing opinions about the responsibilities of CFSA caseworkers and judges. For example, although CFSA caseworkers are responsible for identifying and arranging services needed for children and their families, some caseworkers said that some Family Court judges overruled their service recommendations. Family Court judges told us that they sometimes made decisions about services for children because they believed caseworkers did not always recommend appropriate ones or provide the court with timely and complete information on the facts and circumstances of the case. Furthermore, the Presiding Judge of the Family Court explained that it was the judges’ role to listen to all parties and then make the best decisions by taking into account all points of view. Caseworkers and judges agreed that appropriate and timely decisions about services for children and their families are important decisions that can affect a child’s length of stay in foster care. CFSA officials and Family Court judges have been working together to address some of the hindrances that constrain their working relationship.
CFSA managers said that scheduling of court hearings has improved. According to agency officials, in March 2003, CFSA began receiving daily schedules from the Family Court with upcoming hearing dates. This information allows caseworkers to plan their case management duties such that they do not conflict with court appearances. Also, as of March 2003, court orders were scanned into FACES to help ensure that caseworkers and others involved with a case have more complete and accurate information. To help resolve conflicts about ordering services, CFSA caseworkers and Family Court judges have participated in sessions during which they share information about their respective concerns, priorities, and responsibilities in meeting the needs of the District’s foster care children and their families. CFSA has taken steps to implement several ASFA requirements, met several performance criteria, developed essential policies, and enhanced its working relationship with the Family Court. In addition, CFSA has implemented new group home policies, improved the average time caseworkers took to implement certain policies, and undertaken initiatives, in conjunction with the Family Court, to improve the scheduling of court hearings. However, CFSA needs to make further improvements in order to ensure the protection and proper and timely placement of all of the District’s foster care children. By implementing all ASFA requirements, meeting the performance criteria, and effectively implementing all policies, CFSA will improve children’s stays in the foster care system and reduce the time required to attain permanent living arrangements. Furthermore, complete, accurate, and timely case management data will enable caseworkers to quickly learn about new cases and the needs of children and their families, supervisors to know the extent to which caseworkers are completing all required tasks, and managers to know whether any critical aspects of the agency’s operations are in need of improvement. Without automated information on all cases, caseworkers do not have a readily available summary of the child’s history, which may be critical to know when making plans about the child’s safety, care, and well-being. To improve CFSA’s performance and outcomes for foster care children in the District of Columbia, we recommend that the Mayor require the Director of CFSA to (1) develop plans to fully implement all ASFA requirements; (2) establish procedures to ensure that caseworkers consistently implement foster care policies; and (3) document in FACES all activities related to active foster care cases, including information from paper case files related to the history of each active foster care case. We received written comments from the Director of the District of Columbia’s Child and Family Services Agency, who provided them on behalf of the Deputy Mayor for Children, Youth, Families, and Elders. These comments are reprinted in appendix II. The Director generally agreed with our findings related to the extent to which CFSA implemented ASFA requirements, developed policies, and improved its relationship with the D.C. Family Court. Although the CFSA Director did not directly address the recommendations, she generally agreed with the areas we identified for continued improvement and said that CFSA is deeply committed to continuing improvements in the FACES information system. Additionally, the Director provided overall comments concerning the (1) implementation plan, (2) establishment of CFSA, and (3) time frames of the receivership.
CFSA suggested that we modify the report to reflect strategies listed in the April 2003 Implementation Plan regarding timely notification and reducing the number of children in foster care for 18 months or more with a permanency goal of returning home. We changed the report to reflect the notification strategy but did not make changes regarding children and their progress toward permanency because the April 2003 plan did not include a relevant strategy. CFSA also suggested that we include the date the agency was established as a single cabinet-level District agency and the date the agency gained responsibility for abuse cases from the Metropolitan Police Department. We made these changes. Additionally, CFSA recommended that we discuss the policy implementation trends earlier in the report, and asked that we note the time period for the cases included in the HHS review, which we did. CFSA also asked that we explain that the data we collected and analyzed generally covered the October 1999 to mid-2002 period. We did not make this change. As explained in the scope and methodology section of the report, we reviewed and analyzed a variety of data related to all active foster care cases. Some of the data were as of November 2002, and some analyses were based on active cases that began prior to October 1999. The CFSA Director also made several detailed comments. As she suggested, we added language to clarify the requirement for a permanency hearing, included information on changes made to FACES regarding rescheduling administrative reviews, and corrected the number of CFSA staff assigned to the Family Court. We did not include the March 2003 data listed in the comments because we could not verify the accuracy of the data. Although the CFSA Director did not directly address the recommendations, we continue to believe that, in order for CFSA to further improve its performance, the agency should develop plans to fully implement all ASFA requirements, establish procedures to ensure that caseworkers consistently implement all foster care policies, and document in FACES all activities related to active foster care cases. As agreed with your office, unless you publicly release its contents earlier, we will make no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Mayor of the District of Columbia; the Deputy Mayor for Children, Youth, Families, and Elders; the Director of the District of Columbia Child and Family Services Agency; and the Chief Judge of the District of Columbia Superior Court. We will also make copies of this report available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-8403. Other contacts and staff acknowledgments are listed in appendix III. To provide a comprehensive assessment of the Child and Family Services Agency’s (CFSA) performance relative to Adoption and Safe Families Act of 1997 (ASFA) requirements and selected foster care performance criteria, we relied on several sources of information and analyses. We reviewed the U.S. Department of Health and Human Services’ (HHS) Child and Family Services Review (February 2002) and obtained and analyzed data to assess CFSA’s implementation of ASFA’s requirements.
Our analysis of CFSA’s implementation of ASFA identified whether the agency had implemented procedures in accordance with the ASFA requirements and did not assess the extent to which or how well it had implemented each requirement across all applicable foster care cases. To perform our assessment of CFSA’s performance with regard to the selected performance criteria established during its probationary period, we obtained and analyzed relevant automated data from FACES on all active foster care cases as of November 30, 2002, the last complete month for which data were available at the time of our work. We analyzed these data for six of the eight criteria. For the other two criteria, we analyzed data on all foster homes as of November 30, 2002, and data on case plans as of September 30, 2002. Additionally, we obtained and analyzed automated FACES data for 943 foster care cases that were at least 6 months old as of November 30, 2002, to assess how CFSA caseworkers implemented foster care policies that covered several key functions, from investigations through the delivery of services to foster children and their families. Many of the active foster care cases began prior to October 1999. We also obtained and analyzed reports by the court-appointed monitor to assess CFSA’s performance relative to the specified requirements and criteria. In addition, we reviewed and included relevant information from several of our prior reports on CFSA and the District’s Family Court. Further, we independently verified the reliability of automated data by reviewing related reports on the data maintained in FACES and by assessing the degree to which FACES contained erroneous or illogical data entries. To obtain additional information on policy implementation and documentation, we reviewed case files for children who entered the foster care system at different times. Our case file review included analyses of data contained in FACES and in paper case files for selected foster care cases. We pretested our data collection instrument for collecting case file information and received training in the content and use of FACES. In addition, while FACES did not contain all data on the implementation of the policies we selected, we analyzed information on CFSA’s most recent performance to provide a comprehensive assessment of various agency initiatives intended to improve implementation of foster care policies. We also reviewed federal and local laws, regulations, and selected CFSA policies. Using interview protocols, we interviewed CFSA executives, managers, and supervisors; OCC officials; officials of the Office of the Deputy Mayor for Children, Youth, Families, and Elders; Family Court judges and other court officials; and child welfare experts in organizations that recommend policies applicable to child welfare programs. The following individuals also made important contributions to this report: Sheila Nicholson, Vernette Shaw, Joel Grossman, and James Rebbe. D.C. Child and Family Services: Key Issues Affecting the Management of Its Foster Care. GAO-03-758T. Washington, D.C.: May 16, 2003. Foster Care: States Focus on Finding Permanent Homes for Children, but Long-Standing Barriers Remain. GAO-03-626T. Washington, D.C.: April 8, 2003. District of Columbia: Issues Associated with the Child and Family Services Agency’s Performance and Policies. GAO-03-611T. Washington, D.C.: April 2, 2003. Child Welfare: HHS Could Play a Greater Role in Helping Child Welfare Agencies Recruit and Retain Staff. GAO-03-357.
Washington, D.C.: March 31, 2003. District of Columbia: More Details Needed on Plans to Integrate Computer Systems With the Family Court and Use Federal Funds. GAO-02-948. Washington, D.C.: August 7, 2002. Foster Care: Recent Legislation Helps States Focus on Finding Permanent Homes for Children, but Long-Standing Barriers Remain. GAO-02-585. Washington, D.C.: June 28, 2002. D.C. Family Court: Progress Made Toward Planned Transition and Interagency Coordination, but Some Challenges Remain. GAO-02-797T. Washington, D.C.: June 5, 2002. D.C. Family Court: Additional Actions Should Be Taken to Fully Implement Its Transition. GAO-02-584. Washington, D.C.: May 6, 2002. D.C. Family Court: Progress Made Toward Planned Transition, but Some Challenges Remain. GAO-02-660T. Washington, D.C.: April 24, 2002. District of Columbia Child Welfare: Long-Term Challenges to Ensuring Children’s Well-Being. GAO-01-191. Washington, D.C.: December 29, 2000. Foster Care: Status of the District of Columbia’s Child Welfare System Reform Efforts. GAO/T-HEHS-00-109. Washington, D.C.: May 5, 2000.

The District of Columbia (D.C.) Child and Family Services Agency (CFSA) is responsible for protecting children at risk of abuse and neglect and ensuring that services are provided for them and their families. GAO was asked to discuss the extent to which CFSA has (1) met requirements of the Adoption and Safe Families Act (ASFA) of 1997 and other selected performance criteria, (2) adopted and implemented child protection and foster care placement policies, and (3) enhanced its working relationship with the D.C. Family Court. To address these questions, GAO analyzed data from CFSA's child welfare information system, known as FACES; reviewed laws, regulations, and reports; examined case files; and interviewed officials. CFSA's performance relative to three sets of measures (nine ASFA requirements, eight selected performance criteria, and six of the agency's foster care policies) has been mixed. The agency took actions to implement six of the nine ASFA requirements related to the safety and well-being of foster children and met or exceeded four of the eight selected foster care performance criteria, but its plans did not address all of the requirements not fully implemented or the unmet performance criteria. CFSA has established many foster care policies, but caseworkers did not consistently implement the six GAO examined. In addition, FACES lacked data related to four of the policies reviewed for at least 70 percent of its active foster care cases. CFSA has enhanced its working relationship with the D.C. Family Court, but several factors hindered this relationship. For example, CFSA's top management and Family Court judges talk frequently about foster care case issues. However, differing opinions among CFSA caseworkers and judges about their responsibilities have hindered the relationship. CFSA officials and Family Court judges have been working together to address these hindrances.
The inappropriate use of prescription drugs is a problem that is particularly acute for the elderly. The elderly use more prescription drugs than any other age group and are more likely to be taking multiple prescription drugs, which increases the probability of adverse drug reactions. Furthermore, the elderly are more susceptible to adverse drug reactions because of the aging process. As a result, many experts believe that some drugs are generally inappropriate for the elderly because equally effective and safer alternative drugs exist. Additionally, other drugs, though appropriate, should be used at reduced dosage levels to accommodate elderly physiology. Based on 1987 data from the National Medical Expenditure Survey, a research study published in July 1994 concluded that almost 25 percent of the noninstitutionalized elderly 65 or older used, at least once during the year, prescription drugs that are generally considered unsuitable for their age group. The study used a list of 20 drugs, based on criteria published in 1991, that generally should not be used by elderly patients. A second study published in October 1994 reinforced the findings of the earlier study. In this study, the researchers interviewed a sample of community residents 75 or older living in Santa Monica, California, during 1989 and 1990 about their use of prescription drugs, over-the-counter medications, and home remedies within the 4 weeks prior to the interview. The researchers used primarily the same criteria as the July 1994 study, but looked only at drug usage over a 1-month period rather than the entire year. This study concluded that 14 percent of those interviewed used at least one of the drugs generally identified as not suitable for elderly patients. Several experts we interviewed expressed reservations about the appropriateness of using 1991 criteria to evaluate prescription drug use in years before the criteria were developed. However, these experts did not disagree with the criteria themselves. To determine if there was much change in prescribing patterns after 1991, we analyzed data from the Medicare Current Beneficiary Survey conducted by HCFA’s Office of the Actuary to see what percentage of noninstitutionalized Medicare beneficiaries in 1992 used any of the 20 drugs. Our analysis showed that an estimated 17.5 percent of the almost 30 million senior citizens in the survey used at least one of these drugs in 1992. Although this represented an improvement over the 1987 data, more than one out of six elderly patients were still using prescription drugs generally considered unsuitable for their age group. Many health care practitioners questioned whether the use of these drugs should always be characterized as inappropriate. They maintained that, under certain circumstances, their use would be perfectly acceptable. For example, if a patient was already using a particular drug and doing well, there would be little medical justification for switching to another drug. Still, none of these practitioners said that this rationale would account for the high percentage of elderly patients using drugs deemed inappropriate. All the experts we interviewed agreed that the inappropriate use of prescription drugs continues to be a significant health problem. Several experts also pointed out that these research studies looked at only one type of the inappropriate use of prescription drugs.
In their opinion, when other examples are considered, such as potentially dangerous drug interactions or incorrect dosages, the percentage of senior citizens affected by the inappropriate use of prescription drugs would be even greater than the estimates provided in those studies. The elderly are more likely than other segments of the population to be affected by the inappropriate use of prescription drugs. As a group, the elderly are more likely to suffer from more than one disease or chronic condition concurrently, which means that they may take several different drugs at one time. As the number of prescriptions increases, so does the potential for adverse drug reactions caused by drug interactions or drug-disease contraindications. The physiological changes of aging are a major reason drugs have the potential to cause problems in the elderly. Elderly patients often lack the ability to eliminate drugs from their systems as efficiently as younger patients do because of decreased liver and kidney function. In addition, they are more sensitive to the effects of drugs. Thus, they are not able to accommodate the normal adult dosage. The inappropriate use of prescription drugs is a major cause of adverse drug reactions that, if severe enough, can result in hospitalization or death. Since the elderly are more vulnerable to the effects of inappropriate prescription drug use, they are at greater risk from adverse drug reactions than other segments of the population. Studies indicate that about 3 percent of all hospital admissions are caused by adverse drug reactions. However, the percentage is much higher for the elderly. One study estimated the percentage of hospitalizations of elderly patients due to adverse drug reactions to be 17 percent, almost six times the rate for the general population. Applying an average unit cost to the proportion of hospital admissions that are drug-related, FDA estimates that hospitalizations due to inappropriate prescription drug use cost about $20 billion annually. Less severe adverse drug reactions may go unnoticed or be discounted by both health practitioners and the elderly as the normal effects of the aging process. However, these side effects, such as drowsiness, loss of coordination, and confusion, can result in falls or car accidents. A study estimated that 32,000 senior citizens annually suffer hip fractures as a result of falls caused by adverse drug reactions. Another study concluded that about 16,000 car accidents resulting in injuries each year can be attributed to adverse drug reactions experienced by elderly drivers. Even if no serious bodily injury occurs, adverse drug reactions decrease the general quality of life for patients because of drug-induced mental impairment, loss of coordination, or addiction. The factors leading to the inappropriate use of prescription drugs are multifaceted and interconnected, according to experts we interviewed. These factors reflect the behavior of the physician, pharmacist, and patient, either collectively or individually. From the time a drug is prescribed to the point where the drug is taken, many possible events, often interconnected with each other, can lead to an adverse drug reaction or other serious results. The inappropriate use of prescription drugs can take several different forms, ranging from potentially life-threatening drug-drug interactions to therapeutic duplication (using two or more similar drugs to treat the same problem), which yields little benefit at increased cost.
Other examples of the inappropriate use of prescription drugs include drug-age contraindication, drug-allergy contraindication, drug-disease contraindication, incorrect drug dosage, incorrect duration of drug therapy, and less effective drug therapy. A physician, a pharmacist, or a patient may take or omit actions that can produce an adverse drug reaction. For example, a drug-drug interaction could be due to a physician not recognizing that a prescribed drug interacts badly with another prescribed medication or over-the-counter drug used by the patient. (See fig. 1.) The pharmacist may contribute to the situation by not detecting the negative interaction or by failing to determine which drugs the elderly patient is taking. The elderly patient may not give the doctor and pharmacist a complete list of all the medications, including over-the-counter drugs, that he or she is taking. Thus, all three parties may contribute to a drug-drug interaction, with potentially serious consequences to the patient. Health care professionals noted that the overuse and underuse of drug therapies may also contribute to the inappropriate use of prescription drugs. Drug overuse occurs when a medication is prescribed but either no medication was needed or an alternative treatment approach existed. For example, changes in diet and lifestyle may be more appropriate than drug therapy. More controversial is the selection of drug therapy over counseling to treat psychological conditions such as anxiety or depression. Drug underuse occurs when an appropriate medication either is not prescribed or is underprescribed. For example, one study reported that patients with advanced cancer were at risk of receiving less than adequate pain medication. According to several experts we interviewed, lowering the elderly’s risk of adverse drug reactions requires that more detailed information on the impact of drug therapies on the elderly be developed and disseminated to health practitioners. Furthermore, many health practitioners agreed that physicians, pharmacists, and patients should all participate in the drug therapy decision-making process. Increased communication between and among physicians, pharmacists, and patients is vital to ensuring that this process is effective. One difficulty in prescribing drugs for the elderly has been the lack of specific information on dosage levels established for the elderly through clinical tests. Recognizing the need for additional information on the effects of drugs on the elderly, FDA issued voluntary guidelines in 1989 governing the testing of new drugs intended for elderly patients. These guidelines call for the inclusion of elderly patients during the drug’s testing process. The intent of these guidelines is to develop better information for both physicians and pharmacists on dosage standards for new drugs intended for elderly patients as well as to identify side effects that are more pronounced in the elderly than in the general population. FDA’s Director of Drug Policy and Evaluation stated that he believed that pharmaceutical manufacturers have complied with these guidelines. However, several experts said that clinical trials performed under these guidelines are not representative of the elderly population as a whole. For example, they believe that elderly patients over 75 are underrepresented. The medical community has only recently started to emphasize the study of geriatrics and elderly clinical pharmacology. 
For example, board certification in geriatrics was offered for the first time in 1988. Recognizing the aging of the population, most medical schools now offer courses in geriatrics, though only 12 schools require courses devoted solely to geriatrics. Experts we interviewed agreed that medical schools could improve how they train doctors in geriatrics. Moreover, several experts stressed the need to improve the quality of continuing education in geriatrics, because a large portion of the education doctors receive in medical school becomes outdated during their careers. Since medical schools have only recently introduced geriatric training in their curricula, many doctors in practice today have had little formal training in that area. Two experts also pointed out a similar need for an emphasis on geriatrics in the training of pharmacists, both in pharmacy school and through continuing education. While preclinical training in pharmacology is routinely provided in medical school, several experts said that improvements are needed in the teaching of clinical pharmacology, which trains doctors in the use of drug therapies to treat disease. Doctors obtain their clinical pharmacology training during their residencies. Physicians’ clinical knowledge of the unique aspects of elderly pharmacology depends on their exposure to elderly patients. Several experts believe that the real expertise in pharmacology rests with the pharmacists and that doctors need to use this expertise in deciding the most appropriate drug therapy to prescribe. One strategy that is increasingly used to identify and minimize the inappropriate use of prescription drugs involves drug utilization reviews. Drug utilization reviews are intended to screen drug therapies for potential problems, such as drug-drug interactions, drug-disease contraindications, incorrect dosages, or improper duration of treatment. These reviews can be done either prospectively or retrospectively. Prospective drug utilization reviews are designed to detect potential problems before a prescription is filled by the pharmacist. Retrospective drug utilization reviews occur after the prescription is filled and are intended to detect prescribing patterns that indicate inappropriate or unnecessary medical treatment as well as fraud or abuse. The Omnibus Budget Reconciliation Act of 1990 requires all states to conduct ongoing retrospective reviews of Medicaid prescription drug claims and prospective reviews before each prescription is filled. Most states have expanded that requirement to mandate drug utilization review of all prescriptions. While several experts acknowledged the potential benefits of drug utilization review systems, two experts cautioned that these benefits have not been thoroughly documented to date. A prospective drug utilization review system allows point-of-sale vendors such as pharmacies to check a prescription and a patient’s history against a central database. This database can alert a pharmacist to possible drug-drug interactions or a drug-disease contraindication. Our study of prospective Medicaid drug utilization review systems in five states during fiscal year 1993 found that pharmacies’ use of automated drug utilization review systems linked to statewide Medicaid databases provided a more thorough prospective review than a manual or localized system. 
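The screening logic behind both kinds of review can be illustrated in a few lines. The Python sketch below is hypothetical and is not drawn from any state's actual system: the drug names, interaction pairs, and claim layout are invented for the example. The first function mimics a prospective point-of-sale check of a new prescription against the patient's active medications; the second mimics a retrospective scan of paid claims for a prescribing pattern, here an antiulcer drug filled after a nonsteroidal anti-inflammatory drug.

```python
from datetime import date

# Hypothetical interaction table; a real system would rely on a
# clinically maintained database rather than hard-coded pairs.
INTERACTING_PAIRS = {
    frozenset({"warfarin", "aspirin"}),    # illustrative bleeding risk
    frozenset({"digoxin", "quinidine"}),   # illustrative level elevation
}

def prospective_check(new_drug, active_medications):
    """Run before a prescription is filled: flag the new drug if it
    interacts with anything the patient is already taking."""
    return [
        f"{new_drug} may interact with {current}"
        for current in active_medications
        if frozenset({new_drug, current}) in INTERACTING_PAIRS
    ]

# Hypothetical drug-class lists for the retrospective pattern scan.
NSAIDS = {"ibuprofen", "naproxen"}
ANTIULCER_DRUGS = {"cimetidine", "ranitidine"}

def retrospective_pattern_scan(claims):
    """Run after claims are paid: flag an antiulcer drug filled after
    an NSAID, a sequence that may indicate a drug-induced side effect."""
    alerts = []
    last_nsaid = None
    for claim in sorted(claims, key=lambda c: c["filled"]):
        if claim["drug"] in NSAIDS:
            last_nsaid = claim
        elif claim["drug"] in ANTIULCER_DRUGS and last_nsaid:
            alerts.append(f"{claim['drug']} ({claim['filled']}) filled "
                          f"after {last_nsaid['drug']} ({last_nsaid['filled']})")
    return alerts

print(prospective_check("aspirin", ["warfarin", "atenolol"]))
print(retrospective_pattern_scan([
    {"drug": "naproxen",   "filled": date(1994, 3, 1)},
    {"drug": "cimetidine", "filled": date(1994, 5, 12)},
]))
```

In practice, the prospective check would run at the pharmacy counter against a statewide database, while the retrospective scan would run in batch over claims files and generate informational alerts of the kind described below.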
This type of automated review can reduce the risk of inappropriate drug therapy and increase patient safety, though we recognized the need for these benefits to be demonstrated conclusively and recommended that HCFA take steps to do so. We also recommended that HCFA develop guidance for the development of these systems to ensure standard implementation of effective drug utilization review systems. New York State’s Elderly Pharmaceutical Insurance Coverage program provides prescription drug insurance coverage for low-income senior citizens not eligible for Medicaid. This program uses a retrospective drug utilization review system for its therapeutic drug monitoring program. This review system monitors each client’s prescriptions, using data from prescription claims submitted for payment by pharmacies, to detect potential problems such as overutilization of a drug or a drug-drug interaction. Once a potential problem is detected, a program official notifies the prescribing physician. The alert is informational only and provides the doctor with a history of the patient’s prescription drug usage, the suspected problem, its effect and severity, and recommendations for resolving the problem. No action is required, but the doctor is asked to respond. In one analysis conducted by program staff, 38.4 percent of the patients whose doctors received letters alerting them to a potential problem subsequently had their drug therapy changed. The Massachusetts Medicaid program also uses its retrospective drug utilization review system to detect questionable prescribing practices affecting any of its recipients. For example, if a patient is prescribed a nonsteroidal anti-inflammatory drug commonly used to relieve the symptoms of arthritis, the system will monitor that patient for potential side effects of this type of medication, such as stomach or intestinal bleeding. If the patient later begins to take antiulcer medications, the system will issue an alert to the prescribing doctor that the usage of the first drug may be the cause of the ulcers. This allows the doctor to evaluate the situation and, if warranted, alter the patient’s drug therapy. A retrospective drug utilization review system can also monitor patient compliance with a prescribed drug therapy. For example, a patient may discontinue his or her blood pressure medication when the symptoms disappear. Despite the lack of symptoms, the causes remain, leaving the patient still at risk. A retrospective drug utilization review can detect the patient’s failure to refill a prescription and alert the patient’s doctor to the situation for further action. One way to lower the potential of adverse drug reactions is to ensure that patients are counseled by either their doctor or pharmacist on the usage and characteristics of a prescription drug. Often, subtle side effects of drugs are ignored by patients and not reported back to the doctor or pharmacist. Unless alerted that the patient is experiencing side effects, a doctor would not be likely to change drug therapies. Counseling not only improves the information received by the patient but also that obtained from the patient. This improved communication between doctor or pharmacist and the patient may prompt a question that leads to the discovery of a drug-drug interaction or drug-allergy interaction. Although effective counseling by doctors and pharmacists can help reduce the likelihood of an adverse drug reaction, two studies have found that many patients do not receive this counseling. 
A Consumer Reports study of 70,000 people published in 1995 found that about 26 percent had not been counseled by a physician about their drug therapies. A 1989 study by the American Association of Retired Persons (AARP) found that more than one out of three patients reported that they were not counseled by their doctors on their drug therapies. Time pressures on both doctors and pharmacists may also be an obstacle to effective counseling. Recognizing the importance of counseling, the Omnibus Budget Reconciliation Act of 1990 mandated that pharmacists counsel Medicaid patients when they receive prescription drugs. A majority of states have expanded this requirement to include all patients. Officials of the American Pharmaceutical Association and the American Society of Consultant Pharmacists, two professional associations that represent pharmacists, stressed the importance of counseling but noted that the current system of compensation for pharmacists is based on dispensing drugs and lacks meaningful incentives for counseling. For example, a pharmacist may detect a potential problem with a prescription and, after consultation with the doctor and patient, cancel the prescription. If another drug is not substituted and no drug is dispensed, the pharmacist receives no reimbursement for the professional services rendered. Patients who seek information about their drug therapies can reduce their likelihood of experiencing adverse drug reactions. Besides requesting counseling from both the doctor and pharmacist, public advocacy groups urge individuals to develop their own knowledge of drugs. To achieve this objective, AARP encourages the development of package product inserts in large type that are easy for the elderly to read and understand. Public Citizen Health Research Group, a public advocacy group, has also published a consumer guidebook for prescription drugs. Moreover, pharmaceutical manufacturers have begun to make information available directly to consumers. Increased understanding of their drugs, dosage requirements, and possible side effects makes patients more likely to avoid the inappropriate use of drugs. State and local agencies have developed several initiatives to alert consumers to the dangers of inappropriately using prescription drugs. For example, the Massachusetts Department of Public Health sponsors brown bag seminars at senior citizen or community centers. At these seminars, elderly patients are encouraged to bring in all their medicines for review by pharmacists. The goal is to inventory all the medications a senior citizen has and eliminate those that are for conditions no longer being treated or which have expired. The remaining drugs are cataloged in what is called a “medicine passport” that can be shown to doctors and pharmacists as new or additional drugs are prescribed. This record allows health practitioners to quickly review what other medications the person is taking and why. Recent changes in the health care delivery system have implications for the use of prescription drugs. The growing emphasis on controlling health care costs creates a strong incentive to reduce the inappropriate use of prescription drugs and the physical and financial costs associated with adverse drug reactions. Likewise, the increasing importance of cost containment has helped spur the emergence of managed care as a major form of health care delivery. The number of people covered by managed care plans has increased dramatically from 10 million in 1980 to almost 90 million in 1992. 
Moreover, many managed care plans have recently initiated major marketing efforts to enroll elderly patients. Similarly, the number of people whose prescriptions are filled by pharmacy benefit management companies has also increased. While it is too early to understand the full impact these changes may have on reducing inappropriate drug use—in general, and among the elderly in particular—several experts we interviewed stated that these changes have the potential to improve the coordination of care and to increase the ability to detect inappropriate use of drugs. However, one expert cautioned that the achievement of these goals might be adversely affected by pressures to contain costs or increase profits. Many elderly patients are under the care of several specialists as well as their primary care physician. At times, these doctors may prescribe several drugs to treat various ailments. Unless these various drug therapies are coordinated, adverse drug reactions pose a serious risk. Experts in gerontology and elderly clinical pharmacology with whom we spoke stated that the most effective way to deal with the inappropriate use of prescription drugs was to improve the coordination of care. Ideally, this role should fall to the patient's primary physician. Proponents of managed care have stressed improved coordination of care as a major goal. Though there are several variations of managed care, such as health maintenance organizations (HMO) and preferred provider organizations, a basic characteristic of managed care is control over utilization. Often, this is done through a gatekeeper. A gatekeeper is usually the patient's designated primary doctor who oversees the individual's care, referring the patient to specialists as needed. This allows one doctor to coordinate various treatments, including drug regimens. Several experts agreed that such coordination could help lower the risk of adverse drug reactions posed by inappropriate drug therapy or a patient receiving multiple prescriptions from different doctors. However, they cautioned that the improved coordination of care is dependent on the quality of patient care, which varies widely among managed care plans. Managed care plans also have the potential to use formularies to reduce the inappropriate use of prescription drugs. A formulary lists the preferred drugs used to treat certain diseases or conditions. Typically, the formulary is developed by a committee of doctors and pharmacists associated with the managed care plan, who seek to identify the most effective drug therapies at the lowest cost to the plan. For example, if two drug therapies are deemed equally effective, then the plan will recommend the less costly of the two as the preferred treatment. However, several experts expressed concern that cost, rather than effectiveness, may be the primary driving force in selecting which drugs to place on a managed care plan's formulary. A managed care plan can change its formulary to reflect new drug therapies. This has an impact on the prescribing behavior of a plan's doctors, who may have to seek an exception if they wish to prescribe a drug not designated by the plan's formulary. However, two experts said that few managed care plans have used their formularies to improve prescribing practices for elderly patients, though the experts acknowledged that this potential exists. Another potential advantage of managed care is the data collected on patients.
This gives managed care plans the information needed to monitor both the drug therapies patients receive and the prescribing patterns of physicians. One HMO we visited provided its doctors with periodic analyses of their drug-prescribing habits as compared with standards developed by the HMO. This comparison allows the HMO to identify doctors who may need additional training or counseling in prescribing drugs for their patients, particularly the elderly. Over the past few years, the number of people who receive their prescription drugs through pharmacy benefit management firms has increased dramatically from fewer than 60 million in 1989 to 100 million in 1993. Pharmacy benefit management firms manage prescription drug benefits on behalf of health plan sponsors, including self-insured employers, insurance companies, and managed care plans. The initial attraction of pharmacy benefit management firms is their ability to reduce administrative costs and obtain discounts on prescription drugs through volume buying. However, these firms can also provide formulary management and drug utilization review services with the potential to reduce inappropriate drug use. For example, the drug utilization review done by one pharmacy benefit management firm, PCS Health Systems, generated almost 25 million alerts in 1994. Of these alerts, 25 percent dealt with drug-age contraindications and excessive daily dosages, two types of inappropriate drug use prevalent among the elderly. Likewise, by monitoring patient prescription drug use, pharmacy benefit management firms can detect a patient's failure to refill a prescription for a persistent medical condition such as high blood pressure. The firm can then alert the patient's doctor to this situation for further action if required. Pharmacy benefit management firms can also develop initiatives to address the inappropriate use of drugs among the elderly. For example, Medco has instituted an educational program called “Partners for Healthy Aging.” This program provides specialized information to doctors, pharmacists, and patients to alert them to potential concerns in the use of prescription drugs among the elderly. As the number of patients served by these firms has increased, so has the information gathered on patients. With the accumulation of data on patient characteristics, medical conditions, and drug therapies, pharmacy benefit management firms are developing the necessary database for engaging in outcomes research. Such research allows companies to demonstrate the effectiveness of different courses of treatment for a disease from both a therapeutic and cost perspective. This would permit doctors, patients, and payers to make both financially and clinically informed health care decisions. A draft of this report was reviewed and commented on by five leading experts in the field of elderly clinical pharmacology. Where appropriate, the report was changed to reflect their comments. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will make copies available upon request. This report was prepared by John C. Hansen, Assistant Director, Frank Putallaz, and Tom Taydus. Please call Mr. Hansen at (202) 512-7105 if you or your staff have any questions about this report.
To determine the significance of the inappropriate use of prescription drugs among the elderly, we conducted a literature review and obtained documents and testimonial evidence from leading researchers in the fields of gerontology and elderly clinical pharmacology. Additionally, we interviewed other knowledgeable professionals concerning the issues related to the use of prescription drugs by the elderly. Included among these professionals were representatives of FDA, HCFA, senior citizen and consumer advocacy groups, the American Pharmaceutical Association, the Pharmaceutical Research and Manufacturers of America, the American Medical Association, the American Society of Consultant Pharmacists, and the Association of American Medical Colleges. We interviewed state officials in Massachusetts, New York, and Vermont to see how state and federal health programs deal with the inappropriate use of prescription drugs among the elderly. Massachusetts was selected because of the number of prominent researchers in the areas of elderly clinical pharmacology located there. New York was selected because the state administers the Elderly Pharmaceutical Insurance Coverage program, which provides prescription drug coverage to low-income senior citizens. Vermont, a rural state in contrast to Massachusetts and New York, was selected because it was one of only eight states that had implemented a statewide automated prospective drug utilization review system for Medicaid prior to 1994. In each state, we obtained information on the operation of Medicaid drug utilization review systems as well as various state initiatives to help senior citizens avoid adverse drug reactions. To update the results from earlier research studies, we analyzed data from HCFA's 1992 Medicare Current Beneficiary Survey, the most recently available data. According to HCFA, this survey is designed to provide reliable baseline data to project Medicare costs and is representative of the Medicare population as a whole. To determine what causes the inappropriate use of prescription drugs, we reviewed the literature and interviewed the leading experts previously cited. We also obtained information on physician gerontology education and questioned state officials about the implementation of drug utilization review programs and their effect on the causes of the inappropriate use of prescription drugs. To determine how physicians, pharmacists, and patients receive information on drug therapies, we identified actions that drug manufacturers and FDA have taken to provide better dosage information for elderly patients as well as changes in how drug manufacturers disseminate information to physicians, pharmacists, and patients. We also obtained information on efforts by state agencies, senior citizen advocacy groups, pharmacy groups, and medical organizations to improve communication between and among physicians, pharmacists, and patients. To identify emerging trends in the health care delivery system and their potential effects on the inappropriate use of prescription drugs, we obtained information on how managed care plans develop formularies, train their staff on new drug therapies, and track both patient and physician use of prescription drugs. To assess the effect of the growth of pharmacy benefit management firms, we obtained information on how these plans coordinate and monitor drug therapies. The Medicare Current Beneficiary Survey is a continuous, multipurpose survey of a representative sample of the Medicare population.
It is administered by HCFA's Office of the Actuary and began gathering data in 1991. The survey generates data on issues of prime importance to the management of the Medicare program and the development of health care policy. Focusing on health care use and expenditures, the survey generates data to (1) allow HCFA to monitor the financial effects of changes in the Medicare program; (2) develop reliable and current information on the use and cost of services not covered by Medicare, such as prescription drugs and long-term care; and (3) obtain information on the sources of payments for costs of covered services not assumed by Medicare. Although its focus is on the financing of health care, the survey collects a variety of information about the Medicare population, including demographic characteristics, health status, insurance coverage, financial resources, and family support. The survey is based on a sample of Medicare recipients drawn from the Medicare enrollment file. The sample is representative of the Medicare population as a whole. Since the survey is a longitudinal study, those selected for participation are interviewed three times a year for several years to form a continuous profile of their health care. Initial participants who completed the first round of interviews numbered 12,677. Of these, 942 resided in an institutional setting and 11,735 were community-based. The sample is adjusted annually for attrition and for newly eligible persons. The initial interview gathers baseline data on demographic characteristics, health status, insurance coverage, financial resources, and family support. Subsequent interviews gather details of the participants' health care use since the last interview, emphasizing the type of health care used and the source for paying for it. This includes information on the prescription drugs a participant is using even though Medicare does not provide reimbursement for their cost. Information collected is edited for consistency, documented, and organized into files. Later, these files are merged with HCFA claims payment records. Also, administrative data such as Medicaid buy-in status and capitated plan membership are added to the file. All personal identifying information is removed.
Table III.1 lists the 20 drugs deemed generally inappropriate for elderly patients by a panel of experts. The reasons given by this panel for judging a drug inappropriate are also provided, as is the purpose of these drugs. The panel's results and methodology were published in 1991. Though the goal of this panel was to identify drugs inappropriate for the elderly living in a nursing home setting, a later examination of these drugs by another panel of experts also judged these drugs as generally inappropriate for elderly patients living in a community-based setting. Several of the experts we interviewed agreed that these drugs should normally not be used with elderly patients, though they stressed that there would be some medical situations where the use of these drugs would be appropriate. One expert noted the need for research studies based on patient-related outcomes data to confirm the views of the expert panelists.
[Table III.1 did not survive extraction: the drug names and purposes were lost, and only the panel's rationales remain. The surviving rationales were: shorter-acting benzodiazepines are safer alternatives; safer sedative-hypnotics are available; other antidepressant medications cause fewer side effects; other nonsteroidal anti-inflammatory agents cause less toxic reactions; other oral hypoglycemic medications have shorter half-lives and do not cause inappropriate antidiuretic hormone secretion; other analgesic medications are more effective and safer; other narcotic medications are safer and more effective; the effectiveness of two drugs intended to improve blood circulation is in doubt; one drug is no longer available in the U.S.; effectiveness at low dosage is in doubt while toxic reaction is high at higher dosages, and safer alternatives exist; several drugs are minimally effective while causing toxicity, with a potential for toxic reaction greater than the potential benefit; and one is the least effective of available antiemetics.]
At our request, HCFA's Office of the Actuary used data from the Medicare Current Beneficiary Survey to determine the percentage of community-based elderly who used at least 1 of the 20 drugs identified in appendix III as generally inappropriate for their age group. The most current compiled data are for 1992. The first step was to identify survey participants who were 65 or older and who were noninstitutionalized. Of their survey population, 9,182 participants met these criteria. This group represented 29,862,854 Medicare beneficiaries nationwide according to HCFA's Office of the Actuary. The next step was to determine which of these participants used at least 1 of the 20 drugs sometime during 1992 and project that use to the national population. The results indicated that an estimated 17.5 percent, or 5,219,811, noninstitutionalized Medicare beneficiaries 65 or older used at least 1 of those drugs during 1992. These results are displayed in table IV.1. Specifically, the table lists the percentages of noninstitutionalized elderly found to be using each of the 20 drugs. The middle column details the results based on research using data from the 1987 National Medical Expenditure Survey covering noninstitutionalized residents 65 or older. The right-hand column presents the results of the analysis described above. We did not include the research results based on interviews conducted during 1989 and 1990 of a sample of noninstitutionalized elderly 75 or older residing in Santa Monica, California. This was because the participants in this study represented one community rather than a national sample and belonged to a different age group than the other two studies. In addition, their use of the 20 drugs was measured during a period of 1 month versus 1 year in the other 2 analyses.
The following types of inappropriate prescription drug use are defined below.
Drug-age contraindication: Use of a drug not recommended for the age group of a patient.
Drug-drug interaction: The potential for, or the occurrence of, an adverse drug reaction as a result of the use of two or more drugs together.
Drug-allergy interaction: The potential for, or the occurrence of, an allergic reaction as a result of drug therapy.
Drug-disease contraindication: The potential for, or occurrence of, an undesirable alteration of the therapeutic effect of a given prescription because of the presence, in the patient for whom it is prescribed, of an additional disease condition. Also, the potential for, or the occurrence of, an adverse effect of the drug on the patient's disease condition.
Incorrect dosage: A dosage that lies outside the daily recommended dosage range as specified in predetermined standards as necessary to achieve therapeutic benefit.
Incorrect duration of treatment: The number of days of prescribed therapy exceeds or falls short of the recommendations contained in the predetermined standards.
Therapeutic duplication: The prescribing and dispensing of two or more drugs from the same therapeutic class such that the combined daily dose puts the patient at risk of an adverse drug reaction or yields no additional therapeutic benefit.
Less-than-optimal drug therapy: Use of a drug therapy that is less desirable than other alternatives because of factors such as therapeutic effectiveness, presence of side effects, ease of use, or cost.
Avorn, Jerry. “Grant Watch—Medication Use and the Elderly: Current Status and Opportunities.” Health Affairs (Spring 1995), pp. 276-86.
Beard, Keith. “Adverse Reactions as a Cause of Hospital Admission in the Aged.” Drugs & Aging, Vol. 2, No. 4 (July/Aug. 1992), pp. 356-67.
Beers, Mark, Joseph G. Ouslander, Irving Rollingher, and others. “Explicit Criteria for Determining Inappropriate Medication Use in Nursing Home Residents.” Archives of Internal Medicine, Vol. 151 (Sept. 1991), pp. 1825-32.
Beers, Mark, Jerry Avorn, Stephen B. Soumerai, and others. “Psychoactive Medication Use in Intermediate-Care Facility Residents.” Journal of the American Medical Association, Vol. 260, No. 20 (Nov. 25, 1988), pp. 3016-20.
Beers, Mark, Michele Storrie, and Genell Lee. “Potential Adverse Drug Interactions in the Emergency Room.” Annals of Internal Medicine, Vol. 112, No. 1 (Jan. 1, 1990), pp. 61-64.
Bero, Lisa A., Helene L. Lipton, and Joyce Adair Bird. “Characterization of Geriatric Drug-Related Hospital Readmissions.” Medical Care, Vol. 29, No. 10 (Oct. 1991), pp. 989-1000.
Cleeland, Charles, Rene Gonin, Alan K. Hatfield, and others. “Pain and Its Treatment in Outpatients with Metastatic Cancer.” New England Journal of Medicine, Vol. 330, No. 9 (Mar. 3, 1994), pp. 592-96.
Col, Nananda, James E. Fanale, and Penelope Kronholm. “The Role of Medication Noncompliance and Adverse Drug Reactions in Hospitalizations of the Elderly.” Archives of Internal Medicine, Vol. 150, No. 4 (Apr. 1990), pp. 841-45.
Colt, Henri G., and Alvin P. Shapiro. “Drug-Induced Illness as a Cause of Admission to a Community Hospital.” Journal of the American Geriatrics Society, Vol. 37, No. 4 (Apr. 1989), pp. 323-26.
Gurwitz, Jerry H. “Suboptimal Medication Use in the Elderly: The Tip of the Iceberg.” Journal of the American Medical Association, Vol. 272, No. 4 (July 27, 1994), pp. 316-17.
Kane, Robert L., and Judith Garrard. “Changing Physician Prescribing Practices: Regulation vs. Education.” Journal of the American Medical Association, Vol. 271, No. 5 (Feb. 2, 1994), pp. 393-94.
Lurie, Peter, and Philip R. Lee. “Fifteen Solutions to the Problems of Prescription Drug Abuse.” Journal of Psychoactive Drugs, Vol. 23, No. 4 (Oct.-Dec. 1991), pp. 349-57.
Manasse, Henri R. Jr. “Medication Use in an Imperfect World: Drug Misadventuring as an Issue of Public Policy, Part 1.” American Journal of Hospital Pharmacy, Vol. 46 (May 1989), pp. 929-44.
Manasse, Henri R. Jr. “Medication Use in an Imperfect World: Drug Misadventuring as an Issue of Public Policy, Part 2.” American Journal of Hospital Pharmacy, Vol. 46 (June 1989), pp. 1141-52.
Ray, Wayne A., Marie R. Griffin, and Ronald I. Shorr. “Adverse Drug Reactions and the Elderly.” Health Affairs, Vol. 9, No. 3 (Fall 1990), pp. 114-22.
Ray, Wayne A., Randy L. Fought, and Michael D. Decker.
“Psychoactive Drugs and the Risk of Injurious Motor Vehicle Crashes in Elderly Drivers.” American Journal of Epidemiology, Vol. 136, No. 7 (Oct. 1, 1992), pp. 873-83.
Ray, Wayne A., Marie R. Griffin, William Schaffner, and others. “Psychotropic Drug Use and the Risk of Hip Fracture.” New England Journal of Medicine, Vol. 316, No. 7 (Feb. 12, 1987), pp. 363-69.
Soumerai, Stephen B., and Helene L. Lipton. “Sounding Board—Computer-Based Drug-Utilization Review—Risk, Benefit, or Boondoggle.” New England Journal of Medicine, Vol. 332, No. 24 (June 15, 1995), pp. 1641-45.
Stuck, Andreas E., Mark Beers, Andrea Steiner, and others. “Inappropriate Medication Use in Community-Residing Older Persons.” Archives of Internal Medicine, Vol. 154 (Oct. 10, 1994), pp. 2195-2200.
Sullivan, Sean D., David H. Kreling, and Thomas K. Hazlet. “Noncompliance With Medication Regimens and Subsequent Hospitalization: A Literature Analysis and Cost of Hospitalization Estimate.” Journal of Research in Pharmaceutical Economics, Vol. 2, No. 2 (1990), pp. 19-33.
Wilcox, Sharon M., David U. Himmelstein, and Steffie Woolhandler. “Inappropriate Drug Prescribing for the Community-Dwelling Elderly.” Journal of the American Medical Association, Vol. 272, No. 4 (July 27, 1994), pp. 292-96. | Pursuant to a congressional request, GAO examined the elderly's inappropriate use of prescription drugs, focusing on: (1) whether the inappropriate use of prescription drugs by the elderly is widely viewed as a serious health problem; (2) the ways prescription drugs are used inappropriately and why these situations occur; (3) how public knowledge of prescription drugs can be improved; and (4) how emerging trends in health care delivery affect drug prescribing for the elderly.
GAO found that: (1) inappropriate prescription drug use is a serious health risk for the elderly because they take more prescription drugs than other age groups, often take several drugs at once, which can result in adverse drug reactions, and do not efficiently eliminate drugs from their systems due to decreased body function; (2) the percentage of Medicare recipients over 65 using unsuitable prescription drugs dropped from 25 percent in 1987 to 17.5 percent in 1992; (3) inappropriate prescription drug use results from physicians using outdated prescribing practices, pharmacists not performing drug utilization reviews, and patients not informing their physician and pharmacist of all the drugs they are taking; (4) to address the problem of inappropriate prescription drug use, the government is working to disseminate information on the effect of prescription drugs on the elderly, the medical community is working to increase physicians' knowledge of geriatrics, and patients are increasingly seeking information about their drug therapies; and (5) enrollment in managed care plans has grown rapidly, particularly among senior citizens, allowing for the potential to improve the coordination of drug therapies for newly enrolled elderly patients.
HUD’s procurement offices annually award and administer millions of dollars worth of contracts on behalf of HUD’s program offices. This process entails receiving descriptions of need, soliciting and receiving offers, awarding contracts, making necessary contract modifications, resolving disputes, and closing out completed contracts. The Office of Procurement and Contracts performs these functions for headquarters offices, and the three Administrative Service Centers (located in New York, N.Y.; Atlanta, Ga.; and Denver, Colo.) perform them for HUD’s field offices. The major types of goods and services procured by headquarters include information technology hardware and software, mortgage-accounting and claims-processing services, advertising for the sale of HUD’s properties, and various professional, technical, and administrative management support services. The typical goods and services purchased by the field offices include real estate management services and mortgage insurance-related activities, such as mortgage credit analyses, appraisals, and mortgage insurance endorsement processing. HUD’s staffing levels decreased from 12,823 in 1993 to 9,200 in 1998. While HUD has been downsizing, its annual obligations for headquarters contracts have steadily increased. According to HUD’s data systems, the annual contract obligations at HUD’s headquarters grew from $213 million in fiscal year 1991 to $376 million in fiscal year 1996 (in constant 1996 dollars). No historical data are available for field office contracting activities. HUD’s 2020 Management Reform Plan and supporting documents indicate that the Department’s reliance on contractors to help carry out its responsibilities will remain significant. For instance, the plan calls for HUD to contract with private firms for a number of functions, including physical building inspections of public housing and multifamily insured projects; legal, investigative, audit, and engineering services; and activities to clean up the backlog of troubled assisted multifamily properties. Previously, physical inspections of multifamily projects were carried out by HUD personnel, mortgagees, and regional contractors. The plan also encompasses the potential use of contractors to manage construction under the HOPE VI program. Finally, the 2020 reforms call for transferring the Office of Housing’s contract administration activities for its rental assistance programs to contract administrators. The new arrangement would be similar to the process under the Office of Public and Indian Housing’s rental assistance programs. Currently, approximately 1.1 million assisted rental units are administered by the Office of Housing under contracts with project owners. The Office of Housing performs the role of contract administrator and makes monthly rent payments to owners on behalf of eligible families. Under HUD’s proposal, these activities would be carried out directly by contractors (often, housing finance agencies or housing authorities) instead of HUD employees. We, HUD’s Inspector General, and the National Academy of Public Administration have identified weaknesses in HUD’s contracting practices. For example, our review of HUD’s oversight of real estate asset management (REAM) contractors, who are responsible for safeguarding foreclosed HUD properties, found that HUD did not have an adequate system in place to assess its field offices’ oversight of these contractors. 
Our audit work found that HUD does not have a system in place for monitoring its field offices’ administration of REAM contracts. To safeguard and maintain the approximately 30,000 properties that HUD has in its inventory at any given time, HUD obtains the services of REAM contractors. These contractors are to secure and inspect the properties, report their condition to HUD, notify interested parties of HUD’s ownership, perform exterior maintenance, and ensure that the properties are free of debris and hazardous conditions. REAM contractors are therefore essential to HUD’s achieving its goal of returning these properties to private ownership as soon as possible, while obtaining a maximum sale price for HUD. HUD’s guidance makes headquarters staff responsible for overseeing the administration of REAM contracts. Specifically, the guidance requires regional offices to ensure that field offices are monitoring REAM contractors and requires headquarters staff to review regional offices’ oversight actions through regional reviews. We found, however, that headquarters staff have not been reviewing the field offices since HUD reorganized its field office structure in 1995 and eliminated the regional offices. According to HUD Single-Family Property Disposition officials, the regional offices’ oversight function was never absorbed into headquarters after the regional offices were eliminated. Also, after the reorganization, HUD’s guidance was not updated to ensure that the administration of REAM contracts was monitored by headquarters. In addition, HUD’s field office staff are not consistently providing adequate oversight of REAM contractors. We believe this lack of oversight contributed to some of the poor property conditions—ranging from graffiti and debris to imminent safety hazards—that we saw when we visited 66 HUD properties. Such conditions can decrease the marketability of HUD’s properties, decrease the value of surrounding homes, increase HUD’s holding costs and, in some cases, threaten the health and safety of neighbors and potential buyers. Our report made recommendations to HUD for improving its oversight of REAM contractors. HUD’s field office staff are directly responsible for overseeing REAM contractors. We found, however, that some key oversight responsibilities were not always performed by staff at the three HUD field offices we visited. For example, HUD’s field staff did not always evaluate REAM contractors as required. Field office staff are supposed to evaluate the REAM contractor’s performance every year in the month prior to the contract’s anniversary date. This annual evaluation is used to make decisions on contract extensions and, if necessary, to act on inadequate performance. However, at all three field offices we visited, these evaluations were not always conducted or were not always completed in time to provide useful information for contract renewal decisions. For example, one of the field offices we visited has evaluated the REAM contractor’s performance only once since the REAM contract was awarded in June 1995, and that evaluation was conducted several weeks after the contract had already been extended beyond the base year. Officials in that field office told us that performance evaluations were not performed because they did not have the staff resources or travel funds to visit the contractor’s office. However, it should be noted that the REAM contractor’s office is only 37 miles from HUD’s field office. 
Furthermore, in the one evaluation conducted, field office staff did not convey the results of the evaluation to the REAM contractor, as required. In this evaluation, HUD cited the contractor for failing to remove debris from some properties. Our inspection of properties in this field location revealed that the debris removal problem still existed at the time of our review, more than 1 year later. One property had been shown by realtors eight times while it contained debris. In fact, a realtor noted that the only accessible entrance to the property was blocked with furniture and debris, which was the case when we visited the property. During our August 1997 inspection of 24 properties in this location, we found that most of the properties contained either interior or exterior debris. Consequently, prospective buyers were sometimes viewing properties littered with household trash, personal belongings, and other debris. In addition, HUD’s field office staff did not always inspect the properties managed by REAM contractors, as suggested by HUD’s guidance. Because HUD recognizes that physical inspections are the best method for monitoring the contractors’ work, HUD’s guidance suggests that field office staff conduct monthly physical inspections of a minimum number of properties assigned to each contractor. To help meet this target, the guidance allows the field offices to contract out for property inspection services. Without adequate on-site inspections, HUD cannot be assured that it is receiving the services for which it has paid. In two of the field offices we visited, property files contained evidence that some properties were being inspected. However, of the 42 property files we reviewed in the third field office, HUD’s field office staff had not inspected any of those properties. Field office staff told us they did not get out to inspect properties because they did not have the travel funds or staff resources to do so. Subsequent to our visit, in December 1997, this field office started using contractors to make property inspections. Moreover, HUD’s field office staff did not always ensure that the REAM contractors conducted property inspections and submitted appropriate reports for HUD’s review. HUD’s guidance requires REAM contractors to submit initial inspection reports within 5 working days of being notified that a property has been assigned, but it offers no specific guidance on the submission of routine inspection reports. The REAM contractor’s submission of initial and routine inspection reports is essential for HUD to determine its marketing strategy for the properties and to mitigate potential losses to the properties. For example, the initial inspection reports, along with appraisals, are the primary tools for determining the repairs that must be made and whether the property meets certain standards that would allow it to be sold with HUD-insured financing. At the three offices we reviewed, the requirements placed on REAM contractors for submitting inspection reports and the extent to which the reports were actually submitted to the field offices varied considerably. For example, at one location, all of the property files we reviewed contained initial inspections, while in another location, 43 percent of the files contained no initial inspections. Without inspection reports, HUD is unable to readily determine whether the contractors are conducting inspections as required. 
At all three locations that we visited, we found instances where properties were not maintained as required by the REAM contracts. During our inspection of approximately 20 properties in each location, we identified properties that (1) were not properly secured, (2) had physical conditions that did not match those that the REAM contractor had reported to HUD, or (3) had imminent hazards. For instance, of the 66 properties we visited in all three locations, we found that approximately 39 percent were not sufficiently secured to prevent access to the property. The failure to properly secure properties can lead to trespassing, vandalism, and the property’s deterioration. In fact, we visited unsecured properties that had broken windows, graffiti, and exposed walls in the bathrooms where valuable copper piping had been ripped out. In addition, we found physical conditions that did not match those that the REAM contractors had reported to the three HUD field offices we visited. For example, one property contained animal feces, fur, and personal possessions, while the contractor’s inspection report indicated that the house was free of debris. If contractors do not accurately report on the condition of properties, HUD may lack vital information on which to make disposition decisions and to address safety hazards. As a result, the government may sell the property for less than it is worth or incur unnecessary holding and maintenance costs because it is not marketable. Furthermore, almost 71 percent of the properties we visited in one field office, and about 37 percent in another, contained imminent hazards, such as broken or rotting stairs. Inspection reports submitted to HUD for one property noted that the front steps were dangerous—a condition warranting immediate repair by the contractor. Nonetheless, when we inspected the property about 3 months after the contractor initially reported the problem, the stairs still had not been repaired. Other imminent hazards that we saw included a refrigerator with the door intact on a back porch and properties containing household waste, food, soiled diapers, paints, and solvents. The failure to address imminent hazards endangers would-be buyers, as well as neighbors, and puts the government at risk of litigation. On the basis of our review of files and properties in the three locations, we found that the properties were generally in better condition in the locations where staff more actively monitored the contractors’ performance. We recognize, however, that the condition of the properties is not totally attributable to HUD’s oversight of the contractors. Other factors can contribute to the condition of the properties, including the overall quality of the contractor’s work and the susceptibility of the neighborhood to crime and vandalism. We, the Inspector General, and the National Academy of Public Administration have identified other weaknesses in HUD’s contracting with respect to the Department’s procurement systems, needs assessment and planning functions, and oversight of contractors’ performance. Both we and the Inspector General found that HUD’s ability to manage contracts has been limited because its procurement systems have not always contained accurate critical information regarding contract awards and modifications and their associated costs. Although HUD recently combined several of its procurement systems, the new system is not yet integrated with HUD’s financial systems, thus limiting the data available to manage the Department’s contracts. 
The Inspector General reported in September 1997 that (1) inadequate oversight of contractors' performance has led HUD to pay millions of dollars for services without determining the adequacy of the services provided and (2) many HUD staff had a poor understanding of their contract management roles and have not always maintained adequate documentation of their reviews of contractors. This situation limits assurances that adequate monitoring has occurred. In a May 1997 preliminary report on the contracting activities of HUD's Federal Housing Administration (FHA), the National Academy of Public Administration identified a variety of problem areas associated with the procurement process, including the fact that procurements took too long; FHA's oversight of contracted services was inadequate; and FHA sometimes used contracting techniques that limited competition. The Academy is in the process of carrying out a more in-depth review of FHA's contracting activities and is also reviewing procurement practices in other parts of HUD. In a December 1997 report, HUD's Inspector General noted that a potential reliance on contractors as a means of supplanting HUD staff may not be in the best interests of HUD and the taxpayers. The report noted that HUD relies heavily on contractors to perform studies, design systems, administer functions, and develop plans and strategies but has made little effort to date to formally evaluate the effectiveness and cost/benefits of its contracted work. HUD has recognized the need to improve its contracting processes and has begun taking actions to address weaknesses that we and the Inspector General have identified. In its latest self-assessment of management controls under the Federal Managers' Financial Integrity Act, HUD added contracting as a new material weakness. The 2020 plan also includes an effort to redesign the contract procurement process. HUD has recently appointed a chief procurement officer who will be responsible for improving HUD procurement planning and policies, reviewing and approving all contracts of over $5 million, and implementing recommendations that may result from an ongoing study of HUD's procurement practices by the National Academy of Public Administration. HUD is also establishing a contract review board, composed of the chief procurement officer and other senior HUD officials, that will be responsible for reviewing and approving each HUD program office's strategic procurement plan and reviewing the offices' progress in implementing the plans. In addition, HUD is establishing standard training requirements for the HUD staff responsible for monitoring contractors' progress and performance and is including standards relating to the monitoring of contractors in its system for evaluating employees' performance. HUD is also planning actions to integrate its procurement and financial systems. In addition, HUD officials told us that they are planning to take actions to strengthen the Department's oversight of REAM contractors and to involve headquarters in ensuring that field staff effectively oversee the contractors' performance. Furthermore, with respect to the problems found in property disposition contracting, single-family housing officials have proposed changes that they anticipate would result in only a minimal inventory of properties and therefore only a limited need for REAM contractor services.
Specifically, according to HUD Single-Family Housing Division officials, the Department plans to sell the rights to properties before they enter HUD's inventory, thus enabling them to be quickly disposed of once they become available. Although the details of these sales, which HUD refers to as “privatization sales,” remain to be developed, HUD envisions that properties would be pooled on a regional basis and purchased by entities that could use their existing structures to sell the properties in the same way that the Department currently does, namely, through competitive sales to individuals. In addition, as a part of its budget request for fiscal year 1999, HUD proposed new legislation to allow the Department to take back notes when a claim is paid, rather than requiring lenders to foreclose and convey properties. HUD would then transfer the note to a third party for servicing and/or disposition. We view the actions that HUD has taken to improve its contracting procedures as positive steps. However, some key issues concerning their implementation are still being finalized, such as the precise role of the contract review board in overseeing HUD's procurement actions, and HUD's ability to have the necessary resources in place to carry out its procurement responsibilities effectively. Perhaps even more important is the extent to which these actions will lead to a change in HUD's culture, so that acquisition planning and effective oversight of contractors will be viewed by both management and staff as being intrinsic to HUD's ability to carry out its mission successfully. Mr. Chairman, this concludes our statement. We would be pleased to respond to any questions that you or Members of the Subcommittee may have. | GAO discussed issues related to contracting activities at the Department of Housing and Urban Development (HUD), focusing on the: (1) extent of HUD's reliance on contractors to carry out the Department's responsibilities; (2) weaknesses in HUD's current contracting practices, particularly with respect to the oversight of property management contractors; and (3) HUD's actions to address its contracting weaknesses.
GAO noted that: (1) HUD's annual obligations for headquarters contracts have steadily increased in recent years, growing from $213 million in fiscal year (FY) 1991 to $376 million in FY 1996, according to HUD's data systems; (2) furthermore, the Department will continue to rely heavily on contractors to help carry out its responsibilities under its 2020 Management Reform Plan; (3) for instance, the plan calls for HUD to contract with private firms for a number of functions, including physical building inspections of public housing and multi-family insured projects; legal, investigative, audit, and engineering services; and activities to clean up the backlog of troubled assisted multi-family properties; (4) GAO, HUD's Inspector General, and the National Academy of Public Administration have identified weaknesses in HUD's contract administration and monitoring of contractors' performance; (5) the three HUD field offices GAO visited varied greatly in their efforts to monitor real estate asset management contractors' performance, and none of the offices adequately performed all of the functions needed to ensure that the contractors meet their contractual obligations to maintain and protect HUD-owned properties; (6) GAO's physical inspection of the properties for which the contractors in each location were responsible identified problems at the properties, including vandalism, maintenance problems, and safety hazards, which may decrease the marketability of HUD's properties, decrease the value of surrounding homes, increase HUD's holding costs, and in some cases, threaten the health and safety of neighbors and potential buyers; (7) HUD has recognized the need to improve its contracting processes and has begun taking actions to address the weaknesses that GAO and the Inspector General have identified; (8) HUD has recently appointed a chief procurement officer and is also establishing a contract review board; and (9) HUD is taking steps to revise its property disposition activities which could reduce its reliance on asset management contractors. |
Financial assistance to help students and families pay for postsecondary education has been provided for many years through student grant and loan programs authorized under title IV of the Higher Education Act of 1965, as amended. Examples of these programs include Pell Grants for low-income students, PLUS loans to parents and graduate students, and Stafford loans. Much of this aid has been provided on the basis of the difference between a student's cost of attendance and an estimate of the ability of the student and the student's family to pay these costs, called the expected family contribution (EFC). The EFC is calculated based on information provided by students and parents on the Free Application for Federal Student Aid (FAFSA). Statutory definitions establish the criteria that students must meet to be considered independent of their parents for the purpose of financial aid, and statutory formulas establish the share of income and assets that are expected to be available for the student's education. In fiscal year 2005, the Department of Education made approximately $14 billion in grants, and title IV lending programs made available another $57 billion in loan assistance. Title IV also authorizes programs funded by the federal government and administered by participating higher education institutions, including the Supplemental Educational Opportunity Grant (SEOG), Perkins loans, and federal work-study aid, collectively known as campus-based aid. Table 1 provides brief descriptions of the title IV programs that we reviewed in our 2005 report and includes two programs—Academic Competitiveness Grants and National Science and Mathematics Access to Retain Talent Grants—that were created since that report was issued. Postsecondary assistance also has been provided through a range of tax preferences, including postsecondary tax credits, tax deductions, and tax-exempt savings programs. For example, the Taxpayer Relief Act of 1997 allows eligible tax filers to reduce their tax liability by receiving, for tax year 2006, up to a $1,650 Hope tax credit or up to a $2,000 Lifetime Learning tax credit for tuition and course-related fees paid for a single student. The fiscal year 2005 federal revenue loss estimate of the postsecondary tax preferences that we reviewed was $9.15 billion. Tax preferences discussed as part of our 2005 report include the following:
Lifetime Learning Credit—income-based tax credit claimed by tax filers on behalf of students enrolled in one or more postsecondary education courses.
Hope Credit—income-based tax credit claimed by tax filers on behalf of students enrolled at least half-time in an eligible program of study and who are in their first 2 years of postsecondary education.
Student Loan Interest Deduction—income-based tax deduction claimed by tax filers on behalf of students who took out qualified student loans while enrolled at least half-time.
Tuition and Fees Deduction—income-based tax deduction claimed by tax filers on behalf of students who are enrolled in one or more postsecondary education courses and have either a high school diploma or a General Educational Development (GED) credential.
Section 529 Qualified Tuition Programs—College Savings Programs and Prepaid Tuition Programs—non-income-based programs that provide favorable tax treatment to investments and distributions used to pay the expenses of future or current postsecondary students.
Coverdell Education Savings Accounts—income-based savings program providing favorable tax treatment to investments and distributions used to pay the expenses of future or current elementary, secondary, or postsecondary students. As figure 1 demonstrates, the use of tax preferences has increased since 1997, both in absolute terms and relative to the use of title IV aid. Postsecondary student financial assistance provided through programs authorized under title IV of the Higher Education Act and the tax code differ in timing of assistance, the populations that receive assistance, and the responsibility of students and families to obtain and use the assistance. Title IV programs and education-related tax preferences differ significantly in when eligibility is established and in the timing of the assistance they provide. Title IV programs generally provide benefits to students while they are in school. Education-related tax preferences, on the other hand, (1) encourage saving for college through tax-exempt saving, (2) assist enrolled students and their families in meeting the current costs of postsecondary education through credits and tuition deductions, and (3) assist students and families repaying the costs of past postsecondary education through a tax deduction for student loan interest paid. While title IV programs and tax preferences assist many students and families, program and tax rules affect eligibility for such assistance. These rules also affect the distribution of title IV aid and the assistance provided through tax preferences. As a result, the beneficiaries of title IV programs and tax preferences differ. Title IV programs generally have rules for calculating grant and loan assistance that give different consideration to family income, assets, and college costs in the award of financial aid. For example, Pell Grant awards are calculated by subtracting the student’s EFC from the maximum Pell Grant award ($4,050 in academic year 2006-2007), or the student’s cost of attendance, whichever is less. Because the EFC is closely linked to family income and circumstances (such as the size of the family and the number of dependents in school), and modest EFCs are required for Pell eligibility, Pell awards are made primarily to families with modest incomes. In contrast, the maximum unsubsidized Stafford loan amount is calculated without direct consideration of financial need: students may borrow up to their cost of attendance, minus the estimated financial assistance they will receive. As table 2 shows, 92 percent of Pell financial support in 2003-2004 was provided to dependent students whose family incomes were $40,000 or below, and the 38 percent of Pell recipients in the lowest income category ($20,000 or below) received a higher share (48 percent) of Pell financial support. Because independent students generally have lower incomes and accumulated savings than dependent students and their families, patterns of program participation and dollar distribution differ. Participation of independent students in Pell, subsidized Stafford, and unsubsidized Stafford loan programs is heavily concentrated among those with incomes of $40,000 or less: from 74 percent (unsubsidized Stafford) to 95 percent (Pell) of program participants have incomes below this level. As shown in table 3, the distribution of award dollars follows a nearly identical pattern. 
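To make the contrast between need-based and cost-based award rules concrete, the sketch below encodes one simplified reading of the calculations described above, using the academic year 2006-2007 maximum Pell Grant. The function names are ours, and enrollment status, annual loan limits, and other program rules that also affect actual awards are omitted.

    # Simplified sketch of the award rules described above. The function
    # names are ours, and actual awards also depend on enrollment status,
    # annual loan limits, and other program rules not modeled here.

    MAX_PELL = 4050  # maximum Pell Grant, academic year 2006-2007

    def pell_award(efc, cost_of_attendance):
        """Pell: subtract the EFC from the lesser of the maximum award
        and the cost of attendance, but never award less than zero."""
        return max(0, min(MAX_PELL, cost_of_attendance) - efc)

    def unsubsidized_stafford_limit(cost_of_attendance, other_aid):
        """Unsubsidized Stafford: borrow up to the cost of attendance
        minus estimated other financial assistance, without regard to need."""
        return max(0, cost_of_attendance - other_aid)

    # A student with a modest EFC receives most of the maximum Pell Grant,
    # while the Stafford limit turns only on cost and other aid received.
    print(pell_award(efc=1000, cost_of_attendance=12000))        # 3050
    print(unsubsidized_stafford_limit(12000, other_aid=3050))    # 8950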
Many education-related tax preferences have both de facto lower limits created by the need to have a positive tax liability to obtain their benefit and income ceilings on who may use them. For example, the Hope and Lifetime Learning tax credits require that tax filers have a positive tax liability to use them and had income-related phase-out provisions in 2005 that began at $45,000 and $90,000 for single and joint filers, respectively. Furthermore, tax-exempt savings are more advantageous to families with higher incomes and tax liabilities because, among other reasons, these families hold greater assets to invest in these tax preferences and have a higher marginal tax rate, and thus benefit the most from the use of these tax preferences. Table 4 shows the income categories of tax filers claiming the three tax preferences available to current students and/or their families along with the reduced tax liabilities from those preferences in 2004. The federal government and postsecondary institutions have significant responsibilities in assisting students and families in obtaining assistance provided under title IV programs but only minor roles with respect to tax filers' use of education-related tax preferences. To obtain federal student aid, applicants must first complete the FAFSA, a form that required students to complete up to 100 fields in 2006-2007. Submitting a completed FAFSA to the Department of Education largely concludes students' and families' responsibility in obtaining aid. The Department of Education is responsible for calculating students' and families' EFC on the basis of the FAFSA, and students' educational institutions are responsible for determining aid eligibility and the amounts and packaging of awards. In contrast, higher education tax preferences require students and families to take more responsibility. Although postsecondary institutions provide students and the IRS with information about higher education attendance, they have no other responsibilities for higher education tax credits, deductions, or tax-preferred savings. The federal government's primary role with respect to higher education tax preferences is the promulgation of rules; the provision of guidance to tax filers; and the processing of tax returns, including some checks on the accuracy of items reported on those tax returns. The responsibility for selecting among and properly using tax preferences rests with tax filers. Unlike with title IV programs, users must understand the rules, identify applicable tax preferences, understand how these tax preferences interact with one another and with federal student aid, keep records sufficient to support their tax filing, and correctly claim the credit or deduction on their return. According to our analysis of IRS data on the use of Hope and Lifetime Learning tax credits and the tuition deduction in our 2005 report, some tax filers appear to make less-than-optimal choices among them. The apparent suboptimal use of postsecondary tax preferences may arise, in part, from the complexity of these provisions. Making poor choices among tax preferences for postsecondary education may be costly to tax filers. For example, families may strand assets in a tax-exempt savings vehicle and incur tax penalties on their distribution if their child chooses not to go to college.
They may also fail to minimize their federal income tax liability by claiming a tax credit or deduction that yields less of a reduction in taxes than a different tax preference or by failing to claim any of their available tax preferences. For example, if a married couple filing jointly with one dependent in his/her first 2 years of college had an adjusted gross income of $50,000, qualified expenses of $10,000 in 2006, and tax liability greater than $2,000, their tax liability would be reduced by $2,000 if they claimed the Lifetime Learning credit but only $1,650 if they claimed the Hope credit (a computational sketch of this comparison appears below). In our 2005 report, we found that some people who appear to be eligible for tax credits and/or the tuition deduction did not claim them. The filers of about 77 percent of the tax year 2002 tax returns that we were able to review were apparently eligible to claim one or more of the three tax preferences. However, about 27 percent of those returns, representing about 374,000 tax filers, failed to use any of them. The amount by which these tax filers failed to reduce their tax averaged $169; 10 percent of this group could have reduced their tax liabilities by over $500. Suboptimal choices were not limited to tax filers who prepared their own tax returns. A possible indicator of the difficulty people face in understanding education-related tax preferences is how often the suboptimal choices we identified were found on tax returns prepared by paid tax preparers. We estimate that about 50 percent of the returns that appeared to have failed to optimally reduce the tax filer's liability were prepared by paid tax preparers. Generalized to the population of tax returns we were able to review, returns prepared by paid tax preparers represent about 223,000 of the approximately 447,000 suboptimal choices we found. Our April 2006 study of paid tax preparers corroborated the problem of confusion over which of the tax preferences to claim. Of the 9 undercover investigation visits we made to paid preparers using a scenario involving a taxpayer with a dependent college student, 3 preparers did not claim the credit most advantageous to the taxpayer and thereby cost these taxpayers hundreds of dollars in refunds. In our investigative scenario, the expenses and the year in school made the Hope education credit far more advantageous to the taxpayer than either the tuition and fees deduction or the Lifetime Learning credit. The apparently suboptimal use of postsecondary tax preferences may arise, in part, because of the complexity of using these provisions. Tax policy analysts have frequently identified postsecondary tax preferences as a set of tax provisions that demand a particularly large investment of knowledge and skill on the part of students and families or expert assistance purchased by those with the means to do so. They suggest that this complexity arises from multiple postsecondary tax preferences with similar purposes, from key definitions that vary across these provisions, and from rules that coordinate the use of multiple tax provisions. Twelve tax preferences are outlined in the IRS publication, Tax Benefits for Education, for use in preparing 2005 returns (the most recent publication available). The publication includes 4 different tax preferences for educational saving. Three of these preferences—Coverdell Education Savings Accounts, Qualified Tuition Programs, and U.S.
education savings bonds—differ across more than a dozen dimensions, including the tax penalty that occurs when account balances are not used for qualified higher education expenses, who may be an eligible beneficiary, annual contribution limits, and other features. In addition to learning about, comparing, and selecting tax preferences, filers who wish to make optimal use of multiple tax preferences must understand how the use of one tax preference affects the use of others. The use of multiple education-related tax preferences is coordinated through rules that prohibit the application of the same qualified higher education expenses for the same student to more than one education-related tax preference, sometimes referred to as "anti-double-dipping rules." These rules are important because they prevent tax filers from underreporting their tax liability. Nonetheless, anti-double-dipping rules are potentially difficult for tax filers to understand and apply, and misunderstanding them may have consequences for a filer's tax liability. Little is known about the effectiveness of federal grant and loan programs and education-related tax preferences in promoting attendance, choice, and the likelihood that students either earn a degree or continue their education (referred to as persistence). Many federal aid programs and tax preferences have not been studied, and for those that have been studied, important aspects of their effectiveness remain unexamined. In our 2005 report, we found no research on any aspect of effectiveness for several major title IV federal postsecondary programs and tax preferences. For example, no research had examined the effects of federal postsecondary education tax credits on students' persistence in their studies or on the type of postsecondary institution they choose to attend. Gaps in the research-based evidence of federal postsecondary program effectiveness may be due, in part, to data and methodological challenges that have proven difficult to overcome. The relative newness of most of the tax preferences also presents challenges because relevant data are just now becoming available. In 2002, we recommended that Education sponsor research into key aspects of effectiveness of title IV programs, that Education and the Department of the Treasury collaborate on such research into the relative effectiveness of title IV programs and tax preferences, and that the Secretaries of Education and Treasury collaborate in studying the combined effects of tax preferences and title IV aid. In April 2006, Education's Institute of Education Sciences (IES) issued a Request for Applications to conduct research on, among other things, "evaluating the efficacy of programs, practices, or policies that are intended to improve access to, persistence in, or completion of postsecondary education." Multiyear projects funded under this subtopic are expected to begin in July 2007. As we noted in our 2002 report, research into the effectiveness of different forms of postsecondary education assistance is important. Without such information, federal policymakers cannot make fact-based decisions about how to build on successful programs and make necessary changes to improve less effective programs. The budget deficit and other major fiscal challenges facing the nation necessitate rethinking the base of existing federal spending and tax programs, policies, and activities by reviewing their results and testing their continued relevance and relative priority for a changing society.
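Returning to the Hope and Lifetime Learning comparison worked through earlier, the following is a minimal Python sketch of the two credit computations. The Lifetime Learning rate (20 percent of up to $10,000 of expenses) follows from the $2,000 maximum cited earlier; the Hope credit's internal rate structure (100 percent of the first $1,100 of qualified expenses and 50 percent of the next $1,100, yielding the $1,650 maximum cited earlier) is our assumption for tax year 2006, and the sketch ignores the income phase-outs and positive-tax-liability requirement discussed above.

def hope_credit(expenses):
    # Assumed 2006 structure: 100% of the first $1,100 of qualified
    # expenses plus 50% of the next $1,100; maximum credit of $1,650.
    return min(expenses, 1100) + 0.5 * min(max(expenses - 1100, 0), 1100)

def lifetime_learning_credit(expenses):
    # 20% of up to $10,000 of qualified expenses; maximum of $2,000.
    return 0.2 * min(expenses, 10000)

# The couple in the example above had $10,000 in qualified expenses:
expenses = 10000
print(hope_credit(expenses))               # 1650.0
print(lifetime_learning_credit(expenses))  # 2000.0 -- the better choice

Note that the ranking reverses at lower expense levels: at $2,200 of expenses, for instance, the Hope credit yields $1,650 while the Lifetime Learning credit yields only $440, which is why the better choice depends on a filer's particular facts.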
In light of the long-term fiscal challenge this nation faces and the need to make hard decisions about how the federal government allocates resources, this hearing provides an opportunity to continue a discussion about how the federal government can best help students and their families pay for postsecondary education. Some questions that Congress should consider during this dialogue include the following:
Should the federal government consolidate postsecondary education tax provisions to make them easier for the public to use and understand?
Given its limited resources, should the government further target title IV programs and tax provisions based on need or other factors?
How can Congress best evaluate the effectiveness and efficiency of postsecondary education aid provided through the tax code?
Can tax preferences and title IV programs be better coordinated to maximize their effectiveness?
Mr. Chairman and Members of the Committee, this concludes our statement. We welcome any questions you have at this time. For further information regarding this testimony, please contact Michael Brostek at (202) 512-9039 or brostekm@gao.gov or George Scott at (202) 512-7215 or scottg@gao.gov. Individuals making contributions to this testimony include David Lewis, Assistant Director; Jeff Appel, Assistant Director; Shirley Jones, Sheila McCoy, John Mingus, Jeff Procak, Carlo Salerno, Andrew Stephens, and Michael Volpe. The federal government helps students and families save, pay for, and repay the costs of postsecondary education through grant and loan programs authorized under title IV of the Higher Education Act of 1965, and through tax preferences—reductions in federal tax liabilities that result from preferential provisions in the tax code, such as exemptions and exclusions from taxation, deductions, credits, deferrals, and preferential tax rates. Assistance provided under title IV programs includes Pell Grants for low-income students, the newly established Academic Competitiveness and National Science and Mathematics Access to Retain Talent Grants, PLUS loans, which parents as well as graduate and professional students may apply for, and Stafford loans. While each of the three grant types reduces the price paid by the student, student loans help to finance the remaining costs and are to be repaid according to varying terms. Stafford loans may be either subsidized or unsubsidized. The federal government pays the interest cost on subsidized loans while the student is in school and during a 6-month period after the student leaves school, known as the grace period. For unsubsidized loans, students are responsible for all interest costs. Stafford and PLUS loans are provided to students through both the Federal Family Education Loan (FFEL) program and the William D. Ford Direct Loan Program (FDLP). The federal government's role in financing and administering these two loan programs differs significantly. Under the FFEL program, private lenders, such as banks, provide loan capital and make loans, and the federal government guarantees FFEL lenders a minimum yield on the loans they make and repayment if borrowers default. Under FDLP, federal funds are used as loan capital and loans are provided through participating schools. The Department of Education and its private-sector contractors jointly administer the program.
Title IV also authorizes programs funded by the federal government and administered by participating higher education institutions, including the Supplemental Educational Opportunity Grant (SEOG), Perkins loans, and federal work-study aid, collectively known as campus-based aid. To receive title IV aid, students (along with parents, in the case of dependent students) must complete a Free Application for Federal Student Aid form. Information from the FAFSA, particularly income and asset information, is used to determine the amount of money—called the expected family contribution—that the student and/or family is expected to contribute to the student's education. Statutory definitions establish the criteria that students must meet to be considered independent of their parents for the purpose of financial aid, and statutory formulas establish the share of income and assets that are expected to be available for the student's education. Once the EFC is established, it is compared with the cost of attendance at the institution chosen by the student. The cost of attendance comprises tuition and fees; room and board; books and supplies; transportation; miscellaneous personal expenses; and, for some students, additional expenses. If the EFC is greater than the cost of attendance, the student is not considered to have financial need, according to the federal aid methodology. If the cost of attendance is greater than the EFC, then the student is considered to have financial need. Title IV assistance that is made on the basis of the calculated need of aid applicants is called need-based aid. Key characteristics of title IV programs are summarized in table 5 below. Prior to the 1990s, virtually all major federal initiatives to assist students with the costs of postsecondary education were provided through grant and loan programs authorized under title IV of the Higher Education Act. Since the 1990s, however, federal initiatives to assist families and students in paying for postsecondary education have largely been implemented through the federal tax code. The federal tax code now contains a range of tax preferences that may be used to assist students and families in saving for, paying, or repaying the costs of postsecondary education. These tax preferences include credits and deductions, both of which allow tax filers to use qualified higher education expenses to reduce their federal income tax liability. The tax credits reduce the tax filers' income tax liability on a dollar-for-dollar basis but are not refundable. Tax deductions permit qualified higher education expenses to be subtracted from income that would otherwise be taxable. To benefit from a higher education tax credit or tuition deduction, a tax filer must use tax form 1040 or 1040A, have an adjusted gross income below the provisions' statutorily specified income limits, and have a positive tax liability after other deductions and credits are calculated, among other requirements. Tax preferences also include tax-exempt savings vehicles. Section 529 of the tax code makes the investment income from qualified tuition programs tax free. There are two types of qualified tuition programs: savings programs established by states and prepaid tuition programs established either by states or by one or more eligible educational institutions. Another tax-exempt savings vehicle is the Coverdell Education Savings Account. Tax penalties apply to both 529 programs and Coverdell savings accounts if the funds are not used for allowable education expenses.
Key features of these and other education-related tax preferences are described below, in table 6. Our review of tax preferences did not include exclusions from income, which permit certain types of education-related income to be excluded from the calculation of adjusted gross income on which taxes are based. For example, qualified scholarships covering tuition and fees and qualified tuition reductions from eligible educational institutions are not included in gross income for income tax purposes. Similarly, student loans forgiven when a graduate goes into certain professions for a certain period of time are also not subject to federal income taxes. We also did not include special provisions in the tax code that extend existing tax preferences when tax filers support a postsecondary education student. For example, tax filers may claim postsecondary education students as dependents after age 18, even if the student has his or her own income over the limit that would otherwise apply. Also, gift taxes do not apply to funds used for certain postsecondary educational expenses, even for amounts in excess of the usual $11,000 limit on gifts. In addition, funds withdrawn early from an Individual Retirement Account are not subject to the usual 10 percent penalty when used for either a tax filer's or his or her dependent's postsecondary educational expenses. For an example of how the use of college savings programs and the tuition deduction is affected by "anti-double-dipping" rules, consider the following: To calculate whether a distribution from a college savings program is taxable, tax filers must determine if the total distributions for the tax year are more or less than the total qualified educational expenses reduced by any tax-free educational assistance, i.e., their adjusted qualified education expenses (AQEE). After subtracting tax-free assistance from qualified educational expenses to arrive at the AQEE, tax filers multiply total distributed earnings by the fraction (AQEE / total amount distributed during the year). If parents of a dependent student paid $6,500 in qualified education expenses from a $3,000 tax-free scholarship and a $3,600 distribution from a tuition savings program, they would have $3,500 in AQEE. If $1,200 of the distribution consisted of earnings, then $1,200 x ($3,500 AQEE / $3,600 distribution) would result in $1,167 of the earnings being tax free, while $33 would be taxable. However, if the same tax filer had also claimed a tuition deduction, anti-double-dipping rules would require the tax filer to subtract the expenses taken into account in figuring the tuition deduction from AQEE. If $2,000 in expenses had been used toward the tuition deduction, then the taxable distribution from the section 529 savings program would rise to $700. For families such as these, anti-double-dipping rules increase the computational complexity they face and may result in unanticipated tax liabilities associated with the use of section 529 savings programs. We used two data sets for this testimony: Education's 2003-2004 National Postsecondary Student Aid Study and the Internal Revenue Service's 2002 and 2004 Statistics of Income. Estimates from both data sets are subject to sampling error, and the estimates we report are surrounded by a 95 percent confidence interval. The following tables provide the lower and upper bounds of the 95 percent confidence interval for all estimate figures in the tables in this testimony.
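Before turning to those tables, the anti-double-dipping computation illustrated above can be restated as a short Python sketch; the function and argument names are our own, and the sketch covers only the single interaction worked through in the example (a 529 distribution combined with tax-free assistance and, optionally, expenses already counted toward the tuition deduction).

def taxable_529_earnings(qualified_expenses, tax_free_assistance,
                         distribution, earnings,
                         tuition_deduction_expenses=0):
    # AQEE = qualified expenses, less tax-free assistance, less any
    # expenses already counted toward the tuition deduction. The
    # tax-free share of distributed earnings is earnings * (AQEE /
    # total distribution); the remainder is taxable.
    aqee = (qualified_expenses - tax_free_assistance
            - tuition_deduction_expenses)
    tax_free_earnings = earnings * (aqee / distribution)
    return earnings - tax_free_earnings

# The example above: $6,500 in expenses, a $3,000 scholarship, and a
# $3,600 distribution of which $1,200 is earnings.
print(round(taxable_529_earnings(6500, 3000, 3600, 1200)))        # 33
# With $2,000 of expenses used for the tuition deduction, AQEE falls
# to $1,500 and the taxable portion of the earnings rises to $700.
print(round(taxable_529_earnings(6500, 3000, 3600, 1200, 2000)))  # 700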
For figures drawn from these data, we provide both point estimates and confidence intervals. | Federal assistance helps students and families pay for postsecondary education through several policy tools--grant and loan programs authorized by title IV of the Higher Education Act of 1965 and more recently enacted tax preferences. This testimony summarizes and updates our 2005 report on (1) how title IV assistance compares to that provided through the tax code, (2) the extent to which tax filers effectively use postsecondary tax preferences, and (3) what is known about the effectiveness of federal assistance. This hearing is an opportunity to consider whether any changes should be made in the government's overall strategy for providing such assistance or to the individual programs and tax provisions that provide the assistance. This statement is based on previously published GAO work and reviews of relevant literature. Title IV student aid and tax preferences provide assistance to a wide range of students and families in different ways. While both help students meet current expenses, tax preferences also assist students and families with saving for and repaying postsecondary costs. Both serve students and families with a range of incomes, but some forms of title IV aid--grant aid, in particular--provide assistance to those whose incomes are lower, on average, than is the case with tax preferences. Tax preferences require more responsibility on the part of students and families than title IV aid because taxpayers must identify applicable tax preferences, understand complex rules concerning their use, and correctly calculate and claim credits or deductions. While the tax preferences are a newer policy tool, the number of tax filers using them has grown quickly, surpassing the number of students aided under title IV in 2002. Some tax filers do not appear to make optimal education-related tax decisions. For example, among the limited number of 2002 tax returns available for our analysis, 27 percent of eligible tax filers did not claim either the tuition deduction or a tax credit. In so doing, these tax filers failed to reduce their tax liability by $169, on average, and 10 percent of these filers could have reduced their tax liability by over $500. One explanation for these taxpayers' choices may be the complexity of postsecondary tax provisions, which experts have commonly identified as difficult for tax filers to use. Little is known about the effectiveness of title IV aid or tax preferences in promoting, for example, postsecondary attendance or school choice, in part because of research data and methodological challenges. As a result, policymakers do not have information that would allow them to make the most efficient use of limited federal resources to help students and families.
The HUBZone program was established by the HUBZone Act of 1997 to stimulate economic development through increased employment and capital investment by providing federal contracting preferences to small businesses in economically distressed communities. These areas, which are designated based on certain economic and census data, are known as HUBZones. As of January 2009, there were approximately 9,300 firms listed in the Central Contractor Registration database as participating in the HUBZone program. To ensure HUBZone areas receive the economic benefit from the program, SBA is responsible for determining whether firms meet HUBZone program requirements. To participate in the HUBZone program, small business firms generally must meet certain criteria established by the SBA, most notably: (1) the firm must be at least 51 percent owned and controlled by one or more U.S. citizens; (2) at least 35 percent of its employees must live in a HUBZone; (3) the principal office (i.e., the location where the greatest number of qualifying employees perform their work) must be located in a HUBZone; and (4) the firm must qualify as a small business under the size standard that corresponds with its primary industry classification. In addition, once a firm receives a HUBZone contract, the firm is required to abide by certain subcontracting limitations, which for most firms means expending at least 50 percent of the personnel costs of a contract on their own employees or the employees of other qualified HUBZone small business concerns. The SBA is legally responsible for ensuring that program participants meet program requirements. If a HUBZone firm does not meet program requirements or fails to notify the SBA of material changes that affect the firm's HUBZone eligibility, the SBA may use a variety of enforcement tools against the firm. Depending on the severity of the infraction, SBA can (1) decertify and remove the firm from the list of qualified HUBZone firms, (2) suspend and/or debar the firm from all federal contracts, and/or (3) refer the firm to the Department of Justice for civil and/or criminal prosecution. In July 2008, we testified that SBA's lack of controls over the HUBZone program exposed the government to fraud and abuse. Specifically, we identified substantial vulnerabilities in SBA's application and monitoring process by demonstrating the ease of obtaining HUBZone certification. For example, by using fictitious employee information and fabricated documentation, we easily obtained HUBZone certification for four bogus firms. In addition, we also identified 10 firms from the Washington, D.C., metro area that were participating in the HUBZone program even though they clearly did not meet eligibility requirements. In June 2008, we reported that the Small Business Administration needed to take additional actions to certify and monitor HUBZone firms as well as to assess the results of the HUBZone program. Specifically, we found that the map SBA used to publicize qualified HUBZone areas was inaccurate. In addition, we found that the mechanisms that SBA used to certify and monitor HUBZone firms did not meet federal internal control standards and provided limited assurance that only eligible firms participated in the program. For example, SBA verified the information reported by firms on their application or during recertification—its process for monitoring firms—in limited instances and did not follow its own policy of recertifying all firms every 3 years.
In the report, we made five recommendations designed to improve SBA's administration and oversight of the HUBZone program. We recommended that SBA correct and update its HUBZone map, develop and implement guidance to ensure more routine verification of application data, eliminate its backlog of recertifications, formalize and adhere to a specific time frame for decertifying ineligible firms, and further assess the effectiveness of the program. In responding to a draft of this report, SBA agreed with these recommendations and outlined steps that it plans to take to address them. HUBZone program fraud and abuse continues to be problematic for the federal government. We identified 19 firms in the states of Texas, Alabama, and California participating in the HUBZone program even though they clearly do not meet program requirements. Although we cannot conclude whether this is a systemic problem based on these cases, as shown in figure 1 below, the issue of misrepresentation clearly extends beyond the Washington, D.C., metropolitan area. In fiscal years 2006 and 2007, federal agencies obligated a total of nearly $30 million to these firms for performance as the prime contractor on federal HUBZone contracts. HUBZone regulations also place restrictions on the amount of work that can be subcontracted to non-HUBZone firms. Specifically, HUBZone regulations generally require firms to expend at least 50 percent of the personnel costs of a contract on their own employees. As part of our investigative work, we found examples of service firms that subcontracted a substantial majority of HUBZone contract work to other non-HUBZone firms and thus did not meet this program requirement. When a firm subcontracts the majority of its work to non-HUBZone firms, it undermines the HUBZone program's stated purpose of stimulating development in economically distressed areas and evades the principal office and 35 percent residency requirements. According to HUBZone regulations, persons or firms are subject to criminal penalties for knowingly making false statements or misrepresentations in connection with the HUBZone program including failure to correct "continuing representations" that are no longer true. During the application process, applicants are not only reminded of the program eligibility requirements, but are required to agree to the statement that anyone failing to correct "continuing representations" shall be subject to fines, imprisonment, and penalties. Further, the Federal Acquisition Regulation (FAR) requires all prospective contractors to update the government's Online Representations and Certifications Application (ORCA), which includes a statement certifying whether the firm is currently a HUBZone firm and that there have been "no material changes in ownership and control, principal office, or HUBZone employee percentage since it was certified by the SBA." Of the 19 firms that did not meet HUBZone eligibility requirements, we found that all of them continued to represent themselves as eligible HUBZone interests to SBA. Because the 19 case examples clearly are not eligible, we consider each firm's continued representation indicative of fraud and/or abuse related to this program. Table 1 highlights 10 firms that we found to be egregiously out of compliance with HUBZone program requirements. Appendix I provides details on the other 9 cases that we examined.
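The certification criteria and the performance-of-work rule described above amount to a small set of checkable conditions. The following Python sketch restates them for illustration only; the function and field names are hypothetical, the size-standard and principal-office determinations are reduced to boolean inputs, and nothing here reproduces the full regulatory definitions.

import math

def meets_hubzone_certification(us_citizen_ownership_pct, employees,
                                hubzone_resident_employees,
                                principal_office_in_hubzone,
                                qualifies_as_small_business):
    # The four certification criteria described above; the 35 percent
    # residency test implies a minimum head count of HUBZone residents.
    min_residents = math.ceil(0.35 * employees)
    return (us_citizen_ownership_pct >= 51
            and hubzone_resident_employees >= min_residents
            and principal_office_in_hubzone
            and qualifies_as_small_business)

def meets_performance_of_work(own_personnel_costs, total_personnel_costs):
    # Post-award subcontracting limitation: at least 50 percent of the
    # personnel costs of a contract spent on the firm's own employees
    # (or, under the regulations, those of other HUBZone firms).
    return own_personnel_costs >= 0.5 * total_personnel_costs

# Applied to case study 4 discussed later in this report: a firm with
# 116 employees would need at least ceil(0.35 * 116) = 41 HUBZone
# residents, so its 18 resident employees fall 23 short.
print(math.ceil(0.35 * 116))  # 41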
We will be referring all 19 firms to SBA for further investigation and consideration for removal from the program. The following is a more detailed description of fraud and abuse from 3 of the cases that we investigated. Case Study 1: Our investigation clearly showed that this firm was being used as a front company because it was subcontracting the majority of its work to other firms. This firm is located in Fort Worth, Texas, and violated HUBZone program requirements because it did not expend at least 50 percent of personnel costs on its own employees or on the personnel of other HUBZone firms as required by federal regulations. This firm, which consists of 8 employees, has obtained millions of dollars in HUBZone contracts to provide environmental consulting services. At the time of our investigation, company documents showed that the company was subcontracting between 71 and 89 percent of its total contract obligations to other non-HUBZone firms—in some cases, large firms. The principal admitted that her firm was not meeting contract performance requirements required by HUBZone regulations. Further, the principal stated that the firm made bids on HUBZone contracts knowing that the company would have to subcontract work to other firms after the award. The principal added that other large firms use HUBZone firms in this manner, referring to these HUBZone firms as "contract vehicles." By subcontracting the majority of its HUBZone work to non-HUBZone firms, this firm is clearly abusing its HUBZone designation and undermining the HUBZone program's stated purpose of stimulating small business development in economically distressed areas. Likewise, because the subcontracting is being conducted by non-HUBZone firms this firm is also evading eligibility requirements for principal office and the 35 percent residency requirement. This firm has been obligated over $2.3 million in HUBZone set-asides during fiscal years 2006 and 2007. Case Study 2: Our investigation demonstrated that this firm continued to misrepresent itself as HUBZone-eligible while failing to meet HUBZone requirements. This firm, which is a two-person—father and son—ground maintenance services company located in Jacksonville, Alabama, did not meet the principal office requirement, failed the 35 percent residency requirement, and served as a front company—subcontracting most of its HUBZone work to non-HUBZone firms. Our investigation found that the purported principal office was in fact a residential trailer in a trailer park. As shown in figure 2 below, the "suite number" of the principal office provided to SBA was actually the trailer number. The president of the company claimed that the trailer is the principal office and that an employee lived at that trailer. However, our investigation found that the president knowingly misrepresented and concealed material facts to a GAO investigator. We found that both employees live in non-HUBZone areas that are located about 90 miles from the trailer. Additionally, we verified that the trailer is occupied by someone not associated with the company. Further, our investigation found that neither employee lived in, nor worked at, the residential trailer since August 2007. Specifically, the U.S. Postal Service provided us a copy of the change of address form dated August 2007 that instructed the Postal Service to forward all mail from the trailer to another office in Birmingham, Alabama, which is not located in a HUBZone area.
In addition, we obtained utility bill information that indicated that the last utility bill was paid by the firm in August 2007. According to SBA's Dynamic Small Business Search (DSBS) database, SBA most recently certified the firm at this address in April 2008. During the course of our investigation, this firm provided investigators with questionable documents in an attempt to make the residential trailer appear to be its actual principal office. As figure 3 shows, after our original interview with the president, we found that a new mailbox with the company name had been installed next to other mailboxes in the trailer park to give the perception that the firm resided at this trailer park. Despite the evidence that this firm had not paid utility bills or received mail at this location for over a year, the firm president also provided us with a "rental agreement" stating that the company was renting the trailer until June 2009. The authenticity of this "rental agreement" is highly suspicious given the evidence we gathered and our confirmation that an individual not related to the company was living in the trailer. For fiscal years 2006 and 2007, this firm received more than $900,000 in HUBZone set-aside obligations. Case Study 4: We determined that during the period of our investigation this firm represented itself as HUBZone certified while failing to meet both the 35 percent residency and principal office HUBZone eligibility requirements. This firm, which is located in Huntsville, Alabama, and provides information technology services, self-certified in ORCA in July 2008 that it was a HUBZone firm and that there had been "no material changes in ownership and control, principal office, or HUBZone employee percentage since it was certified by the SBA." The firm was certified by the SBA as a HUBZone firm in June 2002. Based on our review of payroll records and written correspondence that we received from the firm, we determined that the firm failed the 35 percent HUBZone residency requirement. These documents indicated that only 18 of the firm's 116 employees (16 percent) who were employed in December 2007 lived in HUBZone-designated areas. To have met the 35 percent residency requirement, the firm would have needed at least 41 employees residing in HUBZone-designated areas; thus, the firm fell 23 employees short of this requirement. In addition, we investigated the location that the firm purported to the SBA as its "principal office." Our investigation found that no employees were located at this office. Additional investigative work revealed that the firm's primary office was not located in a HUBZone. During the interview, the firm's president acknowledged that he "had recently become aware" that he was not in compliance with HUBZone requirements and was taking "corrective actions." However, the firm continued to represent itself as a HUBZone firm even after the firm's president acknowledged his company did not meet the program requirements. Based on our analysis of FPDS-NG data, between fiscal years 2006 and 2007 federal agencies obligated over $5.0 million in HUBZone awards to this firm, consisting mainly of 2 HUBZone set-aside contracts awarded by the Department of the Navy. Our June 2008 report and July 2008 testimony clearly showed that SBA did not have effective internal controls related to the HUBZone program. In response to our findings and recommendations, SBA initiated a process of reengineering the HUBZone program.
SBA officials stated that this process is intended to make improvements to the program that are necessary for making the program more effective while also minimizing fraud and abuse. To that end, SBA has hired business consultants as well as reached out to GAO in an attempt to identify control weaknesses in the HUBZone program and to strengthen its fraud prevention controls. Although SBA has initiated steps to address internal control deficiencies we identified in our June 2008 report, SBA has not yet incorporated effective controls for preventing, detecting, and investigating fraud and abuse within the HUBZone program. Internal controls comprise the plans, methods, and procedures used to meet missions, goals, and objectives and also serve as the first line of defense in safeguarding assets and preventing and detecting errors and fraud. Fraud prevention, on the other hand, requires a system of rules, which, in their aggregate, minimize the likelihood of fraud occurring while maximizing the possibility of detecting any fraudulent activity that may transpire. Fraud prevention systems set forth what actions constitute fraudulent conduct and specifically spell out who in the organization handles fraud matters under varying circumstances. The potential of being caught most often persuades likely perpetrators not to commit fraud. Because of this principle, the existence of a thorough fraud prevention system is essential to fraud prevention and detection. As of the end of our fieldwork, SBA did not have in place the key elements of an effective fraud prevention system. As shown in figure 4 below, a well-designed fraud prevention system (which can also be used to prevent waste and abuse) should consist of three crucial elements: (1) upfront preventive controls, (2) detection and monitoring, and (3) investigations and prosecutions. For the HUBZone program this would mean (1) front-end controls at the application stage, (2) fraud detection and monitoring of firms already in the program, and (3) the aggressive pursuit and prosecution of individuals committing fraud. In addition, as shown in figure 4, the organization should also use "lessons learned" from its detection and monitoring controls and investigations and prosecutions to design more effective preventive controls. We explain the three major fraud prevention elements in this model and how SBA is attempting to address them in further detail below. We have previously reported that fraud prevention is the most efficient and effective means to minimize fraud, waste, and abuse. Thus, controls that prevent fraudulent firms and individuals from entering the program in the first place are the most important element in an effective fraud prevention program. The most crucial element of effective fraud prevention controls is a focus on substantially diminishing the opportunity for fraudulent access into the system through front-end controls. Preventive controls should be designed to include, at a minimum, a requirement for data validation, system edit controls, and fraud awareness training. Prior to implementing any new preventive controls, agencies must adequately field test the new controls to ensure they are operating as intended. SBA officials stated that as part of their interim process they are now requesting, from all firms that apply to the HUBZone program, documentation that demonstrates their eligibility.
SBA stated that, in the past, it only requested additional information when it encountered obvious "red flags." Although requiring additional documentation has some value as a deterrent, the most effective preventive controls involve the verification of information, such as verifying a principal office location through an unannounced site visit. If SBA verified purported principal offices by conducting unannounced site visits, as we did during our investigation, it would likely find similar instances of firms attempting to defraud the HUBZone program. In addressing one of our prior recommendations, the SBA issued a Desktop Manual for processing HUBZone applications. The manual provides guidance that alerts SBA staff to circumstances that warrant the need for supporting documentation. Although the Desktop Manual provides discretion to the analyst about the need to conduct a site visit, it does not provide criteria for when such site visits are warranted. In addition, SBA does not screen firms or individuals to ensure that they are not affiliated with prior firms that failed program eligibility reviews. As a result, an owner can change the name of a company that was removed from the HUBZone program to a new business name and be accepted back into the HUBZone program. Further, SBA did not adequately field test its interim process for processing applications. If it had done so, SBA would have known that it did not have the resources to effectively carry out its review of applications in a timely manner. As a result, SBA had a backlog of about 800 HUBZone applications as of January 2009. At that time, SBA officials stated that it would take about 6 months to process each HUBZone application—well over the 1-month goal set forth in SBA regulations. Although preventive controls are the most effective way to prevent fraud, continual monitoring is an important component in detecting and deterring fraud. Monitoring and detection within a fraud prevention program involve actions such as data-mining for fraudulent and suspicious applicants and evaluating firms to provide reasonable assurance that they continue to meet program requirements. As demonstrated in our July 2008 testimony, SBA's fraud control vulnerabilities in its application process make detection and monitoring particularly important for the HUBZone program. As a result of SBA's control vulnerabilities, there are likely hundreds and possibly thousands of firms in the HUBZone program that fail to meet program requirements. Although monitoring and detection is an important component of a fraud prevention system, we reported in June 2008 that the mechanisms SBA used to monitor HUBZone firms provided limited assurance that only eligible firms participate in the program. Specifically, we reported that a firm could be in the HUBZone program for years without being examined. In addition, although a HUBZone firm is supposed to be recertified every 3 years, we reported that more than 40 percent of the firms in the program for over 3 years had not been recertified. To address these weaknesses, SBA officials stated that during this fiscal year, they will be conducting program examinations on all HUBZone firms that received contracts in fiscal year 2007 to determine whether they still meet HUBZone requirements. In addition, SBA officials stated that as of September 2008, SBA had eliminated its backlog of recertifications.
Although SBA has initiated several positive steps, SBA will need to make further progress to achieve an effective fraud monitoring program. For example, SBA has not found an effective and efficient way to verify the validity of a stated principal office during its recertification and application processes. In addition, SBA officials stated that although they modified their approach for conducting program examinations of HUBZone firms this fiscal year, they have not established a streamlined and risk-based methodology for selecting firms for program examinations going forward. Further, in order to determine whether firms meet eligibility requirements, SBA needs to incorporate an "element of surprise" into its program examinations, such as using random, unannounced site visits to verify a stated principal office. Finally, SBA does not evaluate all HUBZone program requirements during program examinations; specifically, SBA does not review whether HUBZone firms are expending at least 50 percent of the personnel costs of a contract on their own personnel. As a result, as shown by several of our case studies, certain firms are allowed to act as "front" companies, whereby they subcontract the large majority of their work to non-HUBZone firms. This undermines the program's stated purpose of increasing employment opportunities, investment, and economic development in HUBZone areas. The final element of an effective fraud prevention system is the aggressive investigation and prosecution of individuals who commit fraud against the federal government. However, SBA currently does not have an effective process for investigating fraud and abuse within the HUBZone program. Although SBA's Desktop Manual for Processing HUBZone Applications states that an analyst may refer a HUBZone application to the Office of Inspector General or the Office of General Counsel, SBA has not established specific criteria or a process for referring firms that knowingly do not meet program requirements. To date, other than the firms identified by our prior investigation, the SBA program office has never referred any firms for debarment and/or suspension proceedings based on its findings from program eligibility reviews. By failing to hold firms accountable, SBA has sent a message to the contracting community that there are no punishments or consequences for committing fraud or abusing the intent of the HUBZone program. However, as noted below, the SBA has started the debarment process on 7 of the 10 firms we found to have fraudulently or inaccurately misrepresented their HUBZone status in our earlier work. SBA has taken some enforcement steps on the 10 firms that we found did not meet HUBZone program requirements as of July 2008. According to SBA, as of January 2009, two of the firms have been removed from the program and two others are in the process of being removed. However, SBA's failure to examine some of the most egregious cases we previously identified has resulted in an additional $7.2 million in HUBZone obligations and about $25 million in HUBZone set-aside or price preference contracts to these firms. For example, a construction firm identified in our July 2008 testimony admitted that it did not meet HUBZone requirements and was featured in several national publications by name. It has continually represented itself as HUBZone certified and has received $2 million in HUBZone obligations and a $23 million HUBZone set-aside contract since our testimony.
See figure 5 for a reproduction of the continual representation this firm makes on the top banner of its Web site. In the written statement for the July 2008 hearing, the Acting Administrator of SBA stated that the SBA would take "immediate steps to require site visits for those HUBZone firms that have received HUBZone contracts and will be instituting suspension and debarment proceedings against firms that have intentionally misrepresented their HUBZone status." SBA has referred 7 of these firms to its General Counsel for suspension and debarment. However, as of February 2009, according to SBA's Dynamic Small Business Web site, 7 of the 10 firms that we investigated were still HUBZone certified. Table 2 highlights the 10 firms that we noted at the July 2008 hearing that clearly did not meet the HUBZone program requirements, the new HUBZone obligations and contracts these firms received, and the actions the SBA has taken against these firms as of January 2009. As noted in the table above, as of January 2009 SBA has conducted program evaluations on 7 of the 10 firms to determine whether the firms meet the eligibility requirements for the HUBZone program. Based on these evaluations, SBA has removed 2 firms from the HUBZone program and is in the process of providing due process to 2 additional firms to determine whether they should be removed. SBA officials stated that no action will be taken on 3 firms because SBA's program evaluations concluded that these firms met all the eligibility requirements of the HUBZone program. We attempted to verify SBA's work, but were not provided with the requested documentation to support its conclusion that the firms had moved into compliance after our July 2008 testimony. SBA officials said they have not yet performed program evaluations for 3 of the most egregious firms because they are experiencing technical problems with SBA's caseload system. As such, these 3 firms remain eligible to receive HUBZone set-aside contracts. SBA is also pursuing suspension and debarment actions for 7 of these firms, and the Department of Justice is considering civil actions on 5 of the 10 cases. Our work on the HUBZone program to date has shown that numerous ineligible firms have taken advantage of the opportunity to commit fraud against the federal government. The SBA has initiated steps to correct internal control deficiencies, but it still falls short in developing measures to prevent, detect, and prosecute fraud within the HUBZone program. Our work demonstrates that SBA's fraud controls lack important elements needed to screen and monitor firms, which has led to HUBZone awards to firms that did not meet program requirements. For example, SBA's failure to verify principal office locations through unannounced site visits has led to firms operating their businesses from locations that are far from economically disadvantaged. In addition, a lack of oversight for monitoring all of the program requirements has allowed HUBZone firms to subcontract large portions of HUBZone work to non-HUBZone firms, thereby failing to meet the program requirement that at least 50 percent of the personnel costs of a contract be expended on the firm's own employees. Lastly, SBA's lack of enforcement within the HUBZone program has not had the effect of deterring fraudulent actors from entering or remaining in the program. Going forward, SBA must develop and incorporate effective fraud controls into its overall internal control process that will minimize fraud and abuse in the HUBZone program.
To establish an effective fraud prevention system for the HUBZone program, the Administrator of the Small Business Administration should expeditiously implement the recommendations from our June 2008 report and take the following four actions:
Consider incorporating a risk-based mechanism for conducting unannounced site visits as part of the screening and monitoring process.
Consider incorporating policies and procedures into SBA's program examinations for evaluating whether a HUBZone firm is expending at least 50 percent of the personnel costs of a contract using its own employees.
Ensure appropriate policies and procedures are in place for the prompt reporting and referral of fraud and abuse to SBA's Office of Inspector General as well as SBA's Suspension and Debarment Official.
Take appropriate enforcement actions on the 19 HUBZone firms we found to violate HUBZone program requirements to include, where applicable, immediate removal or decertification from the program, and coordination with SBA's Office of Inspector General as well as SBA's Suspension and Debarment Official.
We received written comments on a draft of this report from SBA's Deputy Associate Administrator of the Office of Business Development and Government Contracting. In the response, SBA agreed with three of our four recommendations. SBA stated that it is in the process of reengineering the entire HUBZone certification and eligibility process, and SBA believes that our recommendations are useful in making necessary program changes to minimize program risk and ensure that only eligible firms receive HUBZone program benefits. SBA's written comments are provided in appendix II. SBA disagreed with our recommendation to consider incorporating policies and procedures into SBA's program examinations for evaluating whether a HUBZone firm is complying with the performance-of-work requirements by expending at least 50 percent of the personnel costs of a contract using its own employees. SBA stated that although this requirement is included in SBA HUBZone regulations, it is not a criterion for HUBZone program eligibility but rather a mandatory contract term. SBA stated that contracting officers are required by the Federal Acquisition Regulation to insert such clauses regarding subcontracting limitations. If firms submit bids that indicate that they will not meet this requirement or fail to meet this requirement during performance of the contract, the contracting officer has the authority to reject a firm's bid or terminate the contract for default. SBA stated that it will continue to work with contracting officers to ensure that this requirement is monitored. While we recognize that contracting officers have a responsibility for monitoring the subcontracting limitation, SBA also has this responsibility. In order to receive HUBZone certification, a firm must certify to SBA that it will abide by this performance requirement, and SBA is required by statute to establish procedures to verify such certifications. In addition, verification that a firm is meeting the performance-of-work requirements is one of the subjects that SBA may review during its program examinations. Since SBA is not performing this review, it is possible that many firms may be receiving the benefits of the HUBZone program while evading the program requirements. Therefore, we continue to believe that SBA should consider incorporating policies and procedures into SBA's program examinations for evaluating whether a HUBZone firm is meeting the performance-of-work requirements.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Administrator of the Small Business Administration and other interested parties. The report will also be available at no charge on GAO’s Web site at www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who contributed to this report are listed in appendix III. This appendix presents summary information on 9 of 19 firms that clearly did not meet the program eligibility requirements of the HUBZone program. Table 3 shows the remaining case studies that we investigated. As with the 10 cases discussed in the body of this report, these 9 firms continued to represent themselves as eligible HUBZone interests to SBA. Because these 9 case examples clearly are not eligible, we consider each firm’s continued representation indicative of fraud and/or abuse related to this program. In addition to the individual named above, Erika Axelson, Gary Bianchi, Donald Brown, Bruce Causseaux, Eric Eskew, Dennis Fauber, Craig Fischer, Robert Graves, Betsy Isom, Jason Kelly, Julia Kennon, Barbara Lewis, Olivia Lopez, Jeff McDermott, Andrew McIntosh, John Mingus, Andy O’Connell, Mary Osorno, Chris Rodgers, and Matt Valenta also provided assistance on this report. | The Small Business Administration's (SBA) Historically Underutilized Business Zone (HUBZone) program provides federal contracting assistance to small firms located in economically distressed areas, with the intent of stimulating economic development. In July 2008, GAO identified substantial vulnerabilities in SBA's application and monitoring process that demonstrated the HUBZone program is vulnerable to fraud and abuse. GAO also investigated 10 case studies of HUBZone firms in the Washington, D.C., area that misrepresented their eligibility. GAO was asked to determine (1) whether additional cases of fraud and abuse exist outside of the Washington, D.C., area; (2) what actions, if any, SBA has taken to establish an effective fraud prevention program for the HUBZone program; and (3) what actions, if any, SBA took against the 10 case study firms in GAO's July 2008 testimony. To meet these objectives, GAO identified selected HUBZone firms based on certain criteria, such as magnitude of HUBZone contracts and firm location. GAO also interviewed SBA officials and reviewed SBA data. GAO found that fraud and abuse in the HUBZone program extends beyond the Washington, D.C., area. GAO identified 19 firms in Texas, Alabama, and California participating in the HUBZone program that clearly do not meet program requirements (i.e., principal office location or percentage of employees in HUBZone and subcontracting limitations). For example, one Alabama firm listed its principal office as "Suite 19," but when GAO investigators performed a site visit they found the office was in fact trailer 19 in a residential trailer park. The individual living in the trailer had no relationship to the HUBZone firm. In fiscal years 2006 and 2007, federal agencies obligated nearly $30 million to these 19 firms for performance as the prime contractor on HUBZone contracts and a total of $187 million on all federal contracts. 
Although SBA has taken initial steps to strengthen its internal controls as a result of GAO's 2008 testimonies and report, substantial work remains to implement a fraud prevention system with effective controls consisting of (1) front-end controls at the application stage, (2) fraud detection and monitoring of firms already in the program, and (3) the aggressive pursuit and prosecution of individuals committing fraud. In addition, SBA did not adequately field test its interim process for processing applications. If it had done so, SBA would have known that it did not have the resources to effectively carry out its review of applications in a timely manner. As a result, SBA had a backlog of about 800 HUBZone applications as of January 2009. At that time, SBA's interim application process was taking about 6 months--well over the 1-month goal set forth in SBA regulations. SBA has taken some enforcement steps against the 10 firms previously identified by GAO that knowingly did not meet HUBZone program requirements. However, SBA's failure to promptly remove firms from the HUBZone program and examine some of the most egregious cases from GAO's July 2008 testimony has resulted in an additional $7.2 million in HUBZone obligations and about $25 million in HUBZone contracts to these firms. For example, a construction firm from the July 2008 testimony admitted that it did not meet HUBZone requirements and was featured by name in several national publications. It has continually represented itself as HUBZone certified and has received $2 million in HUBZone obligations and a $23 million HUBZone set-aside contract since the July 2008 testimony.
DOD strategic guidance and joint doctrine documents state that homeland defense is the department's highest priority. Joint doctrine identifies defense of the maritime domain as an essential component of the broader homeland defense mission. In furtherance of this mission, DOD employs a layered defense approach in which it attempts to mitigate threats across three areas, or layers, where maritime operations may be conducted. The first layer, referred to as the "forward regions," includes foreign land areas and sovereign waters outside the homeland. In this layer, the objective is to mitigate threats or prevent them from reaching the homeland. The second layer, referred to as the approaches, includes the waters extending from the limits of the homeland to the forward regions. The third layer, the homeland itself, includes the United States, its territories and possessions, and the surrounding territorial waters. Joint doctrine on homeland defense operations notes that DOD components maintain a high state of readiness and the flexible capabilities necessary for responding to threats of varying scale in the maritime approaches and the maritime homeland domain. In addition, DOD components must coordinate with interagency partners—such as the Coast Guard and U.S. Customs and Border Protection—which also have responsibility for ensuring the protection of the homeland from threats in the maritime domain. The principal means by which the U.S. government facilitates interagency coordination in determining primary and supporting agency responsibilities for maritime operations, including maritime homeland defense, is contained in the Maritime Operational Threat Response plan. The Maritime Operational Threat Response process is generally required to be used as maritime threats arise and provides a forum in which agency stakeholders can share information and coordinate an effective response that reflects the desired national outcome. Northern Command is the unified military command responsible for planning, organizing, and executing DOD's homeland defense mission within the continental United States, Alaska, Puerto Rico, the U.S. Virgin Islands, and U.S. territorial waters. Pacific Command has similar responsibilities in the Hawaiian Islands and U.S. territories in the Pacific. Both combatant commands receive support from a variety of commands and organizations in their direct chain of command and throughout DOD. Given that the area of responsibility of Northern Command includes the continental United States and many of its maritime approaches, this command plays a key role in defending the homeland by conducting operations to deter, prevent, and defeat threats and aggression aimed at the United States. Northern Command does not have an assigned Navy service component or naval forces routinely under its operational control, but the commander of U.S. Fleet Forces Command is a supporting commander and is designated as the joint force maritime component commander for Northern Command. Further, Northern Command must coordinate response operations with a number of other DOD and interagency stakeholders—such as Pacific Command and the Coast Guard. DOD identifies and develops capabilities needed by combatant commanders through the Joint Capabilities Integration and Development System process. This system was established to provide the department with an integrated, collaborative process to identify and guide development of new capabilities that address the current and emerging security environment.
One method by which this process starts is the development of a capabilities-based assessment. Such an assessment identifies the capabilities required to successfully execute missions such as the homeland defense mission, the shortfalls in existing systems to deliver those capabilities, and the possible solutions for the capability shortfalls. Next, the Joint Requirements Oversight Council—the body responsible for overseeing the military requirements system—may validate the findings from such assessments and direct relevant DOD organizations to undertake actions to close any capability gaps that are identified. After the validation of the findings from a capabilities-based assessment, the council may determine that (1) an identified gap presents an acceptable level of risk to operations and no further action is needed to address it; (2) the risk presented by a capability gap requires the development of a nonmateriel solution, such as changes to DOD doctrine; or (3) the risk presented by a capability gap requires a materiel solution, such as a new acquisition program. If materiel solutions are to be pursued, an initial capabilities document is produced. If only nonmateriel solutions are recommended, or a nonmateriel solution can be implemented independent of proposed materiel needs, a joint doctrine, organization, training, materiel, leadership and education, personnel, and facilities (DOTMLPF) change recommendation is produced. In 2004, the President issued National Security Presidential Directive-41/Homeland Security Presidential Directive-13, Maritime Security Policy, which directed the Secretaries of Defense and Homeland Security to jointly lead an interagency effort to prepare a National Strategy for Maritime Security. In 2005, the National Strategy for Maritime Security provided broad strategic objectives and identified strategic actions to be taken to enhance maritime domain awareness efforts. The strategy required DOD and the Departments of Homeland Security, Justice, and State to lead U.S. efforts to integrate and align all U.S. maritime security programs into a comprehensive, cohesive national effort that includes the appropriate state and local agencies, the private sector, and other nations. The Departments of Defense, Homeland Security, and Transportation each appointed an executive agent for maritime domain awareness to assist in coordinating efforts and informing maritime policy within and among federal agencies in order to enhance national maritime domain awareness efforts. Building on national guidance, DOD policy has established broad roles and responsibilities for maritime domain awareness efforts within the department but recognizes, as does national guidance, that enhancing maritime domain awareness must be a combined effort. DOD established some roles and responsibilities for departmental maritime domain awareness efforts in DOD Directive 2005.02E. This directive designates the Secretary of the Navy as the DOD Executive Agent for Maritime Domain Awareness and designates the Under Secretary of Defense for Policy to oversee the activities of the DOD Executive Agent for Maritime Domain Awareness. The directive also establishes several management functions for the Executive Agent for Maritime Domain Awareness to conduct in coordination with relevant partners, such as the Under Secretary of Defense for Policy and the Under Secretary of Defense for Intelligence.
Required management functions outlined in the directive include overseeing the execution of DOD maritime domain awareness initiatives; developing and distributing goals, objectives, and desired effects for maritime domain awareness; identifying and updating maritime domain awareness requirements and resources; and recommending DOD-wide maritime domain awareness planning and programming guidance. A 2009 Secretary of the Navy instruction on maritime domain awareness assigned the Chief of Naval Operations responsibility for achieving maritime domain awareness within the Navy. This responsibility includes aligning Navy guidance with DOD policy guidance and coordinating with the Joint Staff to ensure that combatant commands have the necessary Navy resources to support their respective maritime domain awareness requirements. DOD has made efforts to enhance maritime domain awareness within the department but recognizes that no single department, agency, or entity holds all of the authorities and capabilities necessary to fully achieve effective maritime domain awareness. For example, the process of allocating sufficient resources to maritime domain awareness efforts is complicated because the cost associated with these efforts is spread across multiple agencies; this also makes the total cost of maritime domain awareness efforts difficult to determine. Resources and funding for maritime capabilities can come from a number of sources, including national intelligence funding, military intelligence funding, military service funding, and funding from other interagency partners such as the Coast Guard, Customs and Border Protection, and the Maritime Administration. Coordination challenges such as resource allocation among agencies are common for interagency efforts like maritime domain awareness. DOD faces challenges unique to the maritime domain as well as challenges common to interagency coordination efforts in general. Challenges unique to the maritime domain include the need for international cooperation to ensure improved transparency in the registration of vessels and identification of ownership, cargoes, and crew of the world's multinational, multiflag merchant marine. Environmental factors unique to the maritime domain also contribute to maritime domain awareness challenges, such as the vastness of the oceans, the great length of shorelines, and the size of port areas that can provide concealment and numerous access points to the land. Additionally, the fluid nature of crewing and operational activities of most vessels offers additional opportunities for concealment and challenges for those attempting to maintain maritime security. In addition to challenges unique to the maritime domain are the challenges DOD faces that are common to other interagency coordination efforts. In 2009, we reported on interagency coordination challenges common to efforts such as achieving maritime domain awareness, including agencies not always sharing relevant information and the difficulties inherent in managing and integrating information drawn from multiple sources. As we previously reported, agencies may not always share information because of concerns about another agency's ability to protect shared information or to use the information properly; cultural factors or political concerns; a lack of clear guidelines, policies, or agreements with other agencies; or security clearance issues.
Challenges posed by managing and integrating information drawn from multiple sources include managing redundancies in the information after it is integrated; unclear roles and responsibilities; and data not being comparable across agencies. We have previously recommended that agencies involved in interagency collaboration enhance their efforts to develop and implement overarching strategies, create collaborative organizations, develop a well-trained workforce, and share and integrate national security information across agencies. Agencies generally agreed with our recommendations and, in some cases, identified planned actions or actions that were under way to address the recommendations. In a recent report, we reviewed DOD efforts to enhance maritime domain awareness and determined that DOD did not have a departmentwide strategy for maritime domain awareness. We concluded that in the absence of such a comprehensive strategy, DOD may not be effectively managing its maritime domain awareness efforts. In order to improve DOD's ability to manage implementation of maritime domain awareness across DOD, we recommended that DOD develop and implement a departmentwide strategy for maritime domain awareness that identifies DOD objectives and roles and responsibilities within DOD for achieving maritime domain awareness and aligns efforts and objectives with DOD's corporate process for determining requirements and allocating resources. Additionally, we recommended that the strategy identify responsibilities for resourcing capability areas and include performance measures for assessing the progress of the overall strategy that will assist in the implementation of maritime domain awareness efforts. An overarching maritime domain awareness strategy would also enhance interagency collaboration efforts. DOD concurred with our recommendation for an overarching maritime domain awareness strategy and has notified us that it is working on producing such a strategy. Northern Command, as the command responsible for homeland defense for the continental United States, has undertaken a number of homeland defense planning efforts, but it does not have a key detailed supporting plan for responding to maritime threats. Northern Command routinely conducts planning and exercises to prepare for execution of its maritime homeland defense mission. As part of its planning efforts, Northern Command requires supporting DOD organizations and subordinate commands to develop supporting plans to its homeland defense plan. The current, 2008 version of the Northern Command homeland defense plan requires such a supporting plan from a number of supporting commands, including the commander of Fleet Forces Command, who is Northern Command's supporting commander and also Northern Command's joint force maritime component commander. Fleet Forces Command has developed an execute order that contains some elements that would be addressed in a supporting plan. This execute order also provides general details about the types and numbers of forces that would be made available to Northern Command to execute the maritime homeland defense mission. Nonetheless, without a complete supporting plan, Northern Command faces increased uncertainty about its ability to execute its maritime homeland defense responsibilities. DOD provides guidance for developing contingency plans, establishing objectives, and identifying the capabilities needed to achieve those objectives in a given environment.
The planning process is meant to ensure mission success and to reduce the risks inherent in military operations. Contingency plans receive extensive DOD review and can take several forms, from very detailed operation plans to broad and less detailed concept plans. For example, operation plans are developed for possible contingencies across the range of military operations. Such plans may be developed for military operations dictated by a specific foreign threat or scenario, such as a scenario in which it is necessary to oppose a landward invasion of the territory of a U.S. ally by a hostile nation, while concept plans are prepared for less specific threat scenarios, such as disaster relief, humanitarian assistance, or peace operations. Operation plans identify the specific forces, functional support, and resources required to execute the plan. Some concept plans may similarly provide detailed lists of military forces that would provide required capabilities; however, not all concept plans must include such information. DOD guidance requires Northern Command to develop a homeland defense plan that prepares it to employ military force in response to unforeseen events, such as terrorist threats. The specific contingencies for which Northern Command should plan are directed by the President and the Secretary of Defense. Northern Command follows several sets of strategies and guidance when developing homeland defense plans—such as the National Defense Strategy of the United States of America, the Unified Command Plan, and Contingency Planning Guidance. Given that the potential threats to the homeland are broad, the Northern Command homeland defense plan is a general concept plan—as opposed to a detailed operation plan developed based on a specific threat or scenario. The current version of Northern Command's homeland defense plan, which was approved by DOD in 2008, contains a discussion of the maritime homeland defense mission area. The current version of the homeland defense concept plan does not contain detailed lists of military forces that would provide required capabilities in order to execute the plan. The Northern Command homeland defense plan requires supporting DOD organizations and subordinate commands to develop supporting plans to assist Northern Command in responding to homeland defense events. These organizations include Northern Command's subordinate commands, such as Joint Task Force Alaska and Joint Force Headquarters National Capital Region; component commands, such as Army Forces North, Air Forces North, and Marine Forces North; supporting commands, such as Fleet Forces Command and U.S. Transportation Command; and DOD agencies, such as the Defense Threat Reduction Agency and the Defense Intelligence Agency. The homeland defense plan provides its subordinate, component, and supporting commands and agencies with planning guidance, including types of incidents to prepare for and what kinds of plans to prepare to support Northern Command's homeland defense plan. Because the Northern Command homeland defense plan is a concept plan, which is by definition less detailed than an operation plan, and because the command does not have naval forces routinely under its operational control, these supporting plans provide critical details on how operations are to be conducted and allow Northern Command to assess the extent to which these organizations and subordinate commands are prepared to support the homeland defense mission.
For example, the supporting plan allows the supported commander to assess the extent to which the supporting command is prepared to address all appropriate areas of the broader plan. Supporting plans must adhere to the same joint doctrine standards as the base plans and should contain objectives, assumptions and constraints, and sections on such areas as command and control, task organization, intelligence, and logistics. Further, supporting plans can help guide subsequent specific actions that can enhance preparedness—such as the development of execute orders and training and readiness measures. Collectively, these supporting plans should help to facilitate preparedness for and adequate response to an incident in the homeland. Additional means by which Northern Command and DOD plan for executing maritime homeland defense operations include the use of standing execute orders and exercises to test the maritime component of the Northern Command homeland defense plan. DOD has developed standing execute orders in the homeland defense area to identify the general types and numbers of forces necessary to execute missions, including maritime homeland defense. According to DOD officials, these execute orders provide the authority for Northern Command to request allocation of additional forces needed to conduct maritime homeland defense missions. Additionally, Fleet Forces Command tracks and provides information to Northern Command on the ability of naval forces to satisfy requirements identified in the specific execute order. Exercises play an instrumental role in preparing for maritime homeland defense operations by providing opportunities to test plans, improve proficiency, assess capabilities and readiness, and clarify roles and responsibilities. Short of performance in actual operations, exercises provide the best means to assess the effectiveness of organizations in achieving mission preparedness. Exercises also provide an ideal opportunity to enhance preparedness by collecting, developing, implementing, and disseminating lessons learned and verifying corrective actions that have been taken to resolve previously identified issues. Northern Command established a maritime exercise branch in 2009, which focuses on exercising maritime homeland defense, maritime security, and maritime events related to defense support to civil authorities. Northern Command conducts maritime exercises in conjunction with other, larger-scale exercises. The 2008 Northern Command homeland defense plan requires a number of supporting entities—including the commander of Fleet Forces Command in his role as the joint force maritime component commander—to develop supporting plans within 60 days of the completion of Northern Command's 2008 plan. Fleet Forces Command did not provide such a supporting plan. The command developed a maritime homeland defense execute order, which in the view of Fleet Forces officials outlines a robust command and control structure for maritime operations and enables the execution of the maritime homeland defense mission in Northern Command's area of responsibility. The execute order addresses some elements that would be included in a supporting plan, such as reflecting the command relationships and concept of operations in Northern Command's homeland defense concept plan. The execute order also identifies the types of naval units that would respond to a maritime homeland defense threat and provides the authorities for these forces to be transferred to Northern Command control when needed.
A revision to the Northern Command concept plan for homeland defense is currently under review and, according to Northern Command officials, a similar requirement for a supporting plan from Fleet Forces Command is expected to be included. A complete supporting plan would provide additional details that are not generally present in execute orders. For example, according to DOD planning guidance, execute orders focus specifically on allocating forces and directing the initiation of military operations—whereas supporting plans contain information on objectives; assumptions and constraints; sections on such areas as command and control, task organization, intelligence, and logistics; and other details requested and required by the combatant commander. By completing a supporting plan, Fleet Forces Command would expand on the operations planning already done for the maritime homeland defense execute order and help Northern Command further mitigate planning, operations, and command and control challenges to the maritime homeland defense mission. DOD identifies and develops capabilities needed by combatant commanders through the Joint Capabilities Integration and Development System process. One method by which this process starts is the development of a systematic study—referred to as a capabilities-based assessment—that identifies the capabilities required to successfully execute a mission, capability gaps and associated operational risks, and possible solutions for the capability shortfalls. The Joint Requirements Oversight Council—the body responsible for overseeing the military requirements process—may validate the findings from such assessments and direct relevant DOD organizations to undertake actions to close any capability gaps that are identified. At the direction of the Deputy Secretary of Defense and in response to a request from the Assistant Secretary of Defense for Homeland Defense and Americas' Security Affairs, Northern Command agreed to lead a departmentwide, capabilities-based assessment for DOD's homeland defense and civil support missions. The strategic goals of the effort were to enable improvement in DOD homeland defense and civil support policy, evaluate existing DOD capabilities and identify DOD capability gaps, improve DOD's integration with interagency mission partners, and recommend further action to promote future capability development for the homeland defense and civil support missions. The Deputy Secretary of Defense identified this capabilities-based assessment as one of DOD's top 25 transformational priorities to be completed or advanced to a major milestone by December 2008 and an important effort for determining future resource allocation. DOD conducted the capabilities-based assessment between September 2007 and October 2008, in accordance with DOD processes. DOD agencies, the combatant commands, the military services, the National Guard Bureau, the Department of Homeland Security, and other key federal interagency partners participated in the assessment, which identified 31 capability gaps for DOD's homeland defense and civil support missions. According to our analysis, the assessment identified three gaps specific to the maritime homeland defense mission area, such as engaging and defeating maritime threats, and eight gaps in capabilities that enable a number of missions, including maritime homeland defense, such as information management and sharing.
The three maritime homeland defense capability gaps may affect DOD’s ability to coordinate maritime operations with relevant interagency stakeholders and respond to the full range of potential threats in the Northern Command maritime area of responsibility. For example, the assessment noted that the command lacked a robust understanding of the roles and responsibilities of its interagency partners, thus limiting the extent to which it could effectively coordinate interagency operations in response to maritime threats. Further, the assessment noted that the command’s ability to respond to certain threats without timely warning might be inadequate. In 2009, the Joint Requirements Oversight Council reviewed the capabilities-based assessment and requested relevant DOD organizations—including the Navy; the Office of the Under Secretary of Defense for Policy; the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics; DOD’s Biometrics Task Force; and the Defense Threat Reduction Agency—to undertake specific actions to address the identified capability gaps. Thirteen recommendations were directed at addressing the three capability gaps in the maritime homeland defense mission area. For example, Northern Command, with the support of Joint Forces Command, the U.S. Navy, and Joint Staff, was to review the reorganization of forces to assign a permanent naval component to Northern Command. In addition, the Defense Threat Reduction Agency, with the support of Strategic Command and the Domestic Nuclear Detection Office, was to integrate some nuclear detection efforts. The council requested that each organization responsible for undertaking recommended actions provide an implementation plan to Northern Command—thus facilitating the efforts of Northern Command and the council to track organizations’ progress in implementing recommendations. However, the responsible organizations did not provide Northern Command with implementation plans or other forms of documentation regarding actions taken or under way. Northern Command officials informed us that they requested information from these organizations to assess their progress and stated that Northern Command does not have the authority to compel those organizations to provide implementation plans. They noted that in the absence of implementation plans they relied on self-reported progress updates to document—where possible—the extent to which responsible organizations had taken the recommended actions. A Northern Command document used to track progress in implementing the recommended actions noted that of the 13 recommendations focused on maritime homeland defense, 2 had been implemented, 6 were in the process of being implemented, 4 had not yet been addressed, and there was no information available on the progress of the remaining recommendation. For example, one of the recommendations that had not yet been addressed related to assessing Navy and Coast Guard roles and responsibilities to ensure DOD’s ability to respond to the full spectrum of homeland defense threats in the maritime domain. Without implementation plans or other forms of documentation on progress in implementing recommended actions, Northern Command cannot be assured that it has full and accurate information about the extent to which the responsible organizations have implemented actions to address maritime homeland defense capability gaps. 
Without such documentation, DOD's efforts to effectively identify and direct necessary resources to meet maritime homeland defense needs may be further complicated. Because of its dedicated resources and presence in the maritime domain, DOD plays a key role in leading efforts to enhance maritime domain awareness and has identified challenges and initiated efforts to address these challenges in the domain. The 2005 National Plan to Achieve Maritime Domain Awareness, a national strategy document, states in its guiding principles that maritime domain awareness depends on extensive information sharing among government agencies, international partners (such as foreign governments and the International Maritime Organization), and private-sector stakeholders (such as the Customs-Trade Partnership Against Terrorism). Improved information sharing would enable DOD and its interagency partners, such as the Coast Guard, Customs and Border Protection, and the Maritime Administration, to better leverage existing data that have already been collected within the federal government, promote a shared awareness of potential threats, and facilitate a coordinated response to any identified national security threat. To improve information sharing, DOD has identified the need to adopt shared data standards that can translate legacy maritime data sources into a common information pool, making currently inaccessible data available. One effort, the National Maritime Domain Awareness Architecture, is focusing on creating a common pool of data and establishing data standards. The National Maritime Domain Awareness Architecture, an effort led by the DOD Executive Agent for Maritime Domain Awareness, is intended to improve data management and integration by establishing data standards, providing a common maritime language, and developing supporting technology. This effort is expected to leverage the existing National Information Exchange Model—an effort under way at DOD and the Departments of Homeland Security and Justice to establish data standards, including some applicable to the maritime domain—and provide supporting standards and guidance at a more detailed level. The National Information Exchange Model defines common terms. For example, it defines "length" as a numeric determination of measure that is recorded as six digits. The National Maritime Domain Awareness Architecture is intended to go beyond the National Information Exchange Model effort by determining which partners will have access to what information and defining how to query for automated responses—for example, by naming a port of interest, vessel type, and estimated time of arrival to attain specific information on what vessels are arriving at a particular port. Interagency participation in this effort is robust; the coordination office for Maritime Operational Threat Response has already agreed to adopt the standards. DOD officials told us that a number of countries—including France, England, and Canada—and organizations such as the North Atlantic Treaty Organization are already considering adopting the standards once they are developed. The first version of the standards has been published and is expected to be tested through summer exercises. As a result of this effort, access to information is expected to improve, and the amount of information available to inform analysts and operational commanders is expected to increase as information becomes easier to develop and share.
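The following sketch illustrates, in simplified form, what a shared data standard and the kind of automated query described above might look like. The field names and record layout are hypothetical stand-ins of our own; NIEM's actual exchange specifications are XML-based and far more elaborate.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class VesselRecord:
    """Hypothetical record built on shared field definitions; the names
    are illustrative, not drawn from NIEM itself."""
    name: str
    vessel_type: str
    length: int              # the report notes NIEM records length as a numeric value
    port_of_arrival: str
    estimated_arrival: datetime

def arrivals_of_interest(records: list[VesselRecord], port: str,
                         vessel_type: str, arriving_by: datetime) -> list[VesselRecord]:
    """Mimic the automated query the architecture is meant to support:
    name a port of interest, a vessel type, and an estimated time of arrival."""
    return [r for r in records
            if r.port_of_arrival == port
            and r.vessel_type == vessel_type
            and r.estimated_arrival <= arriving_by]

fleet = [VesselRecord("M/V Example", "container ship", 300,
                      "Port of Los Angeles", datetime(2011, 6, 1, 14, 0))]
print(arrivals_of_interest(fleet, "Port of Los Angeles", "container ship",
                           datetime(2011, 6, 2)))
```

The value of a shared standard is that every participating system interprets fields like `length` and `estimated_arrival` identically, so a query written once can be answered from any partner's data.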
One DOD official equated the anticipated expansion of available, displayable data with that of smart phone applications: once the infrastructure is in place, smart phone applications become easy to create and subscribe to. In this analogy, the infrastructure could potentially be provided through the National Maritime Domain Awareness Architecture effort. Rather than focusing on the development of one national common operational picture—presenting a single, unified display of maritime information—the National Maritime Domain Awareness Architecture may facilitate the accessibility of common data across the maritime community and allow stakeholders to focus on configuring the display of information to best meet their specific missions, whether through data analysis capabilities or geographic displays. DOD officials involved in the National Maritime Domain Awareness Architecture believe that if the effort is successful, maritime domain awareness for the nation and our partners would be enhanced. However, challenges would remain. National and DOD documents identify challenges affecting the sharing of maritime domain information, such as international coordination, policy and processes, technology, legal restrictions, and cultural barriers. DOD and interagency partners have efforts under way to address many of these challenges. International coordination: A DOD and interagency working group has noted that the sharing of passenger, crew, and cargo information is inhibited by a lack of international policy agreements. The existing information sharing environment, made up of a collection of networks, limits situational awareness and collaboration among maritime partners. To address these challenges, DOD is working with international partners, such as Singapore, to improve vessel-tracking procedures and with Micronesia, Malaysia, and Indonesia to improve the sharing of relevant vessel-tracking data; DOD is also supporting the Maritime Safety and Security Information System—a ship-tracking information sharing capability with over 60 participating nations. Policies and processes: DOD recognizes that multiple agencies and organizations have been collecting and storing identical information, either because agencies have been unaware of others' efforts or because agencies have been unable to share relevant information with other organizations in the absence of information sharing standards, agreements, policies, or processes to facilitate such sharing. Challenges such as these may be addressed through efforts like the Joint Integration of Maritime Domain Awareness, a 3-year joint test at Northern Command. This effort will identify policy and procedural improvements that could enhance information sharing between Northern Command and its supporting operational commands and is expected to be expandable to all combatant commands. Technology: DOD has identified uncoordinated data and incompatible technology systems as technological challenges to efforts to enhance maritime domain awareness. Without data standards, data such as the date an event occurred can be difficult to communicate, because this information can be recorded in several different ways depending on agency and personal preferences. The National Information Exchange Model is one effort under way to address data standardization. Another effort, the National Maritime Domain Awareness Architecture, is to establish a technology architecture that will allow currently incompatible technology systems to communicate and access common data.
Legal restrictions: The National Concept of Operations for Maritime Domain Awareness notes that there are legal restrictions on the sharing of public-private information, classified material, protected critical infrastructure information, and sensitive industry or government data. There are also privacy concerns that arise regarding the sharing of information, such as the sharing of certain information from passenger lists. Cultural barriers: DOD recognizes that the culture of overprotecting information impedes the transfer and sharing of information in a lawful manner. For example, some data providers are reluctant to share detailed information due to concerns that the information will not be appropriately protected. Building relationships—such as the colocation of defense, law enforcement, and international partners at Joint Interagency Task Force-South—and direct, real-time communication help to alleviate this cultural challenge. The Maritime Operational Threat Response process is another good example of overcoming cultural barriers; it provides a venue for direct, real-time communication among key decision makers during specific maritime threat events in order to quickly coordinate a national response to a maritime threat. While efforts under way may enhance national maritime domain awareness, DOD recognizes that opportunities for improvement remain. For example, the Office of the DOD Executive Agent for Maritime Domain Awareness noted that DOD lacks the ability to assess progress and investments in maritime domain awareness as a whole, align maritime domain awareness initiatives and advancements across DOD components and with other interagency efforts, and make informed planning and programming recommendations to align resources to requirements and priorities. We recommended in a prior report, and DOD agreed, that DOD should develop and implement a strategy for maritime domain awareness that establishes objectives, roles, and responsibilities for maritime domain awareness and includes performance measures. Such a strategy would enhance interagency coordination and assist in leveraging and aligning existing and ongoing information sharing and dissemination efforts in the maritime domain. DOD has recognized defense of the homeland as one of its key responsibilities. In meeting this responsibility with regard to the maritime domain—which presents a range of threats—DOD must work with interagency partners to both improve the awareness of these threats and effectively coordinate an appropriate response. Northern Command has a unique role in preparing for and conducting homeland defense missions, and the command has worked to improve its coordination with its interagency, state, local, and international partners. As Northern Command's command and control relationships may rely on increased coordination with these partners and other DOD supporting components, efforts to improve its preparedness through planning and exercising with these other organizations and working together to address identified capability gaps are important to ensure that the command can effectively deal with maritime threats as they occur. DOD uses its planning and exercising processes to increase the level of assuredness that threats can be neutralized should they arise. These processes allow the department to assess its preparedness to address various contingencies. Northern Command and its partners inside and outside of DOD continue to improve planning and preparedness for maritime homeland defense.
With the completion of the joint force maritime component commander's supporting plan, Northern Command and its partners can further capitalize on these efforts and better inform each other and decision makers about their preparedness for this mission. As DOD and the rest of government face increasing demand and competition for resources, policymakers will confront difficult decisions on funding priorities. Planning undertaken by Northern Command and its supporting commands also informs the department's resourcing and investment decisions by identifying the types and numbers of forces, as well as other capabilities, necessary to meet a variety of threats. DOD's identification of capability gaps affecting its homeland defense mission, as well as subsequent actions to address these gaps, helps the department understand its preparedness to conduct this mission. However, without completed implementation plans, the department does not have a means of verifying that these actions have been taken and these gaps have been addressed. The completion of these implementation plans would provide Northern Command and the Joint Requirements Oversight Council with the ability to monitor progress made in addressing these gaps and would serve as an additional source of information to inform resourcing and investment decisions and assist DOD in making the best use of resources in a fiscally constrained environment. To improve DOD's preparedness to conduct maritime homeland defense missions, we recommend that the Secretary of Defense take the following two actions: (1) to ensure that Northern Command is sufficiently prepared to conduct maritime homeland defense operations, direct the commander of Fleet Forces Command to develop a complete supporting plan for the Northern Command homeland defense plan, currently under review, once it is approved; and (2) to enable Northern Command to monitor progress toward addressing the maritime homeland defense capability gaps identified in the Northern Command homeland defense and civil support capabilities-based assessment—including the three specific to maritime homeland defense as well as the others that affect the mission—direct responsible DOD organizations to provide Northern Command with implementation plans for undertaking the actions identified by the Joint Requirements Oversight Council. In written comments on a draft of this report, DOD partially concurred with our recommendations and discussed actions it is taking—or plans to take—related to the issues raised by our recommendations. Regarding our recommendation that the Secretary of Defense direct the commander of Fleet Forces Command to develop a complete supporting plan to the revised Northern Command homeland defense plan as soon as the revision is approved, DOD indicated that in addition to participating in the development of the current draft of Northern Command's homeland defense concept plan, Fleet Forces Command will prepare a supporting plan in accordance with the requirement. DOD stated that further direction from the Secretary of Defense to a service subordinate command was neither appropriate nor required. In this report we cite the importance of complete supporting plans to DOD's joint operation planning process.
The completion of a supporting plan from the joint force maritime component commander, as was requested in the 2008 homeland defense plan and is expected to be requested again in the new version of the plan, will further aid Northern Command and DOD in capitalizing on other important prior and ongoing efforts by Fleet Forces Command and others. If Fleet Forces Command—as the joint force maritime component commander for Northern Command—develops a complete homeland defense supporting plan, this will satisfy the recommendation, and we believe this will improve the department's overall preparedness to conduct maritime homeland defense. DOD also partially concurred with our recommendation that the responsible department organizations provide Northern Command with implementation plans for undertaking the actions identified by the Joint Requirements Oversight Council. In its comments, DOD stated that Northern Command will identify actions yet to be completed, ascertain the utility in completing those actions, and close out recommendations that may no longer be required. The department also stated that Northern Command had diligently tracked the implementation of the identified actions, although implementation plans were not received from the myriad organizations responsible for these actions. According to DOD, Northern Command suspended its follow-up on these recommended actions once a substantial portion of the total recommended actions had been completed or were on track for completion and further follow-up on the remaining actions appeared unlikely to result in additional progress. The department indicated that Northern Command would now assess the utility of completing outstanding actions. In our report, we discuss the fact that Northern Command did not have implementation plans or other documentation to assess the extent to which the responsible organizations have implemented the recommended actions. Given that (1) these actions were recommended to address identified gaps in the department's ability to conduct civil support and homeland defense missions and (2) not taking actions to close these gaps may present significant operational risks to DOD, we continue to believe that assessing whether the recommended actions related to maritime homeland defense capability gaps have been fully implemented would be an important step in minimizing risk to such operations. If—as indicated by DOD's response—Northern Command assesses the utility of completing actions identified by the Joint Requirements Oversight Council and fully assesses progress toward those actions, that would satisfy our recommendation. DOD's written comments are reprinted in their entirety in appendix II. The Department of Homeland Security also provided written comments on the draft in which the department highlighted some of its continuing efforts to improve the awareness of and response to maritime-related threats in coordination with DOD and other interagency partners. These comments are reprinted in their entirety in appendix III. DOD and the Department of Homeland Security also provided separate technical comments, which we have incorporated into the report where appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of Homeland Security, and other interested parties. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or dagostinod@gao.gov.
Contact information for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To determine the extent to which the Department of Defense (DOD) has conducted maritime homeland defense planning, we examined DOD's Strategy for Homeland Defense and Civil Support as well as joint doctrine on contingency planning, operational exercises, and the execution of maritime homeland defense operations. We also interviewed officials of the Office of the Under Secretary of Defense for Policy, the Joint Staff, U.S. Joint Forces Command, North American Aerospace Defense Command/U.S. Northern Command, U.S. Fleet Forces Command, and the U.S. Coast Guard. Further, we received written responses from U.S. Pacific Command and U.S. Pacific Fleet related to maritime homeland defense planning efforts in the Pacific Command area of responsibility. For the purposes of this report, we focused on the extent to which required maritime homeland defense planning documents had been developed by Northern Command and other DOD organizations. We compared these planning documents to joint doctrine and other DOD planning guidance. To assess the extent to which DOD has identified and addressed maritime homeland defense capability gaps, we analyzed maritime homeland defense-related gaps identified in DOD's Homeland Defense and Civil Support Capabilities-Based Assessment and a 2009 DOD Joint Requirements Oversight Council Memorandum on the assessment. We also interviewed officials in the Office of the Under Secretary of Defense for Policy, the Joint Staff, and Northern Command to discuss the maritime homeland defense-related components of the study and the status of actions taken to address relevant capability gaps. To evaluate progress DOD has made with its interagency partners in addressing information sharing challenges related to maritime domain awareness, we obtained and analyzed relevant national, interagency, and DOD-level documentation—such as National Security Presidential Directive-41/Homeland Security Presidential Directive-13, Maritime Security Policy; the National Strategy for Maritime Security; the National Plan to Achieve Maritime Domain Awareness; the Maritime Domain Awareness Interagency Solutions Analysis Current State Report; and the 2010 assessment of maritime domain awareness plans conducted by the DOD Executive Agent for Maritime Domain Awareness. Given our previous work on DOD's management of maritime domain awareness, we relied on, and updated where available, information on identified capability gaps in DOD's information sharing and situational awareness efforts. In addition, we interviewed officials from the following DOD components and interagency partners to discuss these capability gaps as well as other issues related to maritime domain awareness information sharing: the Office of the DOD Executive Agent for Maritime Domain Awareness; the Office of the Assistant Secretary of Defense for Networks and Information Integration/DOD Chief Information Officer; the Office of the Under Secretary of Defense for Policy; the Joint Staff; the combatant commands (North American Aerospace Defense Command/U.S. Northern Command, U.S. Pacific Command, and U.S. Strategic Command); the U.S. Department of the Navy (Office of the Chief of Naval Operations, Office of the Chief Information Officer, U.S. Pacific Fleet, and U.S. Fleet Forces Command); the Global Maritime Operational Threat Response Coordination Center; and the National Maritime Domain Awareness Coordination Office.
We conducted this performance audit from August 2010 through June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Joseph Kirschbaum (Assistant Director), Alisa Beyninson, Christy Bilardo, John Dell'Osso, Gina Flacco, Brent Helt, Joanne Landesman, Katherine Lenane, Gregory Marchand, and Kendal Robinson made key contributions to this report. Intelligence, Surveillance, and Reconnaissance: DOD Needs a Strategic, Risk-Based Approach to Enhance Its Maritime Domain Awareness. GAO-11-621. Washington, D.C.: June 20, 2011. Homeland Defense: DOD Needs to Take Actions to Enhance Interagency Coordination for Its Homeland Defense and Civil Support Missions. GAO-10-364. Washington, D.C.: March 30, 2010. Homeland Defense: U.S. Northern Command Has a Strong Exercise Program, but Involvement of Interagency Partners and States Can Be Improved. GAO-09-849. Washington, D.C.: September 9, 2009. Maritime Security: Vessel Tracking Systems Provide Key Information, but the Need for Duplicate Data Should Be Reviewed. GAO-09-337. Washington, D.C.: March 17, 2009. Maritime Security: National Strategy and Supporting Plans Were Generally Well-Developed and Are Being Implemented. GAO-08-672. Washington, D.C.: June 20, 2008. Homeland Defense: U.S. Northern Command Has Made Progress but Needs to Address Force Allocation, Readiness Tracking Gaps, and Other Issues. GAO-08-251. Washington, D.C.: April 16, 2008. Homeland Defense: DOD Needs to Assess the Structure of U.S. Forces for Domestic Military Missions. GAO-03-670. Washington, D.C.: July 11, 2003.

Recent events, such as the seaborne terrorist attack on Mumbai in 2008 and the pirate attack on the Quest in February 2011, highlight maritime threats to the United States. The maritime domain presents a range of potential security threats--including naval forces of adversary nations, piracy, and the use of vessels to smuggle people, drugs, and weapons--which could harm the United States and its interests. The Department of Defense (DOD) has also identified homeland defense as one of its highest priorities. GAO was asked to determine the extent to which DOD has (1) planned to conduct maritime homeland defense operations, (2) identified and addressed capability gaps in maritime homeland defense, and (3) made progress with interagency partners, such as the U.S. Coast Guard, in addressing information sharing challenges related to maritime domain awareness. To conduct this work, GAO examined national and DOD guidance and interviewed officials from DOD, the Joint Staff, combatant commands, the military services, and others. U.S. Northern Command, as the command responsible for homeland defense for the continental United States, has undertaken a number of homeland defense planning efforts, but it does not have a key detailed supporting plan for responding to maritime threats. Northern Command requires supporting DOD organizations to develop plans to support its homeland defense plan. The current, 2008 version of the plan requires a supporting plan from the commander of U.S. Fleet Forces Command, who is designated as the joint force maritime component commander for Northern Command.
Fleet Forces Command has undertaken some planning efforts but has not developed a supporting plan. Because the Northern Command homeland defense plan is a concept plan, which is less detailed than an operation plan, and because the command does not have naval forces routinely under its operational control, supporting plans provide critical details on how operations are to be conducted and allow Northern Command to assess the extent to which subordinate commands are prepared to support the maritime homeland defense mission. DOD has identified maritime homeland defense capability gaps and determined actions necessary to address them, but it has not adequately assessed the extent to which those actions have been implemented. One way DOD identifies capability gaps that affect mission execution is through capabilities-based assessments. A 2008 assessment identified three capability gaps specific to the maritime homeland defense mission--such as engaging and defeating maritime threats--and eight other gaps that affect a number of missions, including maritime homeland defense--such as information management and sharing. The Joint Requirements Oversight Council reviewed the findings and requested relevant DOD organizations to take action to close identified gaps. However, the responsible organizations did not provide implementation plans or other documentation of actions taken or under way to address these gaps. Without documentation on progress in implementing recommended actions, Northern Command cannot be assured that it has full and accurate information about the extent to which other organizations have taken action to close these gaps. National and DOD documents have identified challenges to the sharing of maritime domain information, such as international coordination, policy and processes, technology, legal restrictions, and cultural barriers. DOD and interagency partners, such as the Coast Guard, have efforts under way to address many of these challenges. One effort, the interagency National Maritime Domain Awareness Architecture, is intended to improve data management by establishing data standards, providing common terminology, and developing supporting technology. It is intended to leverage the interagency National Information Exchange Model (an effort currently under way to establish data standards), facilitate the accessibility of common data across the maritime community, and allow stakeholders to focus on configuring the display of information to best meet their specific missions, whether through data analysis capabilities or geographic displays. GAO recommends that Fleet Forces Command develop a plan to support Northern Command and that responsible DOD organizations provide Northern Command with implementation plans for the actions identified by the Joint Requirements Oversight Council. DOD partially concurred and agreed to take actions on each recommendation.
Mercury enters the environment through natural and man-made sources, including volcanoes, chemical manufacturing, and coal combustion, and poses ecological threats when it enters water bodies, where small aquatic organisms convert it into its highly toxic form—methylmercury. This form of mercury may then migrate up the food chain as predator species consume the smaller organisms. Through a process known as bioaccumulation, predator species may develop high mercury concentrations in their tissue as they take in more mercury than they can metabolize or excrete. Fish contaminated with methylmercury may pose health threats to those who rely on fish as part of their diet. According to EPA, mercury harms fetuses and can cause neurological disorders in children, including poor performance on behavioral tests, such as those measuring attention, motor and language skills, and visual-spatial abilities (such as drawing). In addition, populations that consume larger amounts of fish than the general population—including subsistence fishers, as well as certain Native Americans and Southeast Asian Americans—may face higher risk of exposure to contaminated fish, according to EPA. The Food and Drug Administration (FDA) and EPA recommend that expectant mothers, young children, and nursing mothers avoid eating swordfish, king mackerel, shark, and tilefish and limit consumption of other potentially contaminated fish, such as tuna. These agencies also recommend checking local advisories for recreationally caught freshwater and saltwater fish. According to EPA, 45 states issued mercury advisories in 2003 (the most recent data available). Because mercury released to the atmosphere can circulate for long periods of time and be transported thousands of miles before it gets deposited, it is difficult to link mercury accumulation in the food chain with sources of mercury emissions. EPA estimates that about half of the mercury deposited in the United States is emitted by sources within this country. In 1999, the most recent year for which data were available, EPA estimated that man-made sources within the United States emitted about 115 tons of mercury. Of these emissions, the agency estimates that about 48 tons, or 42 percent of the total, came from coal-fired power plants. While power plants are not required to limit their mercury emissions, EPA estimates that the plants currently capture about 27 tons of mercury each year, primarily through the use of controls for other pollutants, such as those used to control nitrogen oxides, particles, and sulfur dioxide. EPA estimates that power plants would otherwise emit about 75 tons of mercury per year. The Clean Air Act (CAA) Amendments of 1990 required EPA to study the environmental and health effects of hazardous air pollutants from coal-fired power plants and determine whether it was "appropriate and necessary" to regulate these pollutants. In 2000, EPA determined that mercury was a hazardous air pollutant and that it was appropriate and necessary to regulate mercury using the technology-based option. Under the act's hazardous air pollutant provisions, the emissions limit had to be at least as strict as the average emissions of the facilities with the best-controlled emissions. Because power plants did not already use controls specifically intended to control mercury, EPA analyzed the effectiveness of controls for other pollutants that capture mercury as a side benefit.
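A quick arithmetic check, worked from the figures above (EPA estimates, in tons of mercury per year), shows how they fit together:

```latex
\begin{align*}
\text{Emitted by power plants} &= \text{potential} - \text{captured} = 75 - 27 = 48,\\
\text{Power plant share of U.S. emissions} &= \frac{48}{115} \approx 0.42 \quad (\text{about 42 percent}).
\end{align*}
```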
This effort culminated in EPA’s January 2004 proposal for a technology-based option that would reduce mercury emissions from a current level of 48 tons per year to a projected 34 tons per year (a 29 percent reduction) by 2008. At the same time, however, EPA proposed an alternate policy option that would limit mercury emissions in two phases: to 15 tons in 2018 (a 69 percent reduction from current levels), preceded by an as-yet-unspecified interim cap starting in 2010. The alternate policy option, which would rely on a cap-and-trade system similar to that currently used to control emissions that cause acid rain, differs from the technology-based option in that it would not require each facility to meet emission standards based on control technology. Instead, EPA would set a nationwide “cap” for mercury emissions from coal-fired power plants and then distribute tradable emissions allowances that represent a certain amount of the total cap. At the end of each year, each power plant would have to hold sufficient allowances for the mercury it emitted that year. Plants that reduced their emissions below the levels represented by their allowances could sell their extra allowances to other plants. In addition to its proposed mercury rule, EPA has proposed another rule for power plants, the Clean Air Interstate Rule, which is intended to reduce emissions of nitrogen oxides and sulfur dioxide beginning in 2010. EPA expects that this proposed rule would result in the installation of pollution controls that capture mercury as a side benefit, and thereby decrease mercury emissions to 34 tons per year by 2010, the same level of reduction as the technology-based option. Under the cap-and-trade option, EPA has indicated that it may establish a mercury cap for 2010 equal to the control level expected through the interstate rule. EPA postponed its decision on finalizing the interstate rule until March 2005 while the agency awaits congressional action on pending legislation, known as the Clear Skies Act, that would establish emissions caps and an allowance system similar to those in the interstate rule and the cap-and-trade mercury control option. EPA has stated a preference for achieving reductions of mercury, nitrogen oxides, and sulfur dioxide simultaneously through legislation rather than regulations. Responsibility for analyzing the economic impacts—including costs to industry and expected public health effects—of air pollution control policies rests with EPA’s Office of Air and Radiation. EPA provided documentation of its economic analysis for the proposed mercury rule in three primary documents, some of which refer readers to additional documentation on the agency’s Web site or in the public rule-making docket. According to EPA, the agency did not have time to assemble its economic assessment of the proposed rule in a single document prior to issuing the proposed rule. To assist in estimating costs that air quality regulations will impose on the power industry, EPA uses the Integrated Planning Model (IPM), which estimates how power plants would respond to various environmental policies. The assumptions underlying this model, such as those regarding fuel costs, the costs of pollution controls, and future electricity demand, can affect the modeling results, according to EPA officials responsible for the modeling. 
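To make the allowance mechanics described above concrete, the following sketch models the end-of-year reconciliation under a cap-and-trade system. The plant names, allowance allocations, and emissions figures are hypothetical, invented only for illustration:

```python
# Minimal sketch of the end-of-year allowance reconciliation described
# above. Plant names, allocations, and emissions are hypothetical.
cap_allocation = {"plant_a": 1.0, "plant_b": 1.0}    # allowances held (tons)
actual_emissions = {"plant_a": 0.6, "plant_b": 1.3}  # mercury emitted (tons)

for plant, allowances in cap_allocation.items():
    balance = allowances - actual_emissions[plant]
    if balance >= 0:
        print(f"{plant}: {balance:.1f} tons of surplus allowances to sell")
    else:
        print(f"{plant}: short {-balance:.1f} tons; must buy allowances to comply")

# Trading redistributes, but never raises, the nationwide cap.
print(f"nationwide cap: {sum(cap_allocation.values()):.1f} tons")
```

The design point the sketch illustrates is that the nationwide cap, not any single plant's limit, bounds total emissions; trading determines only where the reductions occur.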
We identified four major shortcomings in the economic analysis underlying EPA's proposed mercury rule that limit its usefulness for informing decision makers and the public about the economic trade-offs of the two options. First, EPA did not consistently analyze each of its two mercury policy options or provide estimates of the total costs and benefits of the two options, making it difficult to ascertain which policy option would provide the greatest net benefits. Second, EPA did not document some of its analysis or provide consistent information on the anticipated economic effects of different mercury control levels under the two options. Third, the agency did not estimate the economic benefits directly related to decreased mercury emissions. Finally, the agency did not analyze some of the key uncertainties underlying its cost-and-benefit estimates. EPA's estimates of the costs and benefits of its two proposed policy options are not comparable because the agency used inconsistent approaches in analyzing the two options. As shown in table 1, EPA analyzed the technology-based option alone, while it analyzed the cap-and-trade option in combination with the interstate rule. In analyzing the technology-based option by itself, EPA estimated the rule would cost about $2 billion annually, and achieve benefits of $15 billion or more annually, yielding net benefits (benefits minus costs) of $13 billion or more annually. In contrast, EPA analyzed the effects of the cap-and-trade option in combination with the proposed interstate rule by combining the costs and benefits of the two proposed rules without separately identifying and documenting those associated with the cap-and-trade option alone. This analysis found that the two proposed rules together would impose costs of $3 billion to $5 billion or more annually, while generating annual benefits of $58 billion to $73 billion or more and annual net benefits of $55 billion to $68 billion or more. Because the estimates for the two options are not comparable, however, it is not clear which option would provide the greatest net benefits. This is particularly important in light of EPA's decision to delay finalization of the interstate rule. EPA officials responsible for the rule acknowledged the lack of comparability in the agency's analyses of the two proposed options. These officials said the agency analyzed the cap-and-trade option alongside the interstate rule because it viewed these two proposed policies as complementary. They also said it would have been useful to analyze the technology-based option alongside the interstate rule, but the agency did not do so because of time constraints. Nonetheless, it is important for EPA to consistently analyze each policy option and provide decision makers with comparable estimates of net economic benefits. The comparability of EPA's analysis is further limited because the agency did not provide consistent information on the total costs and benefits of the two options over their entire implementation periods. Specifically, EPA provided cost-and-benefit estimates for 2010, rather than estimates of the total costs and benefits over the entire implementation period. This is important because the economic impact of the policy options could vary from year to year and because the two options have different implementation timelines. For example, under the proposed cap-and-trade option, a second level of mercury reductions would take effect in 2018, which would likely generate additional costs and benefits at that time.
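One way to see why single-year snapshots can mislead when options phase in on different schedules is to compare a snapshot for 2010 with a discounted total over the full implementation period. The sketch below uses invented net-benefit streams and an assumed 3 percent discount rate; none of these figures are EPA's estimates:

```python
# Hypothetical sketch: a 2010 snapshot versus a discounted total over the
# full implementation period. All net-benefit figures ($B/yr) and the
# 3 percent discount rate are invented; they are not EPA's estimates.
def present_value(flows, base_year=2010, rate=0.03):
    """Discount a {year: net_benefit} stream back to the base year."""
    return sum(v / (1 + rate) ** (y - base_year) for y, v in flows.items())

# Option 1 is fully implemented by 2010; option 2 starts lower but adds a
# tighter second phase in 2018, as the cap-and-trade option would.
option1 = {y: 13.0 for y in range(2010, 2026)}
option2 = {y: (9.0 if y < 2018 else 30.0) for y in range(2010, 2026)}

print(f"2010 snapshot ($B): option 1 = {option1[2010]}, option 2 = {option2[2010]}")
print(f"full-period PV ($B): option 1 = {present_value(option1):.0f}, "
      f"option 2 = {present_value(option2):.0f}")
```

With these invented figures, the 2010 snapshot favors the first option while the full-period total favors the second once its 2018 phase is counted, which is precisely the comparability problem described above.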
Thus, the estimates EPA provided for 2010 did not fully account for the expected costs and benefits over the implementation period for this option. In contrast, EPA officials said that its estimate of the technology-based option in 2010 reflects the full implementation cost because its analysis assumes that power plants would achieve compliance with the technology-based option by that date. However, without estimates of the total value of benefits and costs of each option over the entire implementation period, it is difficult to ascertain which option would generate the greatest net benefits. The economic analysis underlying the proposed mercury rule does not consistently reflect OMB’s guidance to agencies in terms of adhering to the principles of full disclosure and transparency when analyzing the economic effects of regulations. Specifically, we identified two primary cases where EPA’s analysis does not adhere to these principles, further limiting the usefulness of the agency’s analysis in decision making and diminishing the transparency of the analysis to the public. First, while EPA provides substantial information on its analysis of the technology-based option in the documents supporting its economic analysis of the proposed rule, the agency does not do so for the cap-and-trade option. For the technology-based option, EPA provides documents that describe its findings. In contrast, the agency provides only a summary of its findings for the cap-and-trade option in the rule’s preamble and refers to its findings as “rough estimates” that are based on consideration of available analysis of the interstate rule, the technology-based option, and the proposed Clear Skies legislation. EPA does not describe specifically how the agency used this analysis of other proposed rules and legislation to estimate the costs and benefits of the cap-and-trade option, and it does not identify the key analytical assumptions underlying its cost-and-benefit estimates. This lack of documentation and transparency leaves decision makers and the public with limited information on EPA’s analysis of the cap-and-trade option. Second, EPA officials responsible for the economic analysis told us that they analyzed two variations of the proposed technology-based option with more stringent mercury limits than the option included in the proposal, but the agency did not include this analysis in the documents supporting its economic analysis or in the public rule-making docket. This is inconsistent with EPA’s analysis of the cap-and-trade option, in which it provided a range of costs and benefits associated with different levels of stringency. This omission is also at odds with OMB guidance directing agencies to conduct their economic analysis in accordance with the principles of full disclosure and transparency. With respect to the analysis of the technology-based scenarios that the agency did not make publicly available, EPA officials said the additional modeling showed that the more stringent scenarios were not as cost-effective as the proposed technology-based option. However, EPA did not estimate the benefits of these two scenarios, thereby precluding a comparison of the net economic benefits under the proposed mercury policy options. As a result, it is unclear whether the reduction levels and implementation timelines under either proposed option represent the regulatory scenario that would provide the greatest net benefits. 
In January 2005, EPA officials responsible for the mercury rule said the agency does not have an obligation to analyze and document every control scenario. We recognize that OMB guidance gives agencies latitude in determining the number of regulatory alternatives to consider and that agencies must balance the thoroughness of their analysis with the practical limits of their ability to carry out analysis. Nonetheless, providing information on the costs and benefits of even a limited range of control scenarios under both proposed options would help decision makers and the public in assessing how different levels of stringency would affect overall estimates of costs and benefits. In December 2004, EPA solicited public comment on additional economic analyses the agency received from commenters on the January 2004 proposed rule, including some that relied on models, assumptions, and levels of stringency that were different from the scenarios EPA analyzed. Although EPA’s analysis states that a mercury regulation would generate a variety of benefits, the agency did not estimate in monetary terms all of the benefits expected from reducing mercury emissions. Most notably, EPA did not quantify the human health benefits of decreased exposure to mercury, such as reduced incidence of developmental delays, learning disabilities, and neurological disorders. Instead, EPA estimated only some of the health benefits it anticipates would occur from decreased exposure to fine particles and discussed other impacts qualitatively. Because the two options in the proposed rule differed significantly in both the amount of mercury emission reductions and the time frames in which these reductions would occur, the lack of estimates of the mercury-specific benefits of each policy option represents a significant limitation of EPA’s economic analysis. That is, to the extent that each proposed option would yield measurable mercury-specific health benefits, EPA’s analysis may underestimate the total expected benefits of both options. Moreover, because the options may yield different mercury-related health benefits, the lack of estimates of these benefits makes it difficult to weigh the relative merits of the two proposed options. According to EPA, its analysis did not estimate key mercury-related health benefits because of technical, time, and resource limitations. Specifically, agency officials responsible for the analysis said the agency did not have a method for determining the extent to which mercury reductions from power plants would translate into decreased incidence of mercury-related health problems. According to EPA, estimating these benefits involves a number of complex chemical, physical, and biological processes, as well as a wide variety of human behaviors, such as fish consumption practices. Although EPA did not estimate the expected human health and other benefits of decreased exposure to mercury emissions in the analysis supporting the proposed rule, the agency did list the various human health and other benefits it expects would stem from a mercury rule. Importantly, in December 2004, the agency announced that it was revising its benefit estimates and solicited public comment on a proposed method for estimating mercury-specific benefits. 
According to EPA, this method would focus on (1) quantifying projected emissions from coal-fired power plants relative to other sources, (2) modeling the dispersion and deposition of mercury, (3) modeling the link between changes in mercury deposition and changes in the methylmercury concentrations in fish, (4) assessing the methylmercury exposure from consuming fish, and (5) assessing how reductions in methylmercury exposure affect human health. According to EPA officials responsible for analyzing the proposed rule's effects, the agency will consider public comments on this approach and revise its analysis before finalizing a rule. In January 2005, EPA officials responsible for the analysis agreed that providing monetary estimates of mercury-specific benefits would enhance their analysis, and said that the agency might have sufficient information to estimate some, but not all, of the expected human health benefits of reducing mercury emissions. OMB guidance under Executive Order 12866 stipulates that agencies should analyze and present information on the uncertainties associated with their cost-and-benefit estimates. According to EPA officials responsible for the economic analysis, the agency's cost model is generally sensitive to assumptions about future electricity demand and fuel prices, as well as the availability, cost, and performance of pollution controls. Because these assumptions involve long-term projections, they also involve a substantial amount of uncertainty. EPA conducted a limited analysis of how uncertainty in natural gas prices and electricity demand growth would affect its cost estimates by examining the impact of alternative projections, and it concluded that the estimates were not particularly sensitive to changes in these variables. However, EPA did not assess how the distribution of estimated benefits and costs would differ given changes in its assumptions about the availability, cost, and performance of mercury control technologies, even though the agency believes that these assumptions could affect its economic modeling. Furthermore, EPA's December 2004 notice for additional public comment on the mercury proposal highlighted the uncertainty surrounding the ability of its computer model to estimate mercury control costs, primarily because of the power industry's limited experience with implementing mercury controls. This notice solicited public comment on, among other things, the assumptions in its economic modeling related to the cost, availability, and performance of mercury control technologies. According to senior EPA officials responsible for analyzing the mercury proposal, changes in these assumptions could have a sizable impact on the agency's cost-and-benefit estimates. This acknowledgment of key uncertainties in its economic modeling highlights the need to determine how they could affect the overall cost-and-benefit estimates for each proposed option. In addition, EPA did not analyze the key uncertainties surrounding its benefit estimates. For example, EPA used economic data from its earlier assessment of the proposed Clear Skies legislation to approximate the impact of emissions reductions that would be expected under the mercury rule. According to EPA, the agency used this approach, referred to as a "benefits-transfer approach," because time and resource constraints prevented it from performing new research to measure the value of health impacts under a mercury rule.
OMB’s September 2003 guidance, which applies to economically significant final rules issued after January 1, 2005, states that although such an approach can provide a quick and low-cost means of obtaining monetary values, the method may be characterized by uncertainty and potential biases of unknown magnitude and should be treated as a last-resort option. Furthermore, EPA’s economic analysis states that the benefits analysis has many sources of uncertainty, including those associated with emissions data, air quality modeling, and the effect of emissions on human health. The agency did not, however, formally assess the impact of these uncertainties. In January 2005, EPA officials responsible for the proposed mercury rule acknowledged this limited analysis of key uncertainties and said that the agency plans to conduct a more formal assessment of these uncertainties prior to issuing a final rule, as directed by OMB’s September 2003 guidance. This guidance directs agencies to assess the sources of uncertainty in their regulatory analyses and the way in which cost-and-benefit estimates may be affected under plausible assumptions. Furthermore, in cases where the annual economic effects total $1 billion or more, the guidance states that agencies should provide a formal quantitative assessment of the key uncertainties about costs and benefits. Because EPA estimates that regulating mercury emissions would have significant economic impacts totaling billions of dollars per year, it is important for the agency to have a credible basis for selecting a policy that will maximize the return on this investment. However, EPA’s initial economic analysis of the two policies it is considering has a number of shortcomings. Specifically, because EPA did not analyze and document the economic effects of each policy option by itself—as well as in combination with the interstate rule—over their varying full implementation periods, the results cannot be meaningfully compared. In addition, EPA did not document the analysis supporting the cap-and-trade option or provide consistent information on the economic impacts of different mercury control levels for the two options, limiting the transparency and usefulness of the analysis. Further, without monetary estimates of the human health benefits of mercury emissions reductions—a primary purpose of a mercury regulation—over the full implementation period of each option or, at a minimum, a qualitative comparison of these benefits, EPA’s analysis does not provide decision makers with a strong basis for comparing the net benefits under each option. Finally, because EPA did not analyze some of the key analytical uncertainties that could affect its estimates of net benefits, the agency could enhance its economic analysis by further evaluating these uncertainties and how they could affect its overall findings. Unless EPA conducts and documents further economic analysis, decision makers and the public may lack assurance that the agency has evaluated the economic trade-offs of each option and taken the appropriate steps to identify which mercury control option would provide the greatest net benefits. 
To improve the usefulness of the agency's economic analysis for informing decision makers and the public, and to help ensure consistency with OMB guidance for economic analysis, we recommend that, as the agency revises its economic analysis prior to selecting a mercury control option, the EPA Administrator take the following four actions:
1. Analyze and fully document the economic effects of each policy option by itself, as well as in combination with the interstate rule, over their full implementation periods.
2. Ensure that the agency documents its analysis supporting the final rule and consistently analyzes the effect that different levels of mercury control would have on cost-and-benefit estimates under each policy option.
3. Include monetary estimates, where possible, of the human health benefits of reductions in mercury emissions from power plants or, at a minimum, provide qualitative information on how these benefits are likely to compare under the two options over a consistent time frame, reflecting full implementation of both options.
4. Further analyze uncertainties surrounding estimates of costs and benefits, as directed by OMB guidance, and evaluate how these uncertainties could affect overall estimates of the rule's impacts.

We provided EPA with a draft of this report for review and comment. In commenting on the draft report, the Assistant Administrator for Air and Radiation said that, prior to issuing a final mercury regulation by March 15, 2005, EPA will conduct additional analysis that will largely address the findings and recommendations identified in our report. EPA's letter is included as appendix II. As agreed with your offices, unless you publicly announce the contents of this letter earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies of the report to the EPA Administrator and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841 or stephensonj@gao.gov. Key contributors to this report are listed in appendix III.

Congressional requesters asked us to assess the usefulness of the economic analysis underlying EPA's proposed mercury rule for decision making. To respond to this objective, we, among other things, reviewed EPA's analysis of the proposed rule's economic effects using standard economic principles, OMB guidance, Executive Order 12866, and the Unfunded Mandates Reform Act of 1995. We also discussed the analysis with senior officials within EPA's Office of Air and Radiation responsible for developing the proposed rule and analyzing its economic effects. In doing this work, we did not independently estimate the costs or benefits of the mercury control options, evaluate EPA's process for developing the options, or assess legal issues surrounding the extent to which the options comply with the provisions of the Clean Air Act or its amendments. We took several steps to assess the validity and reliability of computer data underlying EPA's estimates of economic impacts discussed in our findings, including reviewing the documentation and assumptions underlying EPA's economic model and assessing the agency's process for ensuring that the model data are sufficient, competent, and relevant. We also discussed these assumptions and procedures with agency officials responsible for the modeling data.
(For the background section of this report, we obtained data on mercury emissions. Because they are used for background purposes only, we did not assess their reliability.) We assessed compliance with internal controls related to the availability of timely, relevant, and reliable information. Our concerns about EPA data and analysis are discussed in the body of this report. We performed our work between May 2004 and February 2005 in accordance with generally accepted government auditing standards. In addition to the individuals named above, Tim Guinane and Michael Hix made key contributions to this report. Kate Cardamone, Jessica Fast, Cynthia Norris, Judy Pagano, Janice Poling, and Amy Webbink also made important contributions.

Mercury is a toxic element that can cause neurological disorders in children. In January 2004, the Environmental Protection Agency (EPA) proposed two options for limiting mercury from power plants, and plans to finalize a rule in March 2005. The first would require each plant to meet emissions standards reflecting the application of control technology (the technology-based option), while the second would enable plants to either reduce emissions or buy excess credits from other plants (the cap-and-trade option). EPA received over 680,000 written comments on the proposal. EPA is directed by statute and executive order to analyze the costs and benefits of proposed rules, and the agency summarized its analysis underlying the two options in the proposal. In this context, GAO was asked to assess the usefulness of EPA's economic analysis for decision making. In doing so, GAO neither independently estimated the options' costs and benefits nor evaluated the process for developing the options or their consistency with the Clean Air Act, as amended. GAO identified four major shortcomings in the economic analysis underlying EPA's proposed mercury control options that limit its usefulness for informing decision makers about the economic trade-offs of the different policy options. First, while Office of Management and Budget (OMB) guidance directs agencies to identify a policy that produces the greatest net benefits, EPA's analysis is of limited use in doing so because the agency did not consistently analyze the options or provide an estimate of the total costs and benefits of each option. For example, EPA analyzed the effects of the technology-based option by itself, but analyzed the effects of the cap-and-trade option alongside those of another proposed rule affecting power plants, the Clean Air Interstate Rule (the interstate rule), without separately identifying the effects of the cap-and-trade option. As a result, EPA's estimates are not comparable and are of limited use for assessing economic trade-offs. EPA officials said they analyzed the cap-and-trade option alongside the interstate rule because the agency views the two proposed rules as complementary. Nonetheless, to provide comparable estimates, EPA would have to analyze each option alone and in combination with the interstate rule. Second, EPA did not document some of its analysis or provide information on how changes in the proposed level of mercury control would affect the cost-and-benefit estimates for the technology-based option, as it did for the cap-and-trade option. Third, EPA did not estimate the value of the health benefits directly related to decreased mercury emissions and instead estimated only some secondary benefits, such as decreased exposure to harmful fine particles.
However, EPA has asked for comments on a methodology to estimate the benefits directly related to mercury. Fourth, EPA did not analyze some of the key uncertainties underlying its cost-and-benefit estimates.
With the increased focus on health care fraud and abuse in recent years, the government has identified widespread improper billing by Medicare providers. While in the past the government might have simply sought repayment, it has begun to invoke the penalties and damages prescribed in the False Claims Act in some cases. The False Claims Act has become one of the government's primary enforcement tools because it allows recovery of losses to federal health care programs, and the damages and penalty provisions provide a deterrent effect. The act provides that anyone who knowingly submits false claims to the government is liable for three times the amount of damages plus a mandatory penalty of $5,000 to $10,000 for each false claim. The term "knowingly" is broadly defined to mean that a person (1) has actual knowledge of the false claim, (2) acts in deliberate ignorance of the truth or falsity of the information, or (3) acts in reckless disregard for the truth or falsity of the information. In the health care setting, where providers submit thousands of claims each year, the potential damages and penalties provided under the False Claims Act can be quite large. The widespread application of the False Claims Act to improper Medicare billings has heightened providers' attention to the importance of compliance with Medicare program requirements. In February 1997, HHS-OIG released its first guidance for compliance programs in the health care industry, the Model Compliance Plan for Clinical Laboratories. Since then, HHS-OIG has issued three additional provider-specific compliance guides and revised the laboratory model. Through these guides HHS-OIG encourages providers to improve and enhance their internal controls so that their billing practices are in compliance with Medicare's rules and regulations. However, use of the guides remains voluntary. Table 1 shows the current HHS-OIG compliance guides and the dates they were issued. All of the HHS-OIG compliance guides provide for seven components of comprehensive compliance programs:
1. Written policies and procedures, including standards of conduct.
2. Designation of a compliance officer responsible for operating and monitoring the compliance program.
3. Regular employee education and training programs.
4. A reporting mechanism to receive complaints anonymously.
5. Corrective action policies and procedures, including disciplinary policies, to respond to allegations of noncompliance.
6. Periodic audits to monitor compliance.
7. Investigation and correction of identified systemic problems, including policies addressing the nonemployment of sanctioned individuals.
Each of the compliance guides also highlights what HHS-OIG calls "risk areas," or areas of special concern, which HHS-OIG has identified through its investigative and audit activities, and which it believes the internal policies and procedures of compliance programs should address. While the risk areas are generally specific to a type of provider, several of the risk areas are included in more than one guide. Risk areas identified by HHS-OIG include potential Medicare billing infractions such as billing for items or services not actually provided and billing for a more expensive item or service than provided. HHS-OIG cites other Medicare rules and regulations as risk areas as well, including the Stark physician self-referral law and the antikickback statute.
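To make the scale of that liability concrete, the sketch below applies the act's formula, treble damages plus a $5,000 to $10,000 penalty per claim, to a hypothetical provider; the overpayment amount and claim count are invented for illustration:

```python
# Illustration of the liability arithmetic described above: treble damages
# plus a $5,000 to $10,000 penalty per false claim. The overpayment amount
# and claim count below are hypothetical.
def fca_exposure(overpayment, false_claims, penalty_per_claim):
    """Potential False Claims Act liability for a given set of claims."""
    return 3 * overpayment + penalty_per_claim * false_claims

# A provider whose 1,000 false claims produced $500,000 in overpayments:
low = fca_exposure(500_000, 1_000, 5_000)
high = fca_exposure(500_000, 1_000, 10_000)
print(f"potential liability: ${low:,} to ${high:,}")
# Prints $6,500,000 to $11,500,000, versus simple repayment of $500,000.
```

Because the per-claim penalty applies to each false claim regardless of its dollar value, high claim volumes, which are routine in health care billing, dominate the exposure.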
HHS-OIG believes that the compliance guides have significantly advanced the cause of corporate compliance with federal health care program requirements and is planning to issue guides for other health care providers serving Medicare beneficiaries. These include durable medical equipment companies, Medicare+Choice organizations offering coordinated care plans, nursing homes, and hospices. (The regulations implementing the Medicare+Choice program require Medicare+Choice organizations to implement compliance plans.) According to detailed guidance on the use of the False Claims Act in health care matters issued by the Deputy Attorney General in June 1998, Justice attorneys are to consider providers' compliance programs, among other things, in determining whether a provider "knowingly" submitted a false claim. While this guidance primarily addresses national health care initiatives, such as the 72-hour Window Project, it also directs Justice attorneys to consider prior remedial efforts such as self-disclosure of potential wrongdoing. We recently issued the first of two legislatively mandated reports on Justice efforts to implement its new False Claims Act guidance. According to the results of two hospital surveys, our interviews with observers in the health care field, and our study of 25 hospitals, it is apparent that many hospitals are implementing formal compliance programs. However, the actual prevalence of such programs is difficult to determine precisely. Hospitals are often driven in their compliance efforts, at least in part, by the requirements of agreements with the government resolving allegations of provider misconduct. Hospitals that agreed to implement compliance procedures to resolve billing or fraud issues told us they are implementing compliance programs that go well beyond the requirements of the agreements. Because their programs are relatively new, only a few of the hospitals in our study have completely implemented all of the policies and procedures that they have identified as being part of their compliance program. Medicare providers are generally not required to report on their compliance programs to federal agencies or other entities, so there are no readily available data on their prevalence. Even if providers were required to report this information, the task of measuring the prevalence and composition of compliance programs would still be complicated by several factors. Most important, the lack of an accepted definition of a compliance program would make any tabulation problematic. HHS-OIG's hospital compliance guide itself states that "there is no single 'best' hospital compliance program, given the diversity within the industry." In addition, determining whether the components of a compliance program have been meaningfully implemented is inherently subjective. For example, whether a provider is conducting billing audits is subject to interpretation. While two compliance programs may each call for a sampling of all claims, their sampling methodologies may differ significantly. Further, one provider may review past claims when a problem is identified, and another provider may audit only current claims. Despite these inherent measurement difficulties, there are indications that compliance programs are being implemented, in some fashion, by many hospitals.
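To illustrate the measurement problem noted above, the sketch below shows how two audit approaches that each "sample claims" can nonetheless review very different claim populations; the claim records and sample sizes are hypothetical:

```python
# Hypothetical sketch: two billing-audit sampling methodologies that both
# "sample claims" but review different populations. All data are invented.
import random

random.seed(1)
claims = [{"id": i, "period": "past" if i < 800 else "current"} for i in range(1000)]

# Methodology A: simple random sample of 30 claims from all periods.
sample_a = random.sample(claims, 30)

# Methodology B: sample 30 claims, but only from current claims.
current_claims = [c for c in claims if c["period"] == "current"]
sample_b = random.sample(current_claims, 30)

past_in_a = sum(1 for c in sample_a if c["period"] == "past")
past_in_b = sum(1 for c in sample_b if c["period"] == "past")
print(f"past claims reviewed: methodology A = {past_in_a}, methodology B = {past_in_b}")
# Methodology B never looks back at past claims, so the two audits can reach
# very different conclusions about the same provider's billing.
```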
We spoke with members of hospital groups, federal agency representatives, and other observers in the health care and compliance fields, who all said that compliance programs are increasingly prevalent. A few hospitals in our study told us that they believe compliance programs are becoming an industry standard. In addition, two recent hospital surveys indicate that compliance programs are being implemented. First, a February 1998 copyrighted survey by UHC (which has 84 academic health center members) found that 97 percent of the 64 respondents either had a compliance program in place or planned to implement one soon. Also, a recent survey of 4,300 hospitals by AHA found that 96 percent of the 1,902 respondents indicated that they have a formal compliance program in place or plan to implement one within the coming year. About 2,000 hospitals have agreed to implement certain compliance procedures, in some cases a full compliance program covering all Medicare risk areas, as part of an agreement with the government to settle billing issues under the False Claims Act. Nearly all of the 25 hospitals in our study had agreed to implement compliance procedures as part of a settlement agreement for at least some part of their operations. Seventeen of the hospitals in our study agreed to implement compliance procedures as part of a settlement under Justice's 72-Hour Window Project. The 72-Hour Window Project investigates whether hospitals have separately billed Medicare for outpatient services that are already covered by a Medicare inpatient payment, such as preadmission tests provided within 72 hours of admission. The compliance procedures required under this project include installing and maintaining computer systems to identify such outpatient services before the hospital bills Medicare, as well as training billing personnel on the 72-hour rule. These settlements do not cover any risk area other than the 72-hour rule, do not require ongoing monitoring, do not require the appointment of a compliance officer, and do not impose any obligations on the hospital to report any potential violations uncovered. At least 6 of the 25 hospitals in our study agreed to implement more comprehensive corporate integrity agreements (CIA) to settle charges of misconduct in their Medicare operations. A CIA is an agreement between a health care provider and HHS-OIG in conjunction with the settlement of a case alleging health care fraud or abuse. CIAs are generally specific to the provider and case, set requirements for a term of 3 to 5 years, and are a condition of the provider's continued participation in Medicare and other federal health care programs. While CIA requirements vary, they generally include (1) the appointment of a compliance officer; (2) mandatory compliance training; (3) internal and/or independent external reviews of specified risk areas, of the implementation of the agreement provisions, or of both; (4) notice to HHS-OIG of material violations when identified; (5) annual reporting to HHS-OIG; and (6) continuing CIA responsibilities after organizational changes such as mergers and acquisitions. If a provider fails to comply with the CIA, HHS-OIG reserves the right to exclude the provider from Medicare and other federal health care programs or, alternatively, impose monetary penalties.
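The 72-hour screening systems described above essentially compare outpatient service dates against inpatient admission dates. The sketch below is a minimal, hypothetical version of such a check; the field names and claim records are invented and do not reflect any particular hospital's system:

```python
# Sketch of the kind of screening described above: flag outpatient claims
# for services provided within 72 hours of an inpatient admission, since
# those services are already covered by the inpatient payment.
# Field names and claim records are hypothetical.
from datetime import datetime, timedelta

WINDOW = timedelta(hours=72)

def flag_window_claims(outpatient_claims, admissions):
    """Return outpatient claims that fall in the 72-hour preadmission window."""
    flagged = []
    for claim in outpatient_claims:
        for adm in admissions:
            if (claim["patient"] == adm["patient"]
                    and timedelta(0) <= adm["admitted"] - claim["service_date"] <= WINDOW):
                flagged.append(claim)
                break
    return flagged

admissions = [{"patient": "P001", "admitted": datetime(1998, 3, 10, 9, 0)}]
claims = [
    {"patient": "P001", "service_date": datetime(1998, 3, 9, 14, 0)},  # inside window
    {"patient": "P001", "service_date": datetime(1998, 3, 1, 14, 0)},  # outside window
]
print(flag_window_claims(claims, admissions))  # only the March 9 claim is flagged
```

A production system would presumably also match on the billing hospital and handle edge cases such as transfers, but the core test is the same 72-hour comparison.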
HHS-OIG has recently negotiated CIAs that require compliance procedures covering all laws, regulations, and guidelines relating to federal and state health care programs, not only those relevant to the allegations in the case. Most of the hospital officials we interviewed told us that they felt compelled to implement more extensive compliance procedures than the federal government required of them. Twenty-two of the 25 hospitals we reviewed have government-imposed compliance procedures of some type; nearly all of the 22 told us their compliance programs go beyond the requirements of any settlement agreements they are subject to, often far beyond. For instance, as of December 31, 1998, 10 of the hospitals in our study have only the compliance procedures associated with the 72-Hour Window Project imposed upon them. Yet 9 of those 10 say they have implemented, or plan to implement soon, a more comprehensive compliance program with procedures covering risk areas such as medical necessity, laboratory billing, and upcoding. When asked why they felt the need to develop more rigorous compliance programs, these hospital officials mentioned the heightened enforcement environment, HHS-OIG guides and workplans showing a continued enforcement focus on hospital billing, and expectations that HCFA and accrediting bodies would soon require compliance programs. Some providers and observers in the field noted that HCFA's requirement that managed care plans participating in the new Medicare+Choice program implement compliance programs may be an indication that compliance programs will eventually be mandated. Very few of the hospitals in our study have fully implemented their compliance programs. All 25 of them identified policies, processes, and procedures that they said were important parts of their programs. However, only five of the hospitals have implemented all of the policies, processes, and procedures identified. Seventeen hospitals have not conducted compliance program audits to ensure that the policies, processes, and procedures of their compliance program have been carried out. Seven hospitals still need to introduce the compliance program to their employees. Six hospitals have not started doing background checks to identify sanctioned individuals, and two hospitals have yet to establish an organizational code of conduct. Figure 1 shows the implementation status and history of the various components of the compliance programs being implemented by our study's hospital providers. According to the hospitals in our study, the implementation and operation of compliance programs entail a considerable commitment of time and money. However, among hospitals that could provide us with direct compliance program cost data, only one appears to spend more than 1 percent of total patient care revenues. All of the hospitals in our study identified direct cost components, such as salaries and fringe benefits for compliance officers and staff, consulting and legal fees, and outside audit services; but determining the costs of these and other components of compliance programs was difficult for our hospital providers. The lack of a compliance budget was the main reason for this difficulty; the hospitals could not always distinguish the costs attributable to their compliance programs from those of their normal operations. The components for which hospitals could estimate costs, as well as the actual cost estimates, varied widely among the hospitals.
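To show how cost figures like those above (and the indirect costs discussed next) translate into shares of revenue, here is a hypothetical sketch; every dollar amount is invented for illustration:

```python
# Hypothetical sketch: expressing compliance program costs as a share of
# patient care revenue. Every figure here is invented for illustration.
direct_costs = {           # annual direct costs ($)
    "compliance officer and staff": 250_000,
    "outside audits": 120_000,
    "training materials": 40_000,
    "hotline operation": 15_000,
}
indirect_costs = {         # harder-to-measure annual costs ($)
    "employee time in training": 300_000,
    "executive oversight time": 90_000,
}
revenue = 60_000_000       # annual patient care revenue ($)

direct = sum(direct_costs.values())
total = direct + sum(indirect_costs.values())
print(f"direct costs: ${direct:,} ({direct / revenue:.2%} of revenue)")
print(f"total costs:  ${total:,} ({total / revenue:.2%} of revenue)")
# The indirect items, which rarely appear in budgets, nearly double the
# ratio here, consistent with the hospitals' observation that indirect
# costs may exceed the measurable direct costs.
```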
Hospital officials pointed out that their compliance programs also generate indirect costs, which are more difficult to measure and may be greater than the direct costs. Fifteen of the hospitals in our study did not specifically budget for compliance activities, limiting their ability to give us precise or comprehensive figures for their compliance program costs. Without a compliance budget, these officials were hard-pressed to distinguish the costs of their compliance program-related activities from the costs of their normal business operations. In addition, the compliance officials we interviewed differed as to their treatment of costs absorbed by departments other than their own. Some considered these to be costs of their compliance program; others did not. Eight hospitals in our study told us their ability to report compliance program costs was further limited because they had difficulty identifying costs they would have incurred even without their formal compliance programs. For example, officials at six hospitals said they had long audited medical records on a periodic basis and that the compliance program merely formalized their methodology. The challenges in capturing compliance program costs were borne out by UHC's February 1998 membership survey. In addition to determining which of its members were implementing compliance programs, UHC attempted to gather comprehensive information about the cost of compliance program components. The consortium found that while members could identify some cost information, they generally could not provide cost estimates for all compliance program components. In general, the cost estimates given to us by hospitals fell under the following compliance-related categories: development of policies, processes, and procedures; oversight activities; background checks; training and education; auditing; operation of reporting mechanisms, such as a compliance hotline; and attorney fees and investigations. The hospital officials we spoke with could not address the costs associated with each of these categories because of differences in how they organized their compliance programs and how they funded these activities. In those cost categories for which we received more than one hospital's estimates, the costs reported varied widely. The relation of these costs to the organizations' revenues varied as well. In one case, the direct costs identified by a hospital chain with relatively comprehensive cost estimates were less than 1 percent of the chain's revenue. In another, the compliance officer of a hospital-affiliated physician practice plan estimated the costs of its compliance program to be over 2 percent of the plan's revenue. One direct cost figure frequently identified by hospitals was the annual salaries of the compliance officer and staff. The lowest reported cost was $15,000, at a mid-sized hospital where the compliance officer devoted 10 percent of his time to compliance and the hospital received substantial support and guidance from its system parent. The highest estimated cost was $2.5 million, at a large hospital system where the compliance staff included four full-time attorneys and support staff. Audit costs (both internal and external) were the most frequently identified direct cost component, with estimates ranging from $17,000 to about $3.8 million per year. The hospitals in our study also identified many significant indirect costs associated with their compliance programs.
Foremost among these was employee and physician time spent away from regular duties while attending compliance-related training. Indirect compliance program costs were not generally estimated by the hospitals in our study, but hospital officials told us these costs might be larger than the direct costs. For example, the compliance officer from a hospital that did estimate some indirect costs told us that the organization spent approximately $2 per employee to present its compliance program training. However, he estimated the value of the time spent by the employees away from their normal duties while attending the training to be $25 per employee, over 10 times as much. Other indirect compliance program costs identified by hospitals in our study include the time high-level executives spend on compliance program development and oversight, and lower revenues as a result of conservative billing practices. The principal objective of compliance programs, and hence the most direct measure of their effectiveness, is their performance in preventing improper Medicare payments. However, baseline data on the amount of improper payments made to providers are lacking, and the costs associated with gathering such baseline data (or comparison data for providers without compliance programs) have precluded the use of this effectiveness measure. Lacking such a direct measure, HHS-OIG plans to continue using various indirect measures, including refunds of provider-identified overpayments and self-disclosures of potential misconduct, to determine whether compliance programs are effective. Officials from HHS-OIG and Justice told us they anticipate that, as providers fully implement their compliance programs, provider-identified refunds and self-disclosures should increase, at least initially. Another possible indicator of effectiveness mentioned by law enforcement authorities is the frequency of disciplinary actions taken against noncompliant employees. Hospital officials in our study agreed that these measures could indicate compliance program effectiveness, but pointed to some others as well. The most frequently mentioned was increased employee awareness of proper billing rules and other compliance policies and procedures. While each of the measurement criteria mentioned has limitations that prevent conclusive proof that the elements of compliance programs reduce improper Medicare payments, there are preliminary indications that such programs can have a positive effect. For example, some Medicare contractors have reported refunds of provider-identified overpayments, although neither they nor HCFA keep track of this indicator on a systematic basis. Self-disclosures of potential misconduct by providers have been reported by HHS-OIG, Justice, and hospital officials, although the number of self-disclosures reported is small. Hospital officials also reported taking disciplinary actions against noncompliant employees and instituting corrective actions, such as remedial training of billing staff. Finally, the hospitals in our study overwhelmingly believe that the benefits of their compliance programs exceed their costs. Because compliance programs are relatively new to the health care industry, HHS-OIG and Justice officials say they have yet to come across many that led to refunds of provider-identified overpayments. These officials do acknowledge, however, that some billing errors are inevitable.
Therefore, they expect that as effective compliance programs are implemented, these errors will be detected and such detection will lead to an increase in refunds of provider-identified overpayments. HHS-OIG officials think this will happen because the monitoring of compliance across the risk areas identified by their compliance guides will probably cause providers to examine billing issues that they had not examined before. HHS-OIG and Justice officials further expect that as compliance programs mature, providers' compliance with Medicare billing rules should increase and refunds of provider-identified overpayments should then decline. Others we spoke with cautioned that a variety of factors could contribute to an increase in refunds of provider-identified overpayments, not just the effectiveness of compliance programs. For example, a change in Medicare billing rules or the institution of a new payment system might cause errors that could lead to an increase in refunds of provider-identified overpayments. Similarly, provider operational changes, such as entering a new line of business or acquiring another provider, could lead to an increase in overpayments returned. Moreover, while several hospitals in our study were hopeful that over time the billing errors detected by their compliance program would decline, a few felt that billing errors might not, in fact, decline because of the complexity of Medicare rules. Therefore, tracking refunds of provider-identified overpayments, either for an individual provider or for providers overall, may not be sufficient to determine the effectiveness of compliance programs. HCFA officials and some Medicare contractors we talked with told us that although they do not routinely track refunds of provider-identified overpayments, they have noted an increase in such refunds within the last 2 years. Without extensive research, these Medicare contractors were not able to tell us the actual amount of all such refunds. Nevertheless, two of the contractors were able to identify some amounts refunded. For example, one recently received a $2.7 million refund from a home health agency that said the overpayment was identified through its compliance program. In this case, after reviewing documents provided by the agency and reviewing the actions the agency has taken to ensure future billings are correct, the contractor is now in the process of assessing the agency's method for determining the refund amount. This contractor also received a $200,000 refund from a teaching hospital. One of the other two contractors we spoke with also reported that it had received refunds of overpayments, reportedly due to compliance programs. Several hospitals indicated their compliance program had led to refunds of overpayments or informal self-disclosures. Generally, refunds of overpayments arose pursuant to an internal audit of a specific functional area identified by HHS-OIG as high-risk. For example, one hospital told us it does quarterly audits of its compliance with physician billing rules and has refunded identified overpayments when it was too late to resubmit the bill. The hospitals in our study generally viewed such refunds of overpayments to Medicare's contractors as informal self-disclosures to the government. Yet several hospitals were concerned that the contractors they deal with did not know how to process the refunds of self-identified overpayments, and a few expressed concern that the contractors would automatically refer these refunds to HHS-OIG.
HHS-OIG and Justice officials told us of one hospital provider who formally self-disclosed potential misconduct after a review of its billing procedures. These officials expect to see more formal self-disclosures such as this one, because the HHS-OIG compliance guides and the Sentencing Guidelines for Organizations both say misconduct identified by a compliance program should be reported to HHS-OIG or Justice. HHS-OIG requires that providers who enter into CIAs report on the implementation of the agreement, and these reports usually include disclosures of refunds of overpayments and of potential misconduct. Both HHS-OIG and Justice officials told us they have used speaking engagements and public documents to support and encourage providers to self-disclose as part of an effective compliance program. Some hospital officials agreed that as compliance programs are implemented, self-disclosures of possible wrongdoing might increase. However, most hospitals said they expect that the increased awareness of compliance issues created by an effective compliance program will result in the prevention of misconduct that otherwise might occur. Therefore, there may be fewer instances of potential misconduct for providers to self-disclose. As a result, tracking self-disclosures of potential misconduct—either for an individual provider or for providers overall—may not be an appropriate indicator of effectiveness. HHS-OIG has operated a formal voluntary disclosure mechanism since 1995 and revised the process in October 1998. Providers who identify potential misconduct within their organizations can use this mechanism to self-report such potential misconduct. The hospitals in our study generally did not see formal disclosure as a viable option. As of December 31, 1998, only 20 providers had applied to use this mechanism, and it is not clear that those who did formally self-disclose did so as a result of a formal compliance program. (See app. II for further discussion of formal voluntary disclosure mechanisms.) Although few providers have used the formal self-disclosure mechanism, some of the hospitals in our study told us they had informally contacted HHS-OIG or Justice officials to discuss billing problems in their organization before returning an overpayment to Medicare. In some instances, the problem was identified through their compliance program. The typical informal self-disclosure that hospitals described to us involves the provider’s attorney approaching an HHS-OIG or Justice representative and describing the issue on behalf of the provider. Hospitals and hospital associations and their advisers told us self-disclosure is fraught with risk, and therefore it is a step that is taken only after careful consideration of the ramifications. Justice, HHS-OIG, and the hospitals in our study identified other possible indicators of compliance program effectiveness. For example, HHS-OIG and Justice have said they will be looking for disciplinary actions taken by providers against employees who have not followed compliance procedures. The hospitals in our study that reported overpayment refunds and self-disclosures told us that they also took additional corrective actions such as remedial training, discipline, and modification of compliance program policies and procedures. For example, some hospitals associated with physician groups told us they used special procedures to review the bills for physicians with documentation problems. 
A few of these hospitals require the physician to absorb this expense, pay for remedial training, or pay some other type of monetary sanction in an attempt to improve that physician's compliance. Several hospitals have had trainers teach correct billing and coding techniques to the employees who are identified by audits as having weaknesses in these areas. The major intangible indicator mentioned by hospitals is an increased corporate awareness of compliance, as shown by frequent calls to compliance staff and/or hotlines for guidance. Sixteen hospitals told us that improved employee knowledge of compliance issues, risk areas, and procedures is something they will consider in evaluating the effectiveness of their compliance efforts. Some plan to measure this knowledge in conjunction with compliance training by asking employees questions such as "What is our hotline number?" and "What risk areas does our organization face?" A few hospitals will have employees respond to hypothetical situations so the compliance officer can judge whether the employee knows what to do when faced with concerns regarding compliance with Medicare rules. Almost all of the hospitals in our study believe their liability under the fraud and abuse statutes will be reduced as a result of their compliance programs. For most of them, the reduction of improper payments and their attendant liabilities is a benefit that exceeds the costs of their compliance programs. In addition to this benefit, hospitals expressed hope that they would receive some form of recognition of their compliance efforts if they should be the targets of an investigation by the federal government. They also believe the compliance program helps foster an improved culture of "doing the right thing." Additionally, several hospitals said their compliance program helps them maintain their reputation in the community. These hospital officials told us that these benefits, where realized, also indicate compliance program effectiveness. Several of the hospitals we interviewed told us they received such recognition when they were the target of an investigation. One hospital, with a long-standing compliance program, told us that it was subject to an HHS-OIG Physicians at Teaching Hospitals audit. This hospital credited its compliance program with enabling it to arrange not only a less expensive method for conducting the audit but, ultimately, a written resolution of the audit without findings. Five hospitals that had entered into settlements with Justice and HHS-OIG told us that their compliance efforts were recognized in the form of nonexclusion from Medicare, less onerous future compliance requirements, or less than treble damages. However, more hospitals expressed concern about not getting such recognition from law enforcement agencies. At least one hospital system claimed that a U.S. Attorney did not give it credit for its preexisting compliance program in a settlement because the U.S. Attorney believed the hospital involved had not effectively corrected prior misconduct. Nevertheless, Justice and HHS-OIG officials told us, and have publicly stated, that they will consider the presence of an effective compliance program when settling allegations of improper billing by hospitals. During our study we attempted to determine whether U.S. Attorneys have encountered compliance programs in the course of their investigations and whether the presence of a compliance program affected the investigation.
Because Justice does not track whether health care providers it investigates have compliance programs, we asked Justice officials to contact the U.S. Attorneys’ offices responsible for most of the districts where the providers in our study were located. In these 20 districts, the U.S. Attorneys reported four closed cases in which the health care provider investigated had a compliance program in place at the time of the investigation. One case involved the self-disclosure and refund of an overpayment identified in a compliance program audit. This case was closed with no action taken by Justice. In another case, the U.S. Attorney reported that a provider being investigated for billing problems had a compliance program in place that appeared to have prevented billing problems, and the investigation was dropped. In the remaining two cases, although a compliance program was in place at the time of the alleged misconduct, the U.S. Attorneys involved indicated they did not reduce damages when arriving at the settlement. U.S. Attorneys also reported that several providers under current investigation have compliance programs that were in place at the time of the alleged misconduct. However, because these cases are still open, Justice officials will not discuss whether or how the presence of a compliance program will affect the final disposition of these cases. In addition to stepping up enforcement actions, HHS-OIG, HCFA, and Justice have all encouraged the adoption of compliance programs in the hopes of reducing improper Medicare payments. The voluntary compliance of hospitals and other Medicare providers is crucial to reducing the improper payments that continue to plague the program. Although determining the prevalence of such programs is difficult, there is a consensus among providers and agencies that these programs are becoming more widespread. Furthermore, despite the investment of time and resources that compliance programs entail, many hospitals believe the benefits of these programs—particularly reduced liability under the fraud and abuse statutes—outweigh their costs. Finally, while the effectiveness of compliance programs is difficult to determine with certainty, HHS-OIG, HCFA, Justice, and providers themselves believe that compliance programs can reduce improper Medicare payments. We provided a draft of this report for comment to HHS-OIG and Justice. The following summarizes their comments and our responses. HHS-OIG expressed concern that the title of the report does not reflect its view that compliance programs are effective in promoting compliance with requirements of federal health care programs. HHS-OIG points to the consensus among the hospitals in our study that the benefits of compliance programs exceed their costs as evidence of compliance program effectiveness. Finally, HHS-OIG identified several other indicators that improper payments in the Medicare program may have declined, such as its recent review of Medicare fee-for-service payments. In this review HHS-OIG reported a decline in its estimate of improper payments, from $10.6 billion in fiscal year 1997 to $7.7 billion in fiscal year 1998. We included the views of HHS-OIG and providers regarding the benefits of compliance programs in our report. However, we continue to believe that the principal measure of compliance programs’ effectiveness is their effect on improper payments. The evidence available to date does not show that compliance programs have reduced improper Medicare payments. 
Indeed, HHS-OIG acknowledges that it does not have empirical evidence supporting a causal relationship between a decline in improper payments and implementation of compliance programs. HHS-OIG also provided technical comments, which we incorporated as appropriate. HHS-OIG’s comments appear in appendix III. Officials from Justice’s Executive Office for United States Attorneys reviewed the draft and offered technical comments, which we incorporated as appropriate. We are sending copies of this report to the Honorable Donna E. Shalala, Secretary of Health and Human Services; the Honorable June Gibbs Brown, HHS Inspector General; the Honorable Nancy-Ann Min DeParle, Administrator of HCFA; the Honorable Janet Reno, U.S. Attorney General; the organizations we visited; and other interested parties. Please call me at (312) 220-7600 or Paul Alcocer at (312) 220-7709 if you or your staffs have any questions about this report. The other major contributors are Barbara A. Mulliken and Victoria M. Smith. To determine how prevalent compliance programs are among Medicare providers, we interviewed officials at HCFA; HHS-OIG; and provider-affiliated associations, including the American Hospital Association (AHA) and the Health Care Compliance Association. We also reviewed some of the results of two 1998 compliance program surveys conducted by the University HealthSystem Consortium and AHA. In addition, we asked providers about their perspective on the prevalence of compliance programs among their peers. To determine what costs are involved with compliance programs, we interviewed 30 Medicare providers. We contacted 37 providers, and 30 of them were willing to speak with us directly about their compliance programs. We selected these providers on the basis of a variety of factors that indicated a compliance program was in place at that institution. These factors included articles commenting on a compliance program, prior interviews with GAO personnel indicating a compliance program, active corporate integrity agreements, referrals by agency and association officials, and application to HHS-OIG’s Voluntary Disclosure Program. The 30 providers we interviewed vary in provider type, geographic service area, organizational size, religious affiliation, and profit status. Of the 30 provider organizations interviewed, 25 are hospitals or hospital-affiliated organizations, including physician groups. Our review focused primarily on hospital providers because they receive the largest share of Medicare funds and are the focus of several current enforcement actions. (The remaining five Medicare providers are an independent clinical laboratory, a home health organization, a durable medical equipment provider, a skilled nursing provider, and a managed care organization. We interviewed these nonhospital providers for comparison purposes only.) We asked provider-affiliated association officials about their perspective on the cost of compliance programs among their member organizations. We also asked approximately 30 vendors of compliance-related products and services for the prices of their products and services, but used these prices for comparison purposes only. To determine how the effectiveness of compliance programs should be measured, we interviewed officials at the Department of Justice, HHS-OIG, and provider-affiliated associations; several observers in the field; and 30 Medicare providers. 
We also reviewed the Federal Sentencing Guidelines for Organizations, case law referencing compliance programs, HHS-OIG Compliance Guides, Model Compliance Manuals, and the marketing material of approximately 30 vendors of compliance-related products and services. To determine whether compliance programs are effective, we interviewed three Medicare contractors, Justice, HHS-OIG, and HCFA about the presence of the measures that had been identified. We also interviewed provider-affiliated associations, several observers in the field, and 30 Medicare providers about their perspective on the effectiveness of compliance programs but used this information for comparison purposes only. We also reviewed the results of HHS-OIG’s Voluntary Disclosure Program. We conducted our work at HCFA, HHS-OIG, Justice, and selected provider and provider-affiliated association offices. We performed our work between May 1998 and February 1999 in accordance with generally accepted government auditing standards. In May 1995, HHS-OIG and Justice initiated a pilot Voluntary Disclosure Program (VDP) in conjunction with the Operation Restore Trust initiative for providers to report instances of possible misconduct. In “An Open Letter to Health Care Providers,” HHS-OIG stated that the success of this and other such initiatives would be best ensured through cooperative efforts with providers. However, the VDP pilot was ostensibly open only to the providers targeted by Operation Restore Trust. Moreover, acceptance into the program was predicated on meeting strict eligibility requirements. The disclosure had to be on behalf of an entity and not an individual, and the entity could not be under investigation at the time of application. During the VDP pilot period—May 1995 through May 1997—Justice, along with HHS-OIG, was a signatory to the agreement with each self-disclosing provider upon entry into the program. However, because of the low number of applications during the pilot period, Justice chose to discontinue its participation in the program. After assessing the pilot program and exploring criticisms leveled at it, HHS-OIG decided to continue these efforts under a Voluntary Disclosure Protocol (Protocol). The two hospitals we spoke with that were accepted into VDP told us that despite a high level of HHS-OIG cooperation, the application process was arduous and expensive. Table II.1 reports the activity, by calendar year, in HHS-OIG’s VDP/Protocol. As table II.1 illustrates, the number of disclosures under VDP and the Protocol has been small. An HHS-OIG official told us he believes that with Justice no longer a formal partner in the program, it is unlikely that the Protocol will be widely used. However, in the belief that VDP’s strict application requirements were discouraging providers from applying, HHS-OIG removed the eligibility requirements from the Protocol. It should be noted, however, that like the VDP, the Protocol does not offer any assurances to self-disclosing providers. Alton Ochsner Medical Foundation, New Orleans, La. American Hospital Association, Washington, D.C. Beaumont Rehabilitation and Skilled Nursing Centers, Westborough, Mass. Catholic Health Initiatives, Denver, Colo. Catholic Healthcare West, San Francisco, Calif. Cook County Hospital, Chicago, Ill. Coventry Health Care, Bethesda, Md. Deborah Heart and Lung Center, Browns Mills, N.J. Ethics Officers Association, Boston, Mass. Gottlieb Memorial Hospital, Melrose Park, Ill. Health Care Compliance Association, Philadelphia, Pa. 
Holy Cross Health System, South Bend, Ind. Home Health Corporation of America, King of Prussia, Pa. Home Life Medical, Inc., Woburn, Mass. Huguley Memorial Medical Center, Fort Worth, Tex. Joint Commission on Accreditation of Healthcare Organizations, Oak Brook, Ill. Lewistown Hospital, Lewistown, Pa. MedCentral Health System, Mansfield, Ohio Meridia Health System, Cleveland, Ohio Montefiore Medical Center, Bronx, N.Y. Parkland Health and Hospital System, Dallas, Tex. Poudre Valley Hospital, Ft. Collins, Colo. Provena Saint Therese Medical Center, Waukegan, Ill. Quest Diagnostics, Teterboro, N.J. Quorum Health Group, Brentwood, Tenn. Reedsburg Area Medical Center, Reedsburg, Wis. Rural Wisconsin Health Cooperative, Sauk City, Wis. Southern Illinois Healthcare, Carbondale, Ill. Southern Illinois University, Springfield, Ill. Sutter Health, Sacramento, Calif. Tenet Healthcare Corporation, Santa Barbara, Calif. Texas Health Resources, Irving, Tex. UCSF Stanford Health Care, San Francisco, Calif. University HealthSystem Consortium, Oak Brook, Ill. University of Colorado Medical Services Foundation, Denver, Colo. University of Virginia Health Services Foundation, Charlottesville, Va. | Pursuant to a congressional request, GAO reviewed the compliance programs established by health care providers to reduce improper payments by Medicare, focusing on the: (1) prevalence of compliance programs among hospitals and other Medicare providers; (2) costs involved with compliance programs; and (3) effectiveness of the programs, to the extent that could be measured. 
GAO noted that: (1) although there is no comprehensive data on the number of providers with compliance programs, many hospitals are implementing them; (2) two recent hospital surveys, one focusing on academic health centers and the other including a broad range of hospital types, found that most hospitals responding either had or planned to soon implement a compliance program; (3) the hospitals in GAO's study said they felt compelled to implement a compliance program for a variety of reasons, including the heightened enforcement environment, suggestions from the Department of Health and Human Services' Office of the Inspector General, and expectations that the Health Care Financing Administration and accrediting bodies would soon require compliance programs; (4) although compliance programs are apparently becoming widely accepted, most of the hospitals in GAO's study have only recently begun implementation; (5) hospitals report that compliance programs require an investment of considerable time and money; (6) however, measuring the cost of compliance programs is difficult; (7) hospitals could not always distinguish costs attributable to their compliance programs from those of their normal operations, in part because the hospitals often had existing compliance-oriented activities that were subsumed by the compliance program; (8) hospitals reported a variety of significant direct costs, such as salaries for compliance staff and professional fees for consultants and attorneys; (9) according to the information GAO was able to obtain, direct compliance program costs appear to account for a very small percentage of total patient revenues--less than 1 percent in all but one of the hospitals studied; (10) the hospitals also reported indirect costs, such as time spent by employees in compliance-related training and away from their regular duties; (11) these indirect costs are more difficult to measure and may be larger than the direct costs reported; (12) the principal measure of a compliance program's effectiveness is its ability to prevent improper Medicare payments; (13) it is difficult to measure effectiveness in this way because of the lack of comprehensive baseline data and the existence of many other factors that could affect measurement results; (14) other measures have been suggested as a proxy for measuring compliance program effectiveness; (15) Medicare contractors reported that they have received refunds of provider overpayments with more frequency; (16) GAO has also noted an increase in formal provider self-disclosures during the last few years; and (17) however, this preliminary evidence does not demonstrate that compliance programs have reduced improper Medicare payments. |
Since the 1960s, the United States has used geostationary and polar-orbiting satellites to observe the earth and its land, ocean, atmosphere, and space environments. Geostationary satellites maintain a fixed position relative to the earth from a high orbit of about 22,300 miles in space. In contrast, polar-orbiting satellites circle the earth in a nearly north-south orbit, providing global observation of conditions that affect the weather and climate. As the earth rotates beneath it, each polar-orbiting satellite views the entire earth’s surface twice a day. Both types of satellites provide a valuable perspective of the environment and allow observations in areas that may be otherwise unreachable. Used in combination with ground, sea, and airborne observing systems, satellites have become an indispensable part of monitoring and forecasting weather and climate. For example, geostationary satellites provide the graphical images used to identify current weather patterns and provide short-term warning. Polar-orbiting satellites provide the data that go into numerical weather prediction models, which are a primary tool for forecasting weather days in advance—including forecasting the path and intensity of hurricanes. These weather products and models are used to predict the potential impact of severe weather so that communities and emergency managers can help prevent and mitigate its effects. Federal agencies are currently planning and executing major satellite acquisition programs to replace existing geostationary and polar satellite systems that are nearing the end of their expected life spans. However, these programs have troubled legacies of cost increases, missed milestones, technical problems, and management challenges that have resulted in reduced functionality and major delays to planned launch dates over time. We and others—including an independent review team reporting to the Department of Commerce and the department’s Inspector General—have raised concerns that problems and delays with environmental satellite acquisition programs will result in gaps in the continuity of critical satellite data used in weather forecasts and warnings. According to officials at NOAA, a polar satellite data gap would result in less accurate and timely weather forecasts and warnings of extreme events, such as hurricanes, storm surges, and floods. Such degradation in forecasts and warnings would place lives, property, and our nation’s critical infrastructures in danger. The importance of having such data available was highlighted in 2012 by the advance warnings of the path, timing, and intensity of Superstorm Sandy. Given the criticality of satellite data to weather forecasts, concerns that problems and delays on the new satellite acquisition programs will result in gaps in the continuity of critical satellite data, and the impact of such gaps on the health and safety of the U.S. population, we concluded that the potential gap in weather satellite data is a high-risk area. We added this area to our High-Risk List in 2013, and it remained on the list in 2015. NOAA operates a two-satellite geostationary satellite system that is primarily focused on the United States (see figure 1). The GOES-R series is the next generation of satellites NOAA is acquiring; these satellites are to replace the existing weather satellites. 
The ability of the satellites to provide broad, continuously updated coverage of atmospheric conditions over land and oceans is important to NOAA’s weather forecasting operations. NOAA is responsible for GOES-R program funding and overall mission success, and has implemented an integrated program management structure with the National Aeronautics and Space Administration (NASA) for the GOES-R program. Within the program office, there are two project offices that manage key components of the GOES-R system. NOAA has delegated responsibility to NASA to manage the Flight Project Office, including awarding and managing the spacecraft contract and delivering flight-ready instruments to the spacecraft. The Ground Project Office, managed by NOAA, oversees the Core Ground System contract and satellite data product development and distribution. The program estimates that the development for all four satellites in the GOES-R series will cost $10.9 billion through 2036. In 2013, NOAA announced that it would delay the launch of the GOES-R and S satellites from October 2015 and February 2017 to March 2016 and May 2017, respectively. Since 2012, we have issued three reports on the GOES-R program that highlighted management challenges and the potential for a gap in backup satellite coverage. In these reports, we made 12 recommendations to NOAA to improve the management of the GOES-R program. These recommendations included improving satellite contingency plans, addressing shortfalls in defect management, and addressing weaknesses in scheduling practices. The agency agreed with these recommendations. As of October 2015, the agency had implemented 4 of these recommendations and was working on the remaining 8. For example, NOAA improved its geostationary satellite contingency plan and improved its risk management processes. Also, while NOAA has made progress by improving selected practices, it has not yet fully implemented our recommendation to address multiple weaknesses in its scheduling practices. For example, the agency included subcontractor activities in its core ground schedule, but has not yet provided details showing a realistic allocation of resources. We have ongoing efforts to assess the agency’s progress in addressing the open recommendations. In addition to the geostationary satellite constellation, for over 40 years, the United States has operated two separate operational polar-orbiting meteorological satellite systems: the Polar-orbiting Operational Environmental Satellite series, which is managed by NOAA, and the Defense Meteorological Satellite Program (DMSP), which is managed by the Air Force. Currently, there is one operational Polar-orbiting Operational Environmental Satellite (called the Suomi National Polar-orbiting Partnership, or S-NPP) and two operational DMSP satellites that are positioned so that they cross the equator in the early morning, midmorning, and early afternoon. In addition, the government relies on data from a European satellite, called the Meteorological Operational satellite, or Metop. Figure 2 illustrates the current operational polar satellite constellation. A May 1994 Presidential Decision Directive required NOAA and the Department of Defense (DOD) to converge the two satellite programs into a single satellite program—the National Polar-orbiting Operational Environmental Satellite System (NPOESS)—capable of satisfying both civilian and military requirements. 
However, in the years after the program was initiated, NPOESS encountered significant technical challenges in sensor development, program cost growth, and schedule delays. Faced with costs that were expected to reach about $15 billion and launch schedules that were delayed by over 5 years, in February 2010, the Director of the Office of Science and Technology Policy announced that NOAA and DOD would no longer jointly procure NPOESS; instead, each agency would plan and acquire its own satellite system. Specifically, NOAA would be responsible for the afternoon orbit, and DOD would be responsible for the early morning orbit. When this decision was announced, NOAA and NASA began planning for a new satellite program in the afternoon orbit—called JPSS. In 2010, NOAA established a program office to guide the development and launch of the S-NPP satellite as well as the two planned JPSS satellites, known as JPSS-1 and JPSS-2. NOAA’s current life cycle cost baseline for the JPSS program is $11.3 billion through fiscal year 2025. The current anticipated launch dates for JPSS-1 and JPSS-2 are March 2017 and December 2021, respectively. More recently, NOAA has also begun planning the Polar Follow-On program, which is to include the development and launch of a third and fourth satellite in the series. These satellites are planned to be nearly identical to the JPSS-2 satellite. Since 2012, we have issued three reports on the JPSS program that highlighted technical issues, component cost growth, management challenges, and key risks. In these reports, we made 11 recommendations to NOAA to improve the management of the JPSS program. These recommendations included addressing key risks and establishing a comprehensive contingency plan consistent with best practices. The agency agreed with these recommendations. As of October 2015, the agency had implemented 2 of these recommendations and was working to address the remaining 9. Specifically, NOAA established contingency plans to mitigate the possibility of a polar satellite data gap and began tracking completion dates for key risk mitigation activities. NOAA also took initial steps to improve its scheduling practices, contingency plans, and assessment of the potential for a gap. We have ongoing work reviewing the agency’s efforts to fully implement these open recommendations, and plan to issue our report in spring 2016. As previously noted, we have issued a series of reports on the GOES-R program that highlighted schedule delays, management challenges, and the potential for a gap in backup satellite coverage. In these reports, we found that technical issues had caused a series of delays to major program milestones, which in turn had the potential to affect the GOES-R satellite’s launch readiness date. In 2012 and 2013, we made recommendations to NOAA to strengthen its scheduling practices. While the agency is making progress on these recommendations, they have not yet been fully implemented. Most recently, in December 2014, we reported that the GOES-R program had made significant progress in developing its first satellite, including completing testing of the satellite instruments. However, we also reported that even though NOAA had delayed the launch of the GOES-R satellite from October 2015 to March 2016, the program continued to experience schedule delays that could affect the new launch date. Specifically, the program had delayed multiple key reviews and tests, with delays ranging from 5 to 17 months. 
We also reported that the program’s actions to mitigate its schedule delays introduced further risks, which could increase the extent of the delays. For example, the program attempted to mitigate delays in developing detailed plans for ground-based data operations by performing system development while concurrently working on the detailed plans. In addition, the program compressed its testing schedule by performing spacecraft integration testing 24 hours a day, 7 days a week. As we reported previously, practices such as conducting planning and development work concurrently and compressing test schedules increase the risk of further delays because there could be too little time to resolve any issues that arise. At the time of our report, program officials acknowledged that they could not rule out the possibility of further delays, and that these delays could affect the planned March 2016 launch date. Other entities, including a NOAA standing review board and the Department of Commerce’s Inspector General, shared these concerns. In late 2014, NOAA’s standing review board noted that the program’s plan for the remaining integration and testing activities was very aggressive, and that additional failures and subsequent rework could threaten the then-planned launch date in early 2016. In May 2015, the Inspector General expressed concerns about the program’s lagging progress and reported that the program needed to proactively address testing risks in order to maintain its launch schedule. Based on information collected during our ongoing work, these prior concerns about the program schedule were warranted. The program continued to experience poor schedule performance as it moved through integration and testing. Program data show that the program lost more than 10 days of schedule reserve each month, on average, between July 2013 and July 2015. When asked about this poor schedule performance, program officials cited several reasons, including the complexity of the satellite build, the difficulties inherent in a first-time build, and an extremely aggressive testing schedule. The monthly loss in margin occurred even though the program introduced steps designed to minimize the loss of reserves, such as switching to round-the-clock testing, eliminating selected tests, and implementing process and management changes. In October 2015, program officials reported that schedule performance improved for the month of September. In August 2015, NOAA decided to delay the planned launch date of the first GOES-R satellite from March 2016 to October 2016. While previously reported schedule delays contributed to this decision by decreasing the overall amount of available schedule reserves, program officials noted several other reasons for this decision. These reasons included finding debris in the solar array drive assembly that required them to replace the component, needing additional spacecraft repair and rework after testing was completed, and resolving disconnects in the expected duration of tasks at the launch site. NOAA also considered the likelihood of future delays in thermal vacuum testing, which is considered to be one of the more difficult environmental tests. NOAA officials stated that they chose the new launch date because it was the next available launch slot at the Kennedy Space Center and was consistent with expectations of when the GOES-R satellite would be ready to launch. 
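The schedule-reserve arithmetic behind these concerns is simple, and a short worked example may help make it concrete. The following minimal Python sketch is illustrative only, not a GAO or NOAA tool; it uses the figures cited in this statement (a loss of roughly 10 reserve days per month, and the 113 days of reserve reported as of September 2015, discussed below), and the function names are our own.

    def reserve_consumed(months: int, loss_per_month: float) -> float:
        """Total schedule reserve consumed at a constant monthly burn rate."""
        return months * loss_per_month

    def months_until_exhausted(reserve_days: float, loss_per_month: float) -> float:
        """Months of margin remaining if the current burn rate continues."""
        if loss_per_month <= 0:
            return float("inf")  # reserve is stable or growing
        return reserve_days / loss_per_month

    if __name__ == "__main__":
        # July 2013 through July 2015 is 24 months at more than 10 days lost per month.
        print(f"Reserve consumed over 24 months: more than {reserve_consumed(24, 10):.0f} days")
        # 113 days of reserve were reported as of September 2015.
        months_left = months_until_exhausted(113, 10)
        print(f"At 10 days lost per month, 113 days of reserve last about {months_left:.1f} months.")

At the reported rate, the remaining reserve would be exhausted in roughly 11 months, shortly before an October 2016 launch, which is consistent with the point that meeting the new launch date depends on the burn rate improving.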
Based on findings from our ongoing work, recent events have increased the risk of achieving the October 2016 launch date. In September 2015, NOAA identified a new technical issue in a component that helps regulate and distribute the satellite’s power supply. To try to address this issue, the GOES program replaced the component on the GOES-R satellite with the same component from GOES-S, the next satellite in the series. The program has experienced delays as a result of the need to replace and retest this component, and it is not yet clear that this switch will address the problem. According to a recent NOAA review of the program, this issue, along with several other issues discovered in testing, has put the new October 2016 launch date at risk. In late 2015, NOAA officials plan to reassess the schedule leading up to the planned launch date. Program officials stated that if GOES-R does not launch in October 2016, another launch slot would likely be available by May 2017. NOAA’s policy for geostationary satellites is to have two operational satellites and one backup satellite in orbit at all times. Three viable GOES satellites—GOES-13, GOES-14, and GOES-15—are currently in orbit. Both GOES-13 and GOES-15 are operational satellites, with GOES-13 covering the eastern United States (GOES-East in figure 1) and GOES-15 covering the western United States (GOES-West in figure 1). GOES-14 is currently in an on-orbit storage mode and is available as a backup for the other two satellites should they experience any degradation in service. As we previously reported, this backup policy proved useful on two previous occasions when the agency experienced problems with one of its operational satellites, and was able to move its backup satellite into place until the problems had been resolved. Based on ongoing work, we found that NOAA recently decided to change its assumptions about the lifespan of the currently operational GOES satellites. The satellites were originally designed to have a 7-year life, consisting of 5 operational years and 2 years in storage. NOAA officials stated that, in April 2015, the agency revised its expectations for the total life of the GOES-13, GOES-14, and GOES-15 satellites to 10 years (including both operational and storage years). On October 21, 2015, the Deputy Assistant Administrator for Systems in NOAA’s National Environmental Satellite, Data, and Information Service informed us that the decision to change the lifespan was based on an analysis performed in 2005 that showed a 3-year extension was reasonable. At that time, NOAA chose to continue to depict the shorter lifespan based on its judgment of the overall risk. The Deputy Assistant Administrator stated that in spring 2015, NOAA determined that it had sufficient history and performance on the GOES-13 and 15 satellites to begin reflecting the 10-year lifespan in its planning documents. This change had the effect of increasing the expected life of GOES-13 and GOES-15 from the previous estimate, and slightly decreasing the expected life of GOES-14. Figure 3 shows the original and extended estimates of the useful lives of the geostationary satellite constellation. If NOAA had not made the decision to extend its expectation of the useful life of GOES-15, the recent delay in the GOES-R launch could have put NOAA at risk of a coverage gap in early 2017. 
With the change in assumptions, NOAA officials now expect that there will be coverage of the GOES-East and West satellite positions through 2019 regardless of when the GOES-R series satellites become available. However, the risk of a gap in backup satellite coverage remains. In December 2014, we reported that the geostationary satellite constellation was at risk of a gap in backup coverage, based on the GOES-R launch date of March 2016. This risk is increased by moving the launch date to October 2016 or later. The GOES-13 satellite, which has experienced issues with 4 of 11 subsystems and had previously been taken offline twice, is still expected to reach the end of its useful life in mid-2016. If GOES-R were to launch in October 2016, and then undergo a 6-month on-orbit checkout period, it would begin operations in April 2017, close to a year after the expected end of GOES-13’s useful life. Figure 4 shows the backup gap based on current assumptions of satellite life. Any further delays in the GOES-R launch date would increase this gap in backup coverage, which could mean a gap in coverage if one of the primary operational satellites were to fail. NOAA now faces a series of significant decisions on the development, launch, and maintenance of its GOES-R series satellites. Based on our ongoing work, these decisions include the following: Determine how to manage schedule risks to ensure GOES-R launches on schedule. NOAA and the GOES program continue to experience issues in completing integration and testing of the GOES-R satellite. NOAA officials have stated that the program was still losing about 10 reserve days per month through August 2015. As of September 2015, the program had 113 days of schedule reserve, which is 43 days more than suggested by NASA’s guidelines. Program officials expect the monthly loss of schedule reserve to decrease because they are using more realistic estimates of how long tasks will take based on past performance. However, given the potential for a gap in backup coverage leading up to the time that GOES-R is in orbit and operational, NOAA continues to look for ways to minimize remaining schedule risks on the GOES-R satellite. As previously noted, we made recommendations to NOAA in 2012 and 2013 to improve schedule management practices; these recommendations remain open today. Timely implementation of our recommendations could help to mitigate program risks. Determine when GOES-S should be launched. NOAA’s current plans to launch GOES-R in October 2016 and to launch GOES-S in May 2017 would allow 7 months between launch dates. However, NOAA officials would prefer to maintain a 14-month interval between the launch dates of these two satellites. Officials have stated that this interval is necessary because of the limited number of qualified personnel available to develop both satellites, the need to rebuild the GOES-S hardware that will now be used on GOES-R, and the need to allow adequate time for test and checkout of the GOES-R satellite before launching GOES-S. In late 2015 or early 2016, NOAA plans to conduct a detailed schedule analysis on GOES-S development. From this analysis, NOAA plans to decide whether to move the planned May 2017 GOES-S launch date later. Decide the appropriate spacing of the GOES-T and GOES-U satellite launches to ensure satellite coverage and minimize costs. In addition to GOES-R and GOES-S, NOAA has established planned launch dates for the final two satellites in the GOES-R series. 
GOES-T is planned for launch in April 2019, and GOES-U is planned for launch in October 2024. Key questions exist about the optimal timing for these later satellites. Program officials believe that it would be best to develop and launch the GOES-T satellite as soon as possible to sustain NOAA’s policy of having two operational satellites and one spare satellite on-orbit and to obtain the enhanced functionality these satellites offer. NOAA officials are considering options related to delaying the development of the GOES-U satellite or developing it and putting it into storage. Alternatively, delaying the development of GOES-T and GOES-U could result in cost efficiencies. For example, if the GOES-R and S satellites last for a minimum of 10 years, NOAA could be in the position of storing GOES-U on the ground for an extended time. NOAA officials stated that they would consider a later launch date for GOES-U depending on the health of the satellite system when it is due to launch. Storing satellites on the ground is costly and requires maintenance to ensure the satellites function once they are finally launched. Delaying the development of GOES-U would both reduce storage costs and delay annual costs associated with these satellites’ development. Moving forward, thoroughly assessing the relative costs and benefits of various launch scenarios will be important. In December 2014, we reported that the JPSS program had completed significant development work on the JPSS-1 satellite and had remained within its cost and schedule baselines. However, we noted that the program had encountered technical issues on a key component that led to cost growth and a very tight schedule. We also noted that while the program reduced its estimate of a near-term gap in satellite data, this gap assessment was based on incomplete data. We recommended that NOAA update its assessment of potential polar satellite data gaps to include more accurate assumptions about launch dates. We also assessed NOAA’s efforts to improve its satellite contingency plan and to implement mitigation activities. Specifically, we reported that while NOAA improved its polar satellite contingency plan by identifying mitigation strategies and actions, the contingency plan had shortfalls when compared to best practices. For example, the plan did not include an assessment of available mitigation alternatives based on their cost and impact. Moreover, NOAA was not providing consistent or comprehensive reporting of its progress on all mitigation projects. As a result, NOAA had less assurance that it was adequately prepared to deal with a gap in polar satellite coverage. We recommended that NOAA revise the polar satellite contingency plan to, among other things, include an assessment of available alternatives based on their costs and potential impacts, and ensure that the relevant entities provide monthly and quarterly updates on the progress of all mitigation projects and activities. We currently have ongoing work for your Committee assessing NOAA’s efforts to address each of these recommendations, and we plan to report our results by spring 2016. Based on our ongoing work, NOAA and the JPSS program continue to make progress towards the launch of the JPSS-1 satellite as a replacement for the currently on-orbit S-NPP satellite. Since 2013, the program’s life cycle cost baseline through 2025 has remained stable at $11.3 billion, and the launch date has remained set for March 2017. 
While the launch date has not changed, the JPSS program has experienced technical issues that have affected internal schedule deadlines. For example, the expected completion date of the Advanced Technology Microwave Sounder instrument was recently delayed from March 2015 to November 2015 due to foreign object debris in a key subsystem. NOAA has also experienced delays in completing a needed upgrade that will allow the JPSS ground system to provide command, telemetry, and data processing for more than one JPSS-class satellite, a capability that will become necessary when both S-NPP and JPSS-1 are in orbit. In addition to these ongoing technical issues, there is the possibility of conflicts with the GOES-R program for both resources and facilities as both programs complete testing at the NOAA Satellite Operations Facility. NOAA officials stated that they are aware of this issue and are taking steps to mitigate needs for common resources. We previously reported that NOAA is facing a potential near-term gap in polar data between the expected end of useful life of the S-NPP satellite and the launch of the JPSS-1 satellite. As of December 2014, NOAA officials stated that a 3-month gap was likely based on an analysis of the availability and robustness of the polar constellation. However, we reported that several factors could cause a gap to occur sooner and last longer—potentially up to several years. For example, if S-NPP were to fail today—exactly 4 years after its launch—the agency would face a gap of about 23 months before the JPSS-1 satellite could be launched and put into operation. Concerns about a near-term gap will remain until the JPSS-1 satellite is launched and operational. Further, if JPSS-1 fails on launch, there could be a gap until JPSS-2 is launched and operational in mid-2022. In April 2015, based on an updated analysis of its performance over time, NOAA decided to extend the expected life of the S-NPP satellite. Specifically, NOAA officials estimated that S-NPP would last as long as 9 years, up from its initial estimate of 5 years. Should S-NPP last for 9 years, it could alleviate a potential near-term gap. NOAA provided us with an assessment of the S-NPP satellite’s availability over time, and we have ongoing work analyzing the assessment. Figure 5 shows the original and extended estimates of the useful lives of the S-NPP and first two JPSS satellites. While NOAA’s changes in assumptions on how long S-NPP will last may lessen the likelihood of a near-term data gap, our ongoing work shows that the JPSS program continues to face key risks that could increase the possibility of a gap. Risks to the currently on-orbit satellite: The S-NPP satellite continues to experience isolated performance issues. For example, a mechanical component that facilitates the collection of sounding data on the S-NPP Advanced Technology Microwave Sounder instrument experienced higher-than-expected electrical currents in early 2015. While program officials believe that the issue has been addressed, the JPSS program is carrying it as a risk because it could affect the satellite’s useful life. There is also a risk that space debris could collide with S-NPP; this risk will not be factored into NOAA’s availability calculations until the agency’s 2015 analysis is complete. 
Risks to satellites in development: As discussed above, the JPSS program is currently dealing with technical issues on both the flight and ground components of the JPSS-1 satellite that have caused schedule delays and decreased the remaining margin to launch. In addition, NOAA switched to a new spacecraft contractor beginning with the JPSS-2 satellite. With a new contractor, it may be more difficult to apply lessons learned from issues in JPSS-1 development if similar issues arise on JPSS-2. Moving forward, NOAA also faces decisions on timing the development and launch of the remaining satellites in the JPSS program. The design life of the JPSS satellites is 7 years, and NOAA plans, beginning with JPSS-2, to launch a new satellite every 5 years in order to achieve a robust constellation of satellites. However, NOAA officials stated that they expect the satellites to last 10 years or more. If the satellites last that long, then there could be unnecessary redundancy. If they do not, then there is an increased potential for future gaps in polar satellite coverage, as there will be several periods in which only one satellite is on orbit. As with its geostationary program, evaluating the costs and benefits of different launch scenarios to ensure robust coverage while decreasing unnecessary costs will be important. In summary, we have made multiple recommendations to NOAA to improve management of the GOES-R and JPSS satellite programs and to address weaknesses in contingency plans in case of a gap in satellite coverage. NOAA has addressed about a quarter of our recommendations to date; it is important that the agency expedite its efforts to address the remaining ones in order to reduce existing risks and strengthen its programs. NOAA recently decided to delay the GOES-R satellite launch until October 2016 and to change its assumption for how long the currently operational satellites will last. Even with the new assumption that existing satellites will last longer, the risk remains that there will be a gap in backup satellite coverage that lasts for almost a year. The agency is now facing important decisions on how to achieve the new launch schedule and how to space out future satellites to ensure satellite coverage while minimizing costs. Regarding the JPSS program, NOAA continues to make progress developing and testing the JPSS-1 satellite as it moves toward a March 2017 launch date. Moreover, NOAA decided to extend its expectation for how long the current satellite will last. However, there is the potential for a coverage gap should the currently on-orbit satellite not last until the launch and calibration of the JPSS-1 satellite are completed. According to NOAA officials, it is also possible that JPSS-1 and -2 will last longer than anticipated. Moving forward, reconsidering development and launch calendars to ensure robust satellite coverage while decreasing unnecessary costs will be important. Chairmen Bridenstine and Loudermilk, Ranking Members Bonamici and Beyer, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you have any questions on matters discussed in this testimony, please contact David A. Powner at (202) 512-9286 or at pownerd@gao.gov. Other contributors include Colleen Phillips (assistant director), Christopher Businsky, Shaun Byrnes, Kara Lovett Epperson, Rebecca Eyler, and Torrey Hardee. 
| NOAA is procuring the next generation of polar and geostationary weather satellites to replace aging satellites that are approaching the end of their useful lives. GAO has reported that gaps in polar satellite coverage and in backup coverage for geostationary satellites are likely in the near future. Given the criticality of satellite data to weather forecasts, concerns that problems and delays on the new satellite programs will result in gaps in the continuity of critical satellite data, and the impact such gaps could have on the health and safety of the U.S. population, GAO added mitigating weather satellite gaps to its High-Risk List in 2013 and it remained on the list in 2015. GAO was asked to testify, among other things, on the cause and impact of a recent launch delay on the GOES-R program, and the status and key remaining challenges on the JPSS program. To do so, GAO relied on prior reports issued from 2012 to 2015 as well as on ongoing work on both programs. That work included analyzing progress reports and interviewing officials. The National Oceanic and Atmospheric Administration's (NOAA) $10.9 billion Geostationary Operational Environmental Satellite-R (GOES-R) program recently delayed the planned launch of the first satellite in the new series from March 2016 to October 2016. Based on its ongoing work, GAO found that the decision to delay the launch was due to poor schedule performance over the last few years (losing more than 10 days a month on average), recent technical issues with key components, and little schedule margin as the program entered integration testing. The October 2016 launch date may also be delayed if additional technical challenges arise or if schedule performance remains poor. NOAA recently changed assumptions about the expected lifespan of existing GOES satellites from 7 to 10 years based on the longevity of prior satellites. However, the analysis supporting this change is over 10 years old. Even with this extension, NOAA may fall short of its policy of having 2 operational satellites and 1 backup satellite in orbit. The agency faces an 11-month gap in backup coverage until GOES-R is operational, during which time there would be only 2 operational satellites (see figure). Any further delays in the GOES-R launch date could exacerbate that gap. NOAA is now facing important decisions on when to launch the remaining satellites in the GOES-R series to maximize satellite coverage while minimizing development and storage costs. Based on its ongoing work, GAO found that NOAA's $11.3 billion Joint Polar Satellite System (JPSS) program is making progress toward the planned launch of the JPSS-1 satellite in March 2017. However, the program has experienced technical issues that have affected internal schedule deadlines, such as an issue with debris in an instrument's subsystem that delayed its delivery by approximately 8 months, and faces key risks in the remainder of development. NOAA is also facing the risk of a potential near-term gap in polar data prior to the launch of the JPSS-1 satellite. 
As with its decision on the GOES satellites, in April 2015 NOAA revised its assumptions about the expected life of the satellite that is currently in orbit by adding up to 4 years, which would reduce the chance of a near-term gap. However, risks to the performance and health of the on-orbit satellite, and to the development of the JPSS-2 satellite, could increase the risk of a gap. Also, NOAA faces key decisions on timing the development and launch of the remaining JPSS satellites to ensure satellite continuity while balancing the possibility that satellites could last much longer than anticipated. GAO is not making any new recommendations in this statement, but—since 2012—has made 23 recommendations to NOAA to strengthen its satellite acquisition programs and contingency plans. The department agreed with GAO's recommendations and is taking steps to implement them. To date, NOAA has implemented 6 recommendations and is working to address the remaining 17. Timely implementation of these recommendations will help mitigate program risks. |
Young adulthood is a critical time in human development. During this period, individuals transition into roles that they maintain long into the future. This transition can involve completing school; securing full-time employment; becoming financially independent; establishing a residence; entering into a stable, long-term relationship; and becoming a parent. To successfully accomplish these things, young adults must develop good interpersonal skills, sound judgment, and a sense of personal responsibility and purpose. The transition from child to adult roles can be a challenging one, and evidence suggests that this period has become longer and more complex over the years. During the 1950s, young people often completed their education and secured employment, married, and became parents all in their early 20s. Since then, the economy has grown increasingly information-driven, while, adjusted for inflation, wages have declined and the cost of living has increased. Consequently, young adults require greater technical skills and education to support themselves and may alternate between living in an educational setting, living with their families, and living independently well into their adult years. As they transition to adulthood, some young people may experience a mental illness, which is generally defined as a health condition that changes a person’s thinking, feelings, or behavior and causes the person distress and difficulty in functioning. Some individuals develop their mental illness during childhood, while others, such as individuals with schizophrenia, typically experience the onset of symptoms as young adults. Although research shows that 50 percent of mental disorders begin by age 14, it can take several years for the illness to be detected and appropriately treated. Early detection and treatment of mental disorders can result in a substantially shorter and less disabling course of mental illness. The symptoms associated with a given type of mental illness can vary in frequency and severity across individuals and for each individual over time. Mental illnesses with particularly severe symptoms can have a dramatic impact on an individual’s ability to function in everyday life. The fatigue experienced by an individual with major depressive disorder can be so severe that it is difficult to summon the energy to work every day. The delusions associated with paranoid schizophrenia can make it impossible to maintain stable personal relationships with spouses, co-workers, or friends. Certain other mental illnesses are known for the unpredictable and episodic nature of their symptoms and the harmful effect this has on the ability to function consistently over time. For example, individuals with bipolar disorder can alternate between periods of mania, relative normalcy, and profound depression. For a young adult, such unpredictable mood swings can stymie progress in securing and maintaining a job or beginning and sustaining a long-term relationship. Individuals with mental illnesses who have particularly severe symptoms can qualify for certain federal supports. The Alcohol, Drug Abuse, and Mental Health Administration Reorganization Act of 1992 uses the term serious mental illness to identify individuals whom states are allowed to treat using federal dollars from the Community Mental Health Block Grant program. 
In response to the requirements of this Act that HHS develop a definition, SAMHSA defined “adults with a serious mental illness” as those who are: “age 18 and over, who currently or at any time during the past year, have had a diagnosable mental, behavioral, or emotional disorder of sufficient duration to meet diagnostic criteria specified within DSM-III-R, that has resulted in functional impairment which substantially interferes with or limits one or more major life activities.” These major life activities can be basic living skills, such as eating or bathing; instrumental living skills, such as maintaining a household; or functioning in social, family, or vocational/educational contexts. States are free to establish more restrictive eligibility guidelines for their treatment population in several ways: by narrowing the list of qualifying diagnoses, specifying the length of time the individual must have had symptoms, or requiring that individuals function below a certain level. As a result, the criteria used by states to qualify adults for public mental health services can vary. Some individuals with a serious mental illness may be unable to work because of their impairments. These individuals aged 18 or older may qualify for Supplemental Security Income (SSI) or Disability Insurance (DI) provided by SSA if they can demonstrate that their mental illness results in the inability to engage in any kind of substantial gainful activity and has lasted or can be expected to last at least 12 months. DI pays benefits related to prior earnings to those with a work history sufficient to obtain insured status. Children can receive DI benefits based on their parent’s work history. SSI provides cash benefits for those who have limited or no work history and whose income and assets fall below certain levels. Children can receive SSI benefits if they have a qualifying disability themselves. Individuals may receive concurrent payments from both DI and SSI if their work history qualified them to receive DI payments but their income and assets—including the amount of their DI payments—were sufficiently low that they also qualified to receive SSI payments. In fiscal year 2006, approximately 8.6 million individuals received DI payments and 6.9 million received SSI payments, for a total of $126.4 billion in benefits paid out over the course of the year. Many people with serious mental illness receive treatment through the public mental health system, which serves as a safety net for those who are poor or uninsured or whose private insurance benefits run out in the course of their illness. State mental health departments have primary responsibility for administering the public mental health system. In doing so, they serve multiple roles, including purchaser, regulator, manager, and, at times, provider of mental health services. Services are delivered by state-operated or county-operated facilities, nonprofit organizations, and other private providers. The sources and amounts of the public funds that mental health departments administer vary from state to state but usually include state general revenues and funds from Medicaid and other federal programs. The services provided by the public mental health system to individuals with serious mental illness have changed over time. Historically, state-run public mental health hospitals were the principal treatment option available to them. 
By the 1960s, the reliance on inpatient care was viewed as ineffective and inadequate because of patient overcrowding, staff shortages, and other factors. At the same time, improved medications were reducing some of the symptoms of mental illness and increasing the potential for more individuals to live successfully in the community. A recovery-oriented, community-based approach to mental health treatment has since emerged. Under this approach, individuals are to receive services and supports uniquely designed to help them manage their mental illness and to maximize their potential to live independently in the community. These services and supports are to be multidimensional—intended to address not only mental illness but also employment, housing, and other issues. When feasible, these multidimensional services are provided in what is referred to as a "wraparound" manner—that is, they are uniquely targeted to the nature and extent of each individual's needs. When services are provided by multiple agencies, those agencies are to coordinate their activities and funding so that the individual experiences the services and supports seamlessly—as if from one system, not many. Services and supports relevant to young adults with serious mental illness that are funded or provided by federal programs include mental health treatment, education and employment assistance, housing, and income support. In all, the Judge David L. Bazelon Center for Mental Health Law (Bazelon) identified 57 relevant programs in 2005. (See app. II.) These programs are administered by a variety of agencies, including DOJ, HHS, Education, HUD, Labor, SSA, and the U.S. Department of Agriculture (USDA). The federal government funds mental health services that are provided by programs administered by state agencies. Table 1 lists five examples of such programs. In particular, Medicaid and the Community Mental Health Services Block Grant are major sources of federal funding for mental health services for young adults with serious mental illness. Medicaid is a health insurance program for certain groups of low-income individuals, including elderly and disabled individuals and children. Funded jointly by the federal government and the states, and administered federally by the Centers for Medicare & Medicaid Services (CMS), Medicaid is the primary federal payer for public mental health services provided by states. In order to receive federal Medicaid funding, states are required to provide certain broad categories of services, such as inpatient and outpatient hospital services and physician care. Reflecting their medical focus, Medicaid mental health services have traditionally been provided by physicians, including psychiatrists, who work at hospitals, clinics, and other institutions. While Medicaid will cover services provided to individuals in facilities with 16 or fewer beds, the program specifically excludes coverage for adults aged 22 through 64 in large state-run psychiatric institutions. States may choose to provide certain optional categories of services. For example, states may use the Medicaid "rehabilitation option" to cover a broad range of services related to rehabilitation from a mental illness or other condition or disability. States may also participate in certain Medicaid demonstration programs that allow them greater flexibility in the services they choose to cover. Medicaid spending by the federal government and the states totaled $317 billion in 2006.
To supplement the Medicaid program, CMS administers several smaller grant programs that states can use to fund improvements to their mental health systems. For example, CMS established Medicaid Infrastructure Grants to support state efforts to enhance employment options for people with serious mental illness and other disabilities. States may use these grants to plan and manage improvements to Medicaid eligibility determination and service delivery systems or to improve coordination between the state Medicaid program and employment-related service agencies. Nearly $43 million was available to states under this grant program in fiscal year 2008. CMS also administers Real Choice Systems Change Grants to help states and others build the infrastructure that will result in improvements in community-integrated services and long-term care supports for individuals with long-term illnesses and disabilities, such as serious mental illness. The goal of the program is to help these individuals live in the most integrated community setting suited to their needs, have meaningful choices about their living arrangements, and exercise more control over their services. Nearly $14 million was awarded to states under this grant program in fiscal year 2007. Through the Community Mental Health Services Block Grant program and other federal grant programs, SAMHSA funds mental health services that can be used by states to assist young adults with serious mental illness. The block grants are allocated to states according to a statutory formula that takes into account each state's taxable resources, personal income, population, and service costs. To receive the funding, states are required by SAMHSA to report annually on the mental health services they provide, including demographic information on the number of individuals treated by the state's mental health system. In addition, states are required to maintain statewide planning councils that include consumers, family members, and mental health providers to oversee the mental health system. In fiscal year 2007, SAMHSA provided $401 million in block grants to states. According to a SAMHSA official, this funding accounted for, on average, between 1 and 2 percent of each state's spending on community-based mental health services. SAMHSA also administers smaller targeted grants to support state mental health services and initiatives. As part of its activities related to the block grant program, SAMHSA promotes specific practices—known as evidence-based practices—in mental health treatment. SAMHSA considers a practice evidence-based if it has been validated by research, such as clinical trials with experimental designs, and if it reflects expert opinion. On its Web site, SAMHSA provides toolkits for five types of evidence-based practices that states can use to design their programs. These five practices are Illness Management and Recovery, Assertive Community Treatment, Supported Employment, Family Psychoeducation, and Co-Occurring Disorders: Integrated Dual Disorders Treatment. (See app. III for details about these practices.) SAMHSA is also promoting research on evidence-based practices in a number of other areas, including supported education, and plans to provide toolkits or other informational materials for these as well. As a condition of receiving Community Mental Health Services Block Grant funds, states are required by SAMHSA to report on whether they are using the evidence-based practices.
In addition, states can use Medicaid funds to pay for certain services associated with the use of evidence-based practices. Other federal programs fund educational and employment-related supports through states, localities, or other groups to individuals with a mental health disability. (See table 2.) Through special education programs funded in part with federal dollars, students through age 21 with emotional disturbances and students with other disabilities with behavioral and emotional components can receive an individually tailored program of specialized instruction and support services set out in an individualized education program (IEP). On the basis of decisions of the student's IEP team, students can receive such services as psychological services, counseling and social work services, and job coaching (as part of services supporting the transition of a student to post-school activities). Another example of a program that provides educational and employment-related supports is Labor's WIA Youth Activities program, which funds efforts related to workforce training, education attainment, community involvement, and leadership development for low-income individuals aged 14 to 21 who have difficulty completing their education or securing or maintaining employment. Once they are determined to be WIA eligible, youth receive an assessment of their academic level, skills, and service needs. Local youth programs then use the assessment to create individualized service strategies, which lay out employment goals, educational objectives, and necessary services. In 2006, Labor received approximately $940 million in appropriations for WIA youth-related activities. Education's Rehabilitation Services Administration provides grants to assist state vocational rehabilitation agencies in providing employment-related services for individuals with disabilities, including individuals with serious mental illness. Vocational rehabilitation agencies assist individuals in pursuing gainful employment commensurate with their abilities and capabilities. Money for vocational rehabilitation is allotted to states and territories according to a formula, and over $2.8 billion was appropriated to states in 2007. Finally, current and former foster care youth can receive services up to the age of 21 through the Chafee Foster Care Independence program. This program funds independent living, education, and training and gives states the flexibility to extend Medicaid coverage for former foster care youth up to age 21. Federal funding associated with these activities totaled $140 million in 2006. However, we have found that there are critical gaps in mental health and housing services for foster youth and that states were serving less than half of their eligible foster care population through their programs. Other programs provide housing supports. (See table 3.) These programs range in scope from those targeting low-income people generally to those targeting specific vulnerable groups, such as the disabled or the homeless. We estimate that at least 2.4 million young adults had a serious mental illness in 2006. This estimate is likely to be low because it is based on a survey that did not include individuals who were homeless, institutionalized, or incarcerated—populations that likely suffer high rates of mental illness.
Most young adults with serious mental illness suffer from multiple disorders, and relative to young adults with no mental illness, they have significantly lower rates of high school graduation and postsecondary education. Our analysis also found that about 186,000 young adults received disability benefits from SSA in 2006 because their mental illness was so severe that they were found to be unable to engage in substantial gainful activity. Finally, although we were unable to identify the number of young adults with serious mental illness who were homeless or involved in the justice or foster care systems, research suggests that these groups have high rates of mental illness overall. According to our analysis of the NCS-R, an estimated 2.4 million young adults aged 18 through 26 had a serious mental illness in 2006—approximately 6.5 percent of the estimated 37 million young adults living in U.S. households. We estimate that another 9.3 million—25.3 percent—had a moderate or mild mental illness, and that overall, nearly one in three young adults experienced some degree of mental illness in 2006. (See fig. 1.) Because of limitations in the populations surveyed by the NCS-R, our estimated prevalence of serious mental illness among young adults in 2006 is likely to be low. Because only individuals living in households and campus housing were included in the sample population, individuals who were institutionalized, incarcerated, or homeless are not included in NCS-R data. Research has shown that young adults in these populations may have significant rates of serious mental illness. The NCS-R may also underrepresent the prevalence of serious mental illness because some individuals may not have reported what they believe will be viewed as socially unacceptable behaviors or may have chosen not to participate in the survey at all. Finally, the NCS-R does not attempt to measure the prevalence of schizophrenia and other nonaffective psychotic disorders, and for this reason, may only represent a subset of those who would be considered by SAMHSA to meet the criteria for having a serious mental illness. Our analysis of the NCS-R indicates that certain disorders were most common among the young adult population aged 18 through 26 with serious mental illness. Specifically, we found that six mental disorders each affected more than 25 percent of young adults with serious mental illness. The most prevalent of these was intermittent explosive disorder, and the other five were major depressive disorder, specific phobia, bipolar disorder, alcohol abuse, and social phobia. (See table 4.) We also found that nearly all young adults with serious mental illness were diagnosed with more than one mental disorder. Specifically, 89 percent had two or more diagnoses and 56 percent had four or more. For example, 20 percent of individuals with the most common diagnosis, intermittent explosive disorder, were also diagnosed with bipolar disorder, while 39 percent were also diagnosed with alcohol abuse. Results of the survey also suggest that about 32 percent of young adults with a serious mental illness had a co-occurring diagnosis of alcohol or drug abuse or dependence along with at least one other mental disorder. Young adults with serious mental illness had significantly lower rates of high school graduation than other young adults, according to our analysis of demographic information in the NCS-R.
Specifically, the percentage of young adult respondents with serious mental illness who graduated from high school was significantly lower than the percentage of those with moderate, mild, or no mental illness. Additionally, the percentage of young adult respondents with serious mental illness who continued their education after high school was also significantly lower than the percentage of those with moderate, mild, or no mental illness. (See fig. 2.) Young adults with serious mental illness also had lower rates of employment than other young adults, although the differences were not statistically significant, according to our analysis of the NCS-R. Specifically, 63 percent of young adults with serious mental illness reported they were currently employed, versus 68 percent of those with a mild or moderate mental illness and 71 percent of those with no mental illness. Results of other studies, however, suggest that unemployment is a common problem for young adults with serious mental illness. For example, an analysis of the 1994-95 National Health Interview Survey on Disability found an employment rate of 34 percent among working-age adults with mental health disabilities, versus 79 percent among adults with no disability. In addition, the President's New Freedom Commission on Mental Health stated in its 2003 report that only one in three persons with a disability resulting from mental illness is employed. (See app. IV for more detailed demographics of young adults with serious mental illness compared to those with moderate, mild, or no mental illness.) In 2006, about 186,000 young adults had a mental illness that was severe enough that they received disability payments from SSI, DI, or both, meaning that they were found to be unable to engage in substantial gainful activity because of their illness, according to our analysis of the TRF. The 186,000 individuals who received benefits in 2006 represented just under a quarter of all young adults who received SSI or DI that year and did not include individuals who received benefits because of abnormalities in cognition or intellectual functioning, such as mental retardation or autism. Of these young adults, about 67 percent received payments through SSI only, nearly 9 percent received payments from DI only, and 24 percent received concurrent payments from both programs. Among those receiving SSI payments, nearly 60 percent first became eligible before the age of 18. The mental illnesses that were most common among young adults receiving payments from SSI, DI, or both for serious mental illness include schizophrenic, paranoid, and other functional psychotic disorders and affective mood disorders, such as depression or bipolar disorder. (See table 5.) These young adults receiving SSI, DI, or both scored lower on certain socioeconomic indicators than the general population of those with serious mental illness. Specifically, when we compared SSI and DI recipients with serious mental illness with their counterparts from the NCS-R, we found that the SSI and DI recipients had lower rates of high school graduation and employment. (See table 6.) In addition, while 59 percent of those receiving disability payments reported having ever worked, only 15 percent reported being currently employed. This compares with an estimated 63 percent rate of employment for those in the NCS-R. Finally, we found that SSI/DI recipients also had a lower average annual household income than all young adults with serious mental illness represented in the NCS-R. (See app. V for more detailed demographic analysis of young adults enrolled in SSI and DI due to serious mental illness.)
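The program shares reported above are simple proportions of the roughly 186,000 recipients. As a minimal illustration of that arithmetic, the following Python sketch reproduces the rounded shares; the record counts used here are hypothetical values chosen only to be consistent with the reported percentages, not actual SSA or TRF data.

    # Minimal sketch: hypothetical counts chosen to match the rounded shares
    # reported above; these are not actual SSA or TRF record counts.
    recipients = {
        "SSI only": 125_000,
        "DI only": 16_000,
        "Concurrent SSI and DI": 45_000,
    }
    total = sum(recipients.values())  # 186,000 young adults in 2006
    for program, count in recipients.items():
        print(f"{program}: {count / total:.0%}")
    # Prints about 67%, 9%, and 24%, respectively.

The same proportion logic underlies the comparison of recipients with their NCS-R counterparts; only the numerators and denominators differ.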
The number of young adults whose mental illness is severe enough to qualify for SSI or DI is likely to be higher than the 186,000 who were receiving disability payments in 2006 for two reasons. First, some individuals who suffer from a serious mental illness may not apply for SSI or DI or may not complete the application process. The process of proving eligibility requires the submission of medical records to document the medical nature of the mental illness, probable duration of the symptoms, and the degree of impairment the illness imposes, as well as proof of income for SSI eligibility—a process that might prove too difficult for those with a serious mental illness. The second reason the 186,000 might not represent all who could qualify for disability benefits is that, according to SSA officials, some individuals who have a serious mental illness may be receiving benefits because of another disability, such as mental retardation, or a physical disability. Our analysis of SSA administrative data found that about 100,000 young adults whose primary disability was not a serious mental illness had a secondary diagnosis of a mental illness, which may have been severe enough, by itself, to qualify the individual for disability benefits. These individuals were therefore not included in our count of 186,000. We were not able to estimate the number of young adults in certain vulnerable populations who have a serious mental illness, although the available research suggests that rates of mental illness are high in these groups. These vulnerable populations include young adults transitioning out of the foster care system—who may have limited family support for their struggle with serious mental illness—and young adults who become homeless or incarcerated. The NCS-R does not include individuals who are homeless or incarcerated, and although individuals in foster care are included, they are not specifically identified as such in the data. Additionally, a review of the literature on homelessness and the justice and foster care systems yielded no studies that produced national estimates of the number of such young adults in those groups. Studies that examine mental illness in those groups either do not yield estimates specific to young adults or do not measure serious mental illness in a consistent way that can be compared across groups. Although the prevalence of serious mental illness has not been studied in these young adult populations nationally, available research suggests that their rates of mental illness may be high. With respect to young adults in foster care, a national survey that included 464 individuals aged 12 to 17 who had been placed in foster care found that they were about four times more likely to have attempted suicide in the preceding year compared with those never placed in foster care. In addition, they were about three times more likely to have experienced significant anxiety and mood symptoms, such as depression or mania. Research also indicates that mental health problems among foster care children may persist into adulthood. For example, the Northwest Foster Care Alumni Study, which assessed 659 adults aged 20 through 33 in Oregon and Washington who had been in foster care as children, found that over half had experienced symptoms of one or more mental disorders in the previous year, and 20 percent had symptoms of three or more mental disorders.
The study compared these results with results from the NCS-R for adults in the same age range, which found that only 3 percent of adults in that age range had symptoms of three or more disorders in the previous year. Studies also suggest high rates of mental illness among young adults who are homeless. For example, an Urban Institute study based on the National Survey of Homeless Assistance Providers and Clients estimated that 46 percent of homeless individuals aged 20 through 24 had experienced a mental health problem in the prior year. Another study of 432 homeless young people in Los Angeles found that 63 percent of those aged 19 through 24 currently had depressive symptoms and 38 percent had attempted suicide at some point in their lives. Finally, studies have found that young adults involved in the criminal justice system have high rates of mental illness. According to two national surveys conducted by the Bureau of Justice Statistics, 62.6 percent of young adults aged 24 or younger in state prisons had a mental health problem in 2004, and 70.3 percent of those in local jails had a mental health problem in 2002. Further, a multistate survey funded by DOJ's Office of Juvenile Justice and Delinquency Prevention found that about 70 percent of youth involved with the juvenile justice system had at least one mental health disorder, and 27 percent had a severe mental health disorder in 2006. According to researchers, public officials, and advocates, young adults with serious mental illness can have difficulty finding services tailored to their needs, qualifying for adult programs, and navigating multiple programs and delivery systems. While these young adults need a range of support services, the existing public mental health, employment, and housing programs are not necessarily tailored to their disability or their stage of life, which may lead them to forgo services entirely. Further, young adults who received free or low-cost mental health services as children generally face different, and sometimes more stringent, eligibility requirements as adults. Finally, federal officials and researchers have recognized the difficulties this group and their families have in navigating the broad array of programs that can help meet their needs. Although appropriate mental health services are a key to achieving independence, researchers and officials told us that these services are often not tailored to the age-related needs of the young adult population. We have previously reported that directors of programs serving youth aged 14 through 24 have difficulty finding adequate age-appropriate mental health services for their clients. A national expert has noted that adult mental health service providers in one state, for example, were generally not trained in adolescent development and so were unprepared to treat young adults with serious mental illness, who tend to be relatively psychosocially immature. Officials in three of the states we visited similarly reported a need for better training among mental health providers in issues related to young adults. Other researchers have noted that group therapy should involve members in the same age range, given that young adults' self-esteem can depend significantly on peer acceptance. However, young adults are often referred to group-oriented treatment that may include mainly older adults who do not share their transition-age issues and is therefore often inappropriate for them, according to mental health advocates with whom we spoke.
While young adults with serious mental illness can benefit from a variety of employment programs, these programs are also not necessarily tailored to the particular needs of this population. For example, state officials in three of the four states we visited told us that WIA Youth centers in those states often lack the expertise to help young adults with serious mental illness find appropriate employment because these centers generally do not have the capacity to provide the intensive and customized support these individuals need. Labor officials told us that, as a result, WIA staff often refer youth they believe have mental illness to vocational rehabilitation programs. However, according to federal officials, vocational rehabilitation programs have traditionally been used by those with physical disabilities and are also not always designed to meet mental health needs. Advocates working with young adults in most states we visited likewise noted that the vocational rehabilitation services available to the youth they work with have not been responsive to their mental health-related issues. Similarly, officials in one state noted that service providers for students with disabilities at colleges and universities often lack the expertise and training to support students with serious mental illness. Finally, while researchers noted that young adults with serious mental illness experience difficulty living independently and in some cases finding housing, officials in all of the states we visited cited the inability to find appropriate housing as a key problem for this population. Specifically, they noted that there are not enough permanent housing options that are targeted to this age group. Supportive housing—which often includes a comprehensive set of supports such as job training and mental health services—is recommended by SAMHSA as a resource for those with mental illness. However, officials in all states we visited said there was a lack of such housing available. Additionally, where supportive housing is available, it is not necessarily geared to young adults. For example, HUD officials reported that the median age of the head of households receiving HUD supportive housing is 47. One service provider explained that if housing options are not geared to their age group, young adults with serious mental illness may end up homeless. Researchers and advocates have noted that if services are not suited to their age or disability, young adults with serious mental illness may choose not to participate. Young adults are particularly sensitive to the stigma associated with receiving treatment for their symptoms, and SAMHSA has reported that they have the lowest "help-seeking behavior" of any age group. Furthermore, as they age into the adult mental health system, their parents are generally no longer responsible for their mental health treatment, and these young adults then have the option to decline treatment. In Massachusetts, for example, a state official found that, in one locality, more than half of the young adults who had received mental health services as children chose not to receive them as adults. She attributed this to a lack of services geared to their disability and age. Researchers and state officials we spoke with noted that young adults who once received public mental health services, Medicaid, or SSI or DI as children face different and sometimes more restrictive eligibility requirements for these programs as adults.
They added that ineligibility for these adult programs can end established relationships with mental health professionals or otherwise disrupt receipt of mental health services. Qualifying for free or low-cost mental health services is often more difficult for adults than for children. The National Conference of State Legislatures found that, among state programs, the clinical criteria for receiving public mental health services are generally narrower for adults than for children. Another study found that in 2001 adult and child mental health policies differed in 34 states, and in 31 of those states, the range of qualifying disorders was more limited for adults than for children. Specifically, in half of the states with different criteria, the adult requirements were more restrictive because they cited fewer specific diagnoses to qualify. Similarly, in most states children can qualify for Medicaid—the major federal funder of public mental health services—at higher household income levels than adults, who must also meet other categorical eligibility criteria. In these states, a young person previously covered by Medicaid as a child who becomes an adult risks losing access to the mental health treatment and psychiatric rehabilitation services covered by the program. Advocates, state officials, and researchers all cited the loss of Medicaid benefits because of different eligibility requirements between children and adults as a challenge for young adults with serious mental illness. When youth receiving SSI are reevaluated under adult SSI rules within 1 year of turning 18, as required by law, they face different eligibility criteria that can result in a cessation of payments. This can also lead to a loss of Medicaid eligibility. SSA officials told us that 25 percent of youth who received SSI because of a mental health-related condition do not qualify for SSI once they turn 18. Advocates working with these young adults and their families in the states we visited cited this loss of SSI benefits as a key concern for young adults with serious mental illness in their transition to adulthood. SSA officials indicated that the loss of benefits for these young adults resulted partly from the fact that certain mental disorders considered disabling for children are not applicable to adults. For example, the adult disorders that qualify for SSI do not include eating disorders or attention deficit hyperactivity disorder. However, SSA officials told us that they are currently working on revising some of the criteria for mental health impairments so that the criteria for children and adults are more closely aligned. Because young adults with serious mental illness usually have a number of needs requiring multiple supports, they can find it difficult to receive all the services they need when programs are administered by different agencies with varying eligibility requirements. Given their multiple needs, a coordinated set of benefits is important for a successful transition to adulthood. Labor recently reported, however, that there is no single system that guides youth, in general, through the process of becoming productive, self-sufficient adults and that existing services for them are uncoordinated. Bazelon has similarly found that the programs serving young adults with mental illness have varying age and income requirements and may use different definitions of mental illness, which can make it difficult to obtain multiple services.
In addition, according to state officials in the states we visited, program staff may not collaborate with or notify one another of the service plans they develop for clients. For example, the director of a mental health advocacy organization in Massachusetts told us that when a young person has a serious mental illness and the secondary school is involved, the staff at the school will typically not speak with the young person's doctor or Medicaid provider in order to coordinate behavior plans or more fully understand the particular mental illness. Navigating the varying eligibility requirements and service plans of multiple programs across a number of delivery systems can be difficult for anyone, but young adults with serious mental illness may have fewer interpersonal and emotional resources with which to do so. Mental health advocates told us that because young adults with serious mental illness tend to be involved in different service delivery systems, their parents or other caring adults must often operate as their de facto case workers, attempting to organize and coordinate various services. However, researchers and family advocates have also found that a major challenge for these parents and caring adults is their need for information related to the availability of supports. For example, one researcher found that families wanted information related to the young adults' condition and treatment, available community resources, and supports for caregivers and that they generally reported feeling overwhelmed by the complexity of the system of agencies and organizations. Recognizing the challenges faced by young adults with serious mental illness, the four states we visited—Connecticut, Maryland, Massachusetts, and Mississippi—have designed programs with multidimensional services to help them transition into adulthood. States have used various strategies to provide these services. They include broadening eligibility criteria for mental health services, employing some of the evidence-based practices promoted by SAMHSA, coordinating efforts across multiple state agencies, leveraging federal and state funding sources, and involving consumers and family members in developing policies and aligning services. The four states we selected for review have developed programs that provide multidimensional services to young adults with serious mental illness. Administered by their respective mental health agencies, these programs are implemented at the local level generally by mental health authorities, nonprofit organizations, and community-based mental health providers. In addition to health care services, the programs provide a range of services intended to be age and developmentally appropriate, including vocational rehabilitation, employment, life-skills development, and, in some cases, housing. These four states try to tailor these services so that, to the extent possible, young adults receive services appropriate for each individual's transition needs. They also try to integrate the services so that young adults do not have to navigate multiple discrete programs. Tailoring and integrating services are both central tenets of the wraparound approach. In Connecticut, the young adult program initially focused on individuals referred from the Department of Children and Families but has since evolved to focus on a broader group of young adults with serious mental illness. This focus is similar to that of the other states' programs.
The young adult programs administered by these four states vary in the number of young adults with serious mental illness that they serve and have not yet been systematically evaluated for their effectiveness. For example, in state fiscal year 2007, Connecticut's specialized program for young adults with serious mental illness aged 18 through 25 served 716 individuals, or about 27 percent of the 2,615 young adults with serious mental illness receiving mental health services from the state mental health agency. State officials explained that not every young adult needs the kinds of intensive services provided under the state's specialized program for young adults but added that many more young adults could benefit from the program than are currently being served. In 2007, Massachusetts's young adult program served all of the approximately 2,600 young adults aged 16 to 25 with serious mental illness in the state's mental health system, providing one or more services, including case management, housing, employment, education, and peer mentoring. A smaller number received a variety of other mental health and social services. Although most of the states' young adult programs have existed for more than 5 years, none of the states has systematically collected data on outcomes to evaluate the effectiveness of their programs. State officials said that their budget resources are limited and that they have focused on providing services. (See app. VII for a description of the four state programs.) In the four states we visited, state officials described a variety of strategies they have used to provide multidimensional services to young adults with serious mental illness. The strategies include broadening eligibility criteria for mental health services, employing evidence-based practices promoted by SAMHSA, coordinating efforts across multiple state agencies, leveraging federal and state funding sources, and involving consumers and family members in developing policies and aligning services. Maryland has chosen to broaden eligibility criteria for mental health services for young adults beyond the medical necessity criteria established for adults with serious mental illness. Specifically, Maryland generally limits its comprehensive adult mental health services to individuals with certain diagnoses and functional limitations, but state officials have approved eligibility for young adults who do not meet all the criteria. Maryland officials told us they aim to identify and treat individuals so that they can become meaningful community participants rather than becoming dependent on the service system. They said that state services target young adults who are in or at risk of out-of-home placement, such as in residential treatment centers. Many of these young adults have histories of severe trauma, have limited community living skills, and have increased psychotic symptoms. Another strategy is to deliver multidimensional services using evidence-based practices promoted by SAMHSA. Although these evidence-based practices have not been empirically tested specifically on the young adult population, the states we visited are using some of them. Some of these practices involve bringing integrated mental health and social services to young adults living in the community rather than expecting them to navigate multiple discrete programs on their own.
For example, Massachusetts and Connecticut have used the Assertive Community Treatment model, which employs an interdisciplinary team of psychiatrists, social workers, and nurses to provide psychiatric, rehabilitation, and other support services in the community 24 hours per day. In this model, team members collaborate to tailor services on an individual basis, taking into account cultural diversity. Assertive Community Treatment services are designed for individuals who have the most serious symptoms of mental illness and the greatest impairment in functioning. They often come to the program in crisis or upon release from inpatient psychiatric care. In Massachusetts, Assertive Community Treatment services are available in various locations throughout the state, including three sites in the Southeastern Area that specifically target these services to young adults. Connecticut uses this treatment model in some of its young adult program sites, often to serve those leaving foster care and the juvenile justice system. Connecticut, Maryland, and Massachusetts provide another evidence-based practice—supported employment—to assist young adults with serious mental illness. Based on the principle that work is therapeutic, supported employment programs are designed to help individuals work in competitive jobs in the community while receiving mental health treatment and rehabilitation services. These programs focus on rapid job placement in competitive employment. Once the individual is working, the program provides supports to retain employment. In Maryland, for example, the state mental health agency and the state vocational rehabilitation agency approved 30 evidence-based supported employment programs available to young adults with serious mental illness, although these are not uniformly distributed across the state. According to state officials, these programs help individuals find and maintain meaningful jobs that are consistent with the individual's preferences and abilities. In addition, Connecticut has been providing a type of support that SAMHSA is beginning to explore as a potential evidence-based practice—supported education for young adults with serious mental illness who enroll in higher education. The Connecticut mental health agency provides funding for a supported education counselor at one of the state universities, who provides case management services, acts as a liaison between the university's disability office and the student with mental illness, and helps students work with relevant university staff to get appropriate accommodations for their mental illness in the classroom or during exams. This counselor also serves as an information resource for the student's parents, university faculty, and personnel who work with the young adult, as well as for local mental health authorities and other key persons in the mental health system across the state. Agencies in the states we visited are also coordinating to develop policy and provide multidimensional services. Agencies coordinate client referral, eligibility determination, and service delivery. These coordination efforts help address eligibility gaps between the child and adult mental health systems and ease service delivery so that young adults do not have to navigate multiple discrete programs.
Formal Referral Process across Agencies: This strategy can provide a bridge for individuals who were receiving services and supports from one agency as children but must transition to another agency in order to continue to receive those services and supports as adults. In Connecticut, many young adults are formally referred to the Connecticut mental health agency by the state agency responsible for foster care, juvenile justice, and youth mental health services. A cooperative agreement between the two agencies specifies appropriate candidates for the state mental health agency's young adult program, the process for providing services to them by both agencies during the transition period, and the agencies' respective funding responsibilities. Transitioning youth are referred as early as possible, generally at age 16, to allow state mental health agency officials to develop appropriate plans. These referrals are made on a monthly basis. Integrated Eligibility Determination and Service Delivery: Maryland's mental health agency has a formal arrangement with the state's vocational rehabilitation agency to integrate eligibility determination and service delivery processes. Under a cooperative agreement signed by the two agencies in 2007, individuals determined eligible by the mental health agency are also determined eligible by the vocational rehabilitation agency for supported employment services. The two agencies have automated their eligibility determination processes so that they occur simultaneously. Once approved for services, individuals receive assistance finding and keeping a job and managing their mental illness in the workplace. Services are provided by not-for-profit supported employment programs that hire employment support specialists, according to a state mental health official. Use of Statewide and Local Interagency Task Forces: In 2003, Mississippi's mental health agency created an interagency Transitional Services Task Force to develop policies and identify resources appropriate for young adults with serious mental illness aged 14 through 25. The task force monitors the implementation of the state's young adult program at its two current sites and hopes to eventually present the results of an evaluation to justify expansion of the program statewide. At the local level, Mississippi established Multidisciplinary Assessment and Planning Teams, composed of local officials from various state agencies and advocates, which meet to review cases involving individuals aged 14 to 21 transitioning from the child to the adult mental health system, as well as other young adults considered high risk. Currently operating in 33 of the state's 82 counties, the teams coordinate the delivery of various services, including mental health, education, vocational rehabilitation, and health care services. They also have some flexible funds for providing additional multidimensional services, such as housing, tutoring, school uniforms, and in-home respite care. Another strategy is to leverage federal and state funds to finance programs for young adults with serious mental illness. The four states we visited use Medicaid to pay for mental health services approved by CMS in the states' Medicaid plans, such as those provided in a physician's office, at an outpatient clinic, or through a rehabilitation program in the community. To varying extents, three of the four states—Maryland, Massachusetts, and Mississippi—use Medicaid's rehabilitation option to pay for additional services that can support a young adult's recovery from mental illness.
These services, which are provided to address daily problems related to community living and interpersonal relationships, may include psychiatric rehabilitation program services, symptom management, and counseling. Further, some of these states have used certain CMS grants to help cover some expenses of their young adult programs. For example, Mississippi targeted the Real Choice Systems Change grant that it received from CMS in 2001 to develop a "person-centered planning" approach for delivering services to young adults with serious mental illness. The grant concluded in 2004, but the state is using its own funds to provide these services in two of its local mental health centers and to provide training related to this approach. In addition, all four states we visited use their own funds to pay for mental health and other services for individuals in their young adult programs who are not eligible for Medicaid or who are Medicaid-eligible but receive services not covered under Medicaid. Examples of such services include housing and transportation costs. In addition, the states we reviewed used funds from other federal programs to provide various transition services to eligible youth through their young adult programs. In the case of Maryland, this involves "braided funding" for supported employment services. Braided funding refers to the integration of funding streams from multiple agencies so that the individual receiving services experiences a seamless array of services. For example, various components of supported employment services are funded by Maryland's mental health agency, Maryland's vocational rehabilitation agency, and Medicaid. Maryland's mental health agency and vocational rehabilitation agency have a cooperative agreement that outlines the funding components. In addition, Maryland requires individuals in its public mental health system, including young adults, to apply for SSI or any other applicable public benefit in order to receive income assistance (to pay for housing and insurance) and to pay for services, according to a state mental health official. In developing its young adult program, Maryland also uses part of its CMS Medicaid Infrastructure Grant to consult with experts on funding strategies and to implement the Web-based mental health and vocational rehabilitation eligibility system. In addition to federal funds leveraged at the state level, some local and state agencies obtain services for their clients from other federally funded programs. Officials from one service provider in Massachusetts told us that their organization works with state housing authorities to secure vouchers through HUD's Section 8 Rental Voucher Program for adults who were previously homeless. When we conducted our site visit, the provider was using 10 such vouchers to serve 20 to 30 young adults. State officials said that this was an important initiative by this provider because states find it particularly difficult to obtain appropriate housing for young adults with serious mental illness who have criminal records. In Maryland, although the state mental health agency does not work directly with the state WIA office, a local provider in its young adult program works with local WIA offices in two counties to coordinate employment services for young adults with serious mental illness.
This provider stations case managers at these counties' WIA One-Stop Centers to help young adults with serious mental illness with tasks such as identifying job opportunities or scheduling interviews. Another strategy is to involve young adults and family members in developing policy and delivering and evaluating services. The Massachusetts mental health agency established a statewide Youth Development Committee in 2002 to focus on individuals aged 16 through 25 with serious mental illness. Committee membership includes young adults, parents, state child and adult mental health agency representatives, transition experts, and other professionals. Co-chaired by young adults with serious mental illness, the committee has engaged in a strategic planning process and meets every month to discuss progress in the field. The committee has young adult representatives from all areas of the state, and these representatives report on progress related to supported employment, housing, and transition-age youth case management in their areas. They also discuss Massachusetts's implementation of the Transition to Independence Process (TIP) system and identify emerging staff training needs associated with Motivational Interviewing and the TIP model. TIP is an approach that delivers individually tailored services to youth and young adults with serious mental illness by involving them in defining and achieving their employment, education, and community-life goals. The state also has a Youth Leadership Academy, which young adults attend to build peer networks and social connections and to obtain information on key topics such as substance abuse prevention and health insurance. The needs of young adults with serious mental illness have also received some attention from the federal government, which has, to some extent, supported state efforts to serve them through demonstrations, technical assistance, and research. In response to presidential concern about uncoordinated service delivery in the mental health and other related systems, several federal agencies have formed working groups to consider opportunities for collaboration among programs that involve mental health, youth in transition, or the needs of transitional youth with disabilities. SAMHSA, in collaboration with Education, funded local services through the Partnerships for Youth in Transition demonstration, aimed at developing local programs and assisting young adults with serious mental illness as they transition to adulthood. A total of $9.4 million was awarded over 4 years to several sites in Maine, Minnesota, Pennsylvania, Utah, and Washington. The demonstrations were intended to be self-sustaining, and although funding ended in 2006, sites in Pennsylvania, Utah, and Washington have continued the full program, while aspects of the program continue in Minnesota and Maine. Pennsylvania, for example, has continued to operate a program serving young adults aged 14 through 25 in two economically disadvantaged communities. In these communities, young adults with serious mental illness continue to be involved in planning and implementing activities and serve on review panels and state-level advisory boards. These communities also use transition facilitators who work with young adults to help determine their goals and how local services can assist them. SAMHSA officials stated that this demonstration project resulted in positive outcomes that they would like other states to achieve.
A preliminary evaluation of 193 program participants conducted by the National Center on Youth in Transition at the University of South Florida suggests that participants may experience some positive outcomes, such as employment, 1 year after the program. While the Partnerships for Youth in Transition demonstration ended in 2006, SAMHSA officials indicated they are considering continuing similar work and looking for opportunities to use the data and lessons learned from this demonstration to help states better serve young adults with serious mental illness. While we found that there are currently no federal programs that target this population, agencies fund other demonstration projects that support state and local efforts to provide or better coordinate existing services for transition-age individuals. For example, SSA's Youth Transition Demonstration funds programs at 10 sites that help youth aged 14 through 25, who receive or may qualify for SSI, transition from school to employment. SSA officials stated that mental illness is the primary disabling condition of 23 percent of the Youth Transition Demonstration enrollees. SSA developed alternative SSI rules solely for participants in this program, including extending their eligibility for SSI beyond age 18, even if the recipient does not meet SSI adult eligibility criteria. CMS also offers a number of Medicaid demonstration waivers and options that, while not targeted to young adults with serious mental illness, can help states pay for services for this population. For example, the Community Alternatives to Psychiatric Residential Treatment Facilities Demonstration Grant Program has awarded 5-year grants to 10 states aimed at preventing youth up to age 21 from entering psychiatric residential treatment facilities. This demonstration can cover the cost of a comprehensive package of community-based services for these youth, such as 24-hour support and crisis intervention, respite care for families, and after-school support programs. Additional federal programs that can be used by states to serve young adults with serious mental illness are described earlier in this report and are included in appendix VI. Currently, some federal agencies provide technical assistance on promising practices that can help states coordinate services for young adults with serious mental illness as they transition to adulthood. SAMHSA's Center for Mental Health Services contracts with two nonprofit organizations to operate the Technical Assistance Partnership for Child and Family Mental Health. The Partnership facilitates collaboration among government officials, organizations, and community leaders to develop and implement systems of care. SAMHSA officials told us the Partnership has recently begun to provide information on the specific needs and issues pertinent to young adults with serious mental illness and resources on child welfare youth in transition. The National Collaborative on Workforce and Disability for Youth, funded by Labor's Office of Disability Employment Policy, provides technical assistance to One-Stop Centers to increase their capacity to serve youth aged 14 through 25 with disabilities, including those with serious mental illness. For example, according to Labor officials, Florida used this resource to enable its workforce development system to better assist youth with disabilities as they transition to adulthood.
Recognizing the uncoordinated service delivery systems that youth must navigate, the Collaborative also published a resource guide for workforce practitioners and policy makers. The guide is designed to promote an understanding of how to serve youth with mental health needs and provides information on overcoming obstacles to better coordinate services across delivery systems for young adults with serious mental illness. With regard to federal support for research in this area, NIMH awarded a $1.1 million grant in 2007 to four research projects examining innovative strategies to provide services to youth with serious mental illness. According to NIMH, while evidence-based and traditional treatment models have been developed and tested for use with younger children and adults, evidence-based interventions and services have not been empirically tested on young adults or systematically adapted for this specific age group. The goal of three of the research projects is to assess the impact of tailoring existing treatment models to the needs of transition-age youth. For example, one researcher is planning to adapt an established family-focused intervention approach for juvenile offenders to one that gives youth offenders with serious mental illness more control of their treatment and targets age-relevant social, work, and independent living skills. Another project examines young adults' use of primary care, mental health services, and psychotropic medications, as well as their overall mental health care costs. Agency officials told us this information could help inform future research and strategies that promote continuity of care for young adults with serious mental illness as they transition to adulthood. Although there are no federal interagency coordination efforts that focus exclusively on young adults with serious mental illness, three independent multiagency groups were recently formed to consider opportunities to coordinate federal programs and could address the needs of this group. According to agency officials, while efforts are not formally coordinated across these three groups, they have similar agency and staff participation. Figure 3 lists these groups, their target populations, goals, and participating agencies. While the interagency groups have been tasked with coordinating across agencies, officials from a number of agencies noted that differences in their missions and goals may make it difficult to coordinate services for young adults with serious mental illness. For example, according to one SAMHSA official, mental health agencies are more focused on maintaining youth in the home or in a community-based setting, whereas juvenile justice agencies are more focused on protecting the community from youth offenders. Agency officials also cited differences in eligibility criteria across programs as a challenge for coordination, stating that age requirements for receiving benefits, often written in statute, vary across some programs. Despite these limitations, ongoing federal coordination efforts are beginning to address the needs of this population. The Federal Executive Steering Committee on Mental Health was formed in response to the 2003 President's New Freedom Commission on Mental Health, which made recommendations to the federal government to better coordinate services, such as employment supports and housing, for those with mental illness.
The committee has taken steps to promote access to supported employment services for young adults with serious mental illness by reviewing existing federal programs and initiatives for youth transitioning to the workforce in order to better coordinate agency efforts. To promote youth leadership and youth-guided policymaking related to mental health at the federal level, the committee, led by Labor's Office of Disability Employment Policy, also held a National Youth Summit in 2007. In addition, after the President's New Freedom Commission recommended actions to address mental health stigma, SAMHSA launched a campaign specifically targeted to young adults. The Shared Youth Vision Federal Collaborative Partnership was created to strengthen coordination among federal youth-serving agencies. It was formed in response to a 2003 report by the White House Task Force on Disadvantaged Youth, which identified challenges related to coordination among youth-serving programs and prompted federal efforts to support capacity building and collaboration among those agencies. Many of the federal officials we spoke with indicated this initiative could have an impact on young adults, including those with serious mental illness. Sixteen states have received funding through this initiative to develop interagency collaboration as well as state and local partnerships to provide transition assistance to disadvantaged young adults, including those with serious mental illness. For example, the Oklahoma Youth Vision Project works with eight state youth-serving agencies, Job Corps, local school districts, group homes, and employers to help disadvantaged youth aged 16 through 21, particularly those aging out of foster care, graduate from high school and become employed. In addition, this initiative sponsors technical assistance forums for participating federal agencies and runs a solutions desk that provides the 16 state grantees with a single point of access to federal resources such as training and technical assistance in implementing federal grants related to disadvantaged youth. The third coordination initiative, the Federal Partners in Transition Workgroup, led by Labor's Office of Disability Employment Policy, began in June 2005 and focuses exclusively on youth with disabilities transitioning to adulthood, including young adults with serious mental illness. The Federal Partners in Transition Workgroup brings together federal agency staff who work on youth, transition, and disability issues. This group has concentrated on strengthening connections with employers and preparing youth with disabilities for the labor market. It also plans to hold a forum in 2008 to coordinate federally funded transition-focused technical assistance centers across agencies. Although none of these federal interagency coordination groups or existing programs focuses exclusively on young adults with serious mental illness, overall they are beginning to explore ways to coordinate and provide services to assist this group. State investments in programs to help young adults with serious mental illness become productive and independent are designed to address the challenges these individuals face. The state and local officials we spoke with appeared to be optimistic about the potential of efforts like theirs to make a difference for these young adults. The federal government has played a limited but important role in these efforts by funding demonstrations and research and providing technical assistance.
Evaluations of these demonstration projects have shown some promising outcomes, and the number of practices grounded in evidence-based research continues to grow. While programs that assist transitional youth, youth with disabilities, and the mentally ill are situated in different departments, federal agencies are beginning to work together to coordinate these programs to better serve young adults with serious mental illness. The federal government's continuing efforts to disseminate information about promising state and local programs may sustain the momentum in this area by providing valuable lessons and encouragement to others interested in assisting young adults with serious mental illness. We provided a draft of this report to Education, DOJ, HHS, HUD, Labor, and SSA and draft sections concerning their states to agencies in Connecticut, Maryland, Massachusetts, and Mississippi. We received technical comments from all of the federal and state agencies, which we incorporated where appropriate, and general comments from HHS, which are included in appendix XIII. In its general comments, HHS indicated that the report was pertinent and timely. However, HHS stated that the report should have included a number of other important topics and should have focused on younger individuals as well as those aged 18 through 26. While we agree that additional research could be beneficial, our report focused specifically on the objectives and population we agreed upon with our requesters. To better convey our scope, we revised the report title in response to HHS's suggestion. HHS also commented that our definition of serious mental illness was unclear. In particular, it took issue with our use of the NCS-R to estimate the number of young adults with serious mental illness. HHS believes that data from the NCS-R represent only a subset of those individuals who would be considered to have a serious mental illness under the definition used by SAMHSA to determine how states can use Community Mental Health Services Block Grant funds (see 58 Fed. Reg. 29422 (May 20, 1993), implementing Pub. L. 102-321). Specifically, HHS pointed out that the NCS-R does not include those in institutions and does not identify those with schizophrenia or personality disorders. Additionally, HHS stated that the researchers and consumer organizations that we interviewed were weighted toward those with expertise in childhood mental illnesses and did not include experts in schizophrenia or adult mental health consumer organizations. HHS also stated that the report should have included a more extensive discussion of serious emotional disturbance and the degree to which states were providing services specifically for young adults with serious mental illness. Researchers and policy makers have long recognized that defining serious mental illness in order to estimate its prevalence or to determine eligibility for services presents a significant challenge. Our report generally uses a definition of serious mental illness that is based on SAMHSA's regulation implementing Pub. L. 102-321. We clarified the text to explain that in places throughout the report, we may use a slightly broader or narrower concept of serious mental illness as necessitated by available data as well as programmatic or administrative definitions. We used NCS-R data to estimate the prevalence of serious mental illness on the basis of recommendations from several researchers.
In addition, the NCS-R was identified in a SAMHSA publication as a source of nationally representative data that measures the severity of mental disorders, which relates to SAMHSA's definition of serious mental illness. Our draft clearly acknowledges the limitations of the NCS-R by stating that our estimate is likely to be low. It also provides the number of individuals aged 18 through 26 with serious mental illness who receive SSI and DI benefits due to mental illness. This number is likely to include young adults who may not have been included in the NCS-R, such as those living in an institution and many with schizophrenia or psychosis. To respond to HHS's comments, we have further highlighted our discussion of why limitations of the NCS-R result in an underestimate of the number of young adults with serious mental illness. With regard to the expertise of researchers and consumer organizations we interviewed, we chose the individuals and groups we did primarily because of their expertise in young adults with serious mental illness and, in many cases, because they were recommended to us by federal officials or researchers. While most also have an interest in a younger population, this group included organizations that have a strong interest in adult mental health issues, such as Mental Health America, several National Alliance on Mental Illness chapters, and the Black Mental Health Alliance for Education and Consultation, Inc. In addition, we added information in response to HHS comments to better distinguish serious emotional disturbance from serious mental illness and information from other research on the degree to which state mental health agencies are implementing transition services. As agreed with your offices, unless you make arrangements to release its contents earlier, we will make no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to Education, DOJ, HHS, HUD, Labor, and SSA. Copies will also be made available to others on request. This report is also available at no charge on GAO's Web site at http://www.gao.gov. Please contact us at (202) 512-7215 or (202) 512-7114 if you or your staff have any questions about this report. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix XIV. To conduct our work, we relied on multiple methodologies, including data analyses, literature reviews, interviews, and site visits to four states. More specifically, to provide information on the number and demographic characteristics of young adults with serious mental illness, which we defined as individuals aged 18 through 26, we analyzed data from the federally funded National Comorbidity Survey-Replication, 2001-2003 (NCS-R); the 2006 Current Population Survey, Annual Social and Economic Supplement (CPS); and two sources of data on individuals receiving disability benefits from the Social Security Administration (SSA): the 2006 Ticket Research File (TRF) and the National Beneficiary Survey, 2004 (NBS). We also reviewed published research on the extent of mental illness among the homeless and those involved with the criminal justice or foster care systems. To identify the challenges faced by young adults with serious mental illness, we reviewed published research and interviewed federal, state, and local officials; mental health practitioners; experts; and advocacy groups.
To describe the programs and strategies that selected states are using to assist these youth, we visited four states that had implemented programs specifically focused on this population—Connecticut, Maryland, Massachusetts, and Mississippi—and met with officials from key state agencies and private organizations involved in service delivery. To determine how federal agencies are supporting states and coordinating federal programs to help young adults with serious mental illness, we interviewed key federal officials from agencies within the U.S. Department of Education (Education), Department of Health and Human Services (HHS), Department of Housing and Urban Development (HUD), Department of Justice (DOJ), Department of Labor (Labor), and SSA. We also reviewed documents pertaining to the activities and accomplishments of interagency coordination groups, as well as funding and eligibility information on federal programs relevant to young adults with serious mental illness. We conducted our work from June 2007 through June 2008 in accordance with generally accepted government auditing standards. To provide information on the number and demographic characteristics of young adults aged 18 through 26 with serious mental illness, we relied on data from the NCS-R, the CPS, the TRF, and the NBS. We considered using data from another survey, the National Survey on Drug Use and Health, conducted by the Substance Abuse and Mental Health Services Administration (SAMHSA). Until 2004, SAMHSA reported rates of serious mental illness based on this survey but has since determined that the survey does not employ a sufficiently reliable measure of serious mental illness and therefore no longer uses it for this purpose. The NCS-R is a nationally representative survey of English-speaking household and campus group housing residents aged 18 and over living in the contiguous United States. Funded primarily by the National Institute of Mental Health, with supplemental funding from the National Institute on Drug Abuse and SAMHSA, the NCS-R served as the U.S. participation in the World Health Organization's World Mental Health Survey Initiative. The household sample was selected using a multistage clustered area probability sampling technique, and students living in campus housing were selected from the household sample. Between February 2001 and April 2003, 7,693 individuals were interviewed, yielding a response rate of 71 percent. During the interviews, respondents were assessed for the presence of mental disorders within the previous year, using the Composite International Diagnostic Interview, a lay-administered survey that generates diagnoses based on the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders--Fourth Edition (DSM-IV) and the International Classification of Diseases--10. To estimate the prevalence of serious mental illness in young adults, we obtained the NCS-R public use data file, as well as a supplemental file containing an indicator of the severity—serious, moderate, or mild—for each respondent diagnosed with a mental illness. This severity indicator was developed separately by the principal investigator of the NCS-R and is not included in the public use file. Using these two files, we isolated the 1,589 respondents who were aged 18 through 26 and identified the subset with serious mental illness as well as the subset with moderate, mild, or no mental illness.
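To illustrate the file merge and cohort selection just described, the following is a minimal sketch of how such an analysis might look in Python using the pandas library. The file and column names (ncsr_public_use.csv, case_id, age, severity) are hypothetical stand-ins rather than the actual NCS-R variable names, and a real analysis would need to follow the NCS-R codebook.

    # Illustrative sketch only; file and column names are hypothetical
    # stand-ins for the actual NCS-R variables.
    import pandas as pd

    public_use = pd.read_csv("ncsr_public_use.csv")          # hypothetical file
    severity = pd.read_csv("ncsr_severity_supplement.csv")   # hypothetical file

    # Join the supplemental severity indicator (serious, moderate, or mild)
    # onto the public use records by respondent identifier.
    merged = public_use.merge(severity, on="case_id", how="left")

    # Isolate respondents aged 18 through 26 (1,589 respondents in our analysis).
    young_adults = merged[(merged["age"] >= 18) & (merged["age"] <= 26)]

    # Split into the serious mental illness subset and the comparison subset
    # (moderate, mild, or no mental illness).
    smi = young_adults[young_adults["severity"] == "serious"]
    comparison = young_adults[young_adults["severity"] != "serious"]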
We applied weighting variables to our estimates in order to project these results to the general population of young adults in the United States. Following this methodology, we obtained a prevalence estimate of serious mental illness among young adults in U.S. households of 6.5 percent. To estimate the total number of young adults with serious mental illness in 2006, we obtained population estimates from the 2006 CPS. We applied the 6.5 percent prevalence estimate to the total civilian, noninstitutionalized population estimate for young adults aged 18 through 26—37 million. Because NCS-R data pertain to individuals surveyed between February 2001 and April 2003, our 2006 estimates are based on the assumption that rates of serious mental illness were relatively stable among the young adult population from that survey period through 2006. This assumption is supported by research showing that the prevalence of serious mental illness among adults in the United States did not change significantly between 1990 and 2003. We also compared the demographic characteristics of the cohort of young adults aged 18 through 26 with serious mental illness to the cohort of young adults with mild or moderate mental illness and the cohort of those with no mental illness. We applied weighting variables to project our results to the general population of young adults in the United States, and all estimates are presented using 95 percent confidence intervals of within plus or minus 12 percentage points, unless otherwise noted. All tests of statistical significance in our analyses were conducted at the 5 percent significance level. The TRF is a longitudinal database that combines administrative data from multiple SSA databases for all Supplemental Security Income (SSI) and Disability Insurance (DI) beneficiaries between age 18 and retirement age from 1996 through 2006. SSA provided us with an extract file containing data on the subset of 764,384 individuals aged 18 through 26 in 2006. We identified 186,101 individuals whose primary disability was listed as a serious mental illness at any point in 2006 by including those whose impairment fell under any of the following categories: major affective disorders, schizophrenia and psychoses, anxiety and neurotic disorders, and certain other mental disorders. We then analyzed several characteristics of those individuals, including race, gender, primary and secondary disability, and benefit type, using information in the database. Sponsored by SSA's Office of Disability and Income Security, the NBS is a nationally representative survey of SSI and DI beneficiaries and Ticket to Work participants between the ages of 18 and 64. The sample was selected using a multistage clustered sampling technique, and 6,520 individuals were interviewed between February and October 2004, for a weighted response rate of 77.5 percent. We used the same methodology for identifying the cohort of young adults with serious mental illness that we used for the TRF, based on each respondent's primary disabling condition. In total, the subsample contained 1,436 respondents aged 18 through 26, of whom 356 were found to have a serious mental illness listed as their primary disability. We applied weighting variables to each estimate in order to project our results to the general population of young adults receiving disability benefits because of a serious mental illness, and all estimates are presented using 95 percent confidence intervals of within plus or minus 7 percentage points.
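Continuing the sketch above, the weighting and projection arithmetic works out as follows. The variable names again are hypothetical, and this simplified calculation deliberately omits the design-based 95 percent confidence intervals, which require methods (such as replicate weights or Taylor-series linearization) that account for the survey's complex sample design.

    # Continues the sketch above; a simplified illustration of the weighted
    # prevalence calculation and population projection (hypothetical names).
    weighted_smi = smi["weight"].sum()
    weighted_total = young_adults["weight"].sum()
    prevalence = weighted_smi / weighted_total        # about 0.065 (6.5 percent)

    # Project to the 2006 civilian, noninstitutionalized CPS population
    # estimate for young adults aged 18 through 26.
    cps_population_18_26 = 37_000_000
    estimated_with_smi = prevalence * cps_population_18_26
    # 0.065 * 37,000,000 is roughly 2.4 million young adults.
    print(f"Estimated young adults with serious mental illness: {estimated_with_smi:,.0f}")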
Finally, we identified demographic data in the NBS that could be directly compared to demographic data in the NCS-R. We determined that data from the NCS-R, CPS, TRF, and NBS were sufficiently reliable for our purposes. In order to assess the reliability of the NCS-R, CPS, and NBS, we reviewed documentation pertaining to the sampling methodologies, survey instruments, and the structure of the data files. In order to assess the reliability of TRF data, we reviewed documentation on the construction of the file and the data reliability tests conducted by SSA's contractor, Mathematica Policy Research, Inc. To provide information on the number of young adults with serious mental illness who are in certain vulnerable populations—specifically, those who are homeless or involved in the justice or foster care systems—we conducted a literature review that included published peer-reviewed research articles identified through databases such as ProQuest, Dissertations, Ovid, PsycINFO, PsycFirst, MEDLINE, ECO/WorldCat, and Social Science Abstracts, as well as GAO publications. We used various search terms, such as young adult, mental illness, homeless, incarcerated, and foster care, in searching these databases, and we selected original research published since 1990. We were unable to identify any original research since 1990 that provided national estimates of the rates of serious mental illness in young adults in the three vulnerable populations. We did, however, identify research on rates of mental illness in these vulnerable populations. We reviewed these studies' findings for methodological rigor and determined that they were sufficiently reliable for the purposes of this study. To learn more about the major challenges faced by young adults with serious mental illness and their families, as well as their demographic characteristics, we conducted a literature review using the same databases identified above. We used various search terms, such as young adult, mental illness, challenges, support needs, service needs, family, and caregivers, and selected original research published since 1995. We also collected other literature cited in these studies as well as literature recommended to us during our interviews. We then conducted a more intensive review of the 18 studies identified through these methods. For each selected study, we reviewed the study's findings for methodological rigor and determined that it was sufficiently reliable for the purposes of this study. To gather information related to all four objectives, we also conducted interviews with academic researchers and other experts on mental health issues, including some who represented mental health organizations. We identified interviewees through our literature review and through recommendations from federal agency officials and other mental health experts. In addition, we identified mental health-related organizations in the states we visited as part of our site visits. For this study we interviewed: Hewitt B. Clark, Ph.D., University of South Florida; Maryann Davis, Ph.D., University of Massachusetts; Mary Molewyk Doornbos, Ph.D., R.N., Calvin College; Donna Folkemer, National Conference of State Legislatures; Vicki Hines-Martin, Ph.D., R.N., C.S., University of Louisville; Ronald C. Kessler, Ph.D., Harvard Medical School, Harvard University; Chris Koyanagi, Judge David L.
Bazelon Center for Mental Health Law; Linda Rose, R.N., Ph.D., The Johns Hopkins University School of Nursing; Ann Vander Stoep, Ph.D., University of Washington; and Mary Wagner, Ph.D., SRI International. We also interviewed representatives from the following advocacy groups: Black Mental Health Alliance for Education and Consultation, Inc.; Maryland Coalition of Families for Children's Mental Health; Mississippi Families as Allies for Children's Mental Health, Inc.; Family Advocates for Children and Behavioral Health, Connecticut; National Federation of Families for Children's Mental Health; Generations United; National Alliance on Mental Illness, headquarters and chapters in Connecticut, Massachusetts, and Maryland; Mental Health America; National Family Caregivers Association; National Council on Independent Living; and Self Reliance, Inc., Center for Independent Living. To describe the programs and strategies that selected states are using to assist young adults with serious mental illness, we visited four states that had implemented programs specifically focused on this population—Connecticut, Maryland, Massachusetts, and Mississippi. To identify these states, we reviewed published research and interviewed federal and state officials, mental health researchers, and advocacy groups to learn of states that were viewed as offering progressive statewide or state-organized programs that focus specifically on young adults with serious mental illness. Programs in these states should not be considered representative of how states assist young adults with serious mental illness nationally; rather, they serve as examples of states that are providing such assistance. We considered other states identified by research or by the officials, researchers, and advocacy groups, but these states generally had small, local programs available to serve young adults with serious mental illness, not statewide or state-organized programs. Before we made the site visits, we reviewed available literature on the four states' mental health systems and programs, including state mental health planning documents and federal grants pertaining to this population. During the site visits, we met with officials from state mental health agencies, as well as other key state agencies and private sector organizations involved in providing, coordinating, or advocating for services for this population. During some of these meetings, we spoke with young adult consumers of state mental health services. Given that state mental health agencies are responsible for administering and coordinating services across the state for individuals with serious mental illness, we relied on each state mental health agency to serve as the lead agency in arranging visits with local mental health organizations, other state agencies, and private organizations. While state programs that assist young adults with serious mental illness varied in the specific age ranges they targeted, for purposes of this report we focused on the key programs that state mental health agency officials identified, which generally served individuals aged 16 through 25. In addition, we reviewed written information on state policies and programs provided by state officials we interviewed.
Appendix II: Federal Programs Identified by Bazelon as Helping Young Adults with a Serious Mental Illness (SMI)

Engaging Persons with Disabilities in National and Community Services Grants
Drug-Free Communities Support Program Grants
Juvenile Justice and Delinquency Prevention State Formula Grant
Title V Community Prevention Grants Program
National Guard Youth ChalleNGe Program
Elementary and Secondary School Counseling Program
Federal Direct Student Loan and Family Education Loan Programs
Grants for the Integration of Schools and Mental Health Systems
Federal Supplemental Educational Opportunity Grants
Safe and Drug Free Schools
Vocational and Adult Education State Basic Grants
Vocational Rehabilitation: Supported Employment State Grants
Vocational Rehabilitation, Title I Formula Grants
Community Mental Health Services Block Grant
Comprehensive Community Mental Health Services for Children and Their Families
Educational and Training Vouchers Program for Youths Aging out of Foster Care
Health Care for the Homeless
Healthy and Ready to Work Initiative
John H. Chafee Foster Care Independence Program
Maternal & Child Health Block Grant
Medicaid
Partnerships for Youth in Transition
Projects for Assistance in Transition from Homelessness (PATH)
Runaway and Homeless Youth Act Programs
State Adolescent Substance Abuse Treatment Coordination
Substance Abuse Prevention and Treatment Block Grant
Temporary Assistance for Needy Families
Title IV-B and Promoting Safe and Stable Families
Title IV-E Payments for Children in Foster Care
Youth Transition Into the Workplace Grant
Safe Schools and Healthy Students Initiative
Section 8 Housing Choice Vouchers
Workforce Investment Act Youth Formula Grants
Ticket-to-Work and Work Incentives Improvement
Special Supplemental Nutrition for Women, Infants and Children (WIC)

Helps people stay out of the hospital and develop skills for living in the community, through treatment customized to individual needs delivered by a team of practitioners, available 24 hours a day.
Integrated treatment for mental illness and substance abuse addiction for people who have these co-occurring disorders.
Involves a partnership among consumers, families, and practitioners to learn ways to manage mental illness and reduce tension and stress within the family.
Emphasizes helping people set and pursue personal goals and implement action strategies in their everyday lives.
A well-defined approach to help people with mental illness find and keep competitive employment within their communities, through employment services that are integrated with mental health treatment.
Appendix IV: Demographic Characteristics of Young Adults Aged 18–26, by Severity of Mental Illness, 2001–2003

[Table of weighted percentage estimates, with 95 percent confidence intervals in parentheses, comparing young adults with serious mental illness, those with mild or moderate mental illness, and those with no mental illness.]

Appendix V: Demographic Characteristics of Young Adults Aged 18–26 Who Received SSA Disability Benefits Because of an SMI

[Table of percentage estimates, with 95 percent confidence intervals in parentheses, from the Ticket Research File, 2006, and the National Beneficiary Survey, 2004.]

The Justice and Mental Health Collaboration Program was created to increase public safety by facilitating collaboration among the criminal justice, juvenile justice, and mental health and substance abuse treatment systems to increase access to services for offenders with mental illness. Jointly funded by SSA and Labor, the Disability Program Navigator Initiative funds program liaisons who seek to coordinate all federally funded services to assist disabled individuals with employment training and employment placement at One-Stop centers, which were established under the Workforce Investment Act of 1998. The Individuals with Disabilities Education Act authorizes formula grants to states and discretionary grants to institutions of higher education and other nonprofit organizations to support research, demonstrations, technical assistance and dissemination, technology and personnel development, and parent-training and information centers. Under Title I of the Rehabilitation Act, these grants provide federal funds to help cover the costs of providing vocational rehabilitation services, which include assessment, counseling, vocational and other training, and job placement necessary for an individual with a disability to achieve an employment outcome. Activities under this program include carrying out special demonstrations for expanding and improving the provision of rehabilitation and other services, including technical assistance, special studies and evaluations, demonstrations of service delivery, transition services, supported employment, and services to unserved or underserved populations and areas, among other services. This SAMHSA grant focuses on a state's infrastructure in order to reduce fragmentation of services across systems. These CMS grants are specifically intended to help states build the infrastructure that will result in improvements in integrated community-based services. This grant program was created by SAMHSA to provide community-based systems of care for children and adolescents with a serious emotional disturbance and their families. The CMS Medicaid buy-in program allows states to expand Medicaid coverage to workers with disabilities whose income and assets would ordinarily make them ineligible for Medicaid.
The CMS Medicaid rehabilitation option provides a more flexible benefit and can be provided in other locations in the community, including in the person's home or other living arrangement. Rehabilitation services may extend beyond the clinical treatment of a person's mental illness to include helping the person to acquire the skills that are essential for everyday functioning. This grant creates a system of flexible financing for long-term services and supports that enables available funds to move with the individual to the most appropriate and preferred setting as the individual's needs and preferences change. Populations targeted for transition include individuals of all ages with disabilities, including mental illness. This appendix provides an overview of the key programs that target services for young adults with serious mental illness in the four states we visited—Connecticut, Maryland, Massachusetts, and Mississippi. Connecticut's Department of Mental Health and Addiction Services administers the Young Adult Services program. Since 1998, in coordination with the Department of Children and Families and several other state agencies, this program has provided mental health treatment, supported employment, vocational or educational support, life skills training, and supportive housing, with the particular array and level of care varying slightly by location. Connecticut offers different levels of care, ranging from basic case management services and employment and educational support to highly structured group homes or supervised housing programs with intensive case management or Assertive Community Treatment programs. In addition, some programs are gender specific. Sixteen of the 21 local mental health authorities offer the Young Adult Services program. State officials indicated that they launched the program in response to a federal lawsuit, which resulted in legislative funding for a special group of young adults who were diagnosed with pervasive developmental disorders and exhibited high-risk sexual behavior. The program evolved to encompass a broader cohort of young adults with severe behavioral health issues and high-risk behavior who, without any services, would have ended up in jail or homeless. Because many of these young adults spent most of their lives in institutional settings, such as psychiatric rehabilitative treatment centers, they had not developed the interpersonal skills needed to live effectively in the community. In state fiscal year 2007, 716 individuals were served in the Young Adult Services program, or about 27 percent of the 2,615 young adults with serious mental illness receiving any mental health services from the state mental health agency. Maryland's Mental Hygiene Administration, within the Maryland Department of Health and Mental Hygiene, administers the Transition-Age Youth Initiative, which consists of various programs that provide mental health treatment, supported employment, life skills training, residential services, and, in some cases, supportive housing in the community. Eleven of the state's 20 mental health agencies offer services through this initiative, although the type and number of services offered vary by region. Some of these programs provide a greater array of services, including various types of mental health treatment services with supported employment, residential, and supportive housing services, while others provide more limited case management services.
Maryland mental health agency officials stated that program variety was beneficial because a particular program design will work well for some young adults but not others. State officials told us that Maryland's Transition-Age Youth programs originated in the late 1990s, when the Governor launched an initiative to expand services for young adults with disabilities who were transitioning from the children's system. As part of the initiative, funds were made available to the various agencies that serve these youth, including the Mental Hygiene Administration. With the money, mental health agency officials decided to fund a variety of small programs around the state, with the goal of evaluating them to identify promising programs that could be expanded. A Maryland mental health official said that funds were used to leverage and maximize other types of funds in order to create new services. While these Transition-Age Youth programs continue, a comprehensive evaluation has not been done. In fiscal year 2007, 8,753 young adults aged 18 through 24 received services from the Department of Health and Mental Hygiene, of whom 415 received case management and 287 received supported employment services. In total, the state-funded Transition-Age Youth Initiative has the capacity to treat about 250 individuals per year. Age criteria for individual programs differ, with one program serving individuals as young as 13 and another covering individuals as old as 25. Massachusetts's Department of Mental Health established the Transition-Age Youth Initiative in 2005 to assist young adults with serious mental illness, including those transitioning from the children's mental health system to the adult system, as well as those aging out of the foster care or juvenile justice systems. This initiative provides an array of age-appropriate services to individuals aged 16 through 25 that address their needs in the areas of mental health treatment, vocational rehabilitation, employment, housing, peer support, and family psychoeducation. As part of this effort, as of January 2008, the Massachusetts Department of Mental Health had trained both child and adult case managers, as well as 36 Transition-Age Youth case managers, on the special needs of transition-age youth to better prepare them to assist young adults with serious mental illness in accessing services from the adult mental health system, according to a state mental health official. Transition-Age Youth services are available in all six Massachusetts Mental Health Service Delivery Areas, but the array of services differs by location. State officials cited several factors that influenced the development of the Transition-Age Youth Initiative. One factor was a concern about an area office that reported a decrease in the number of young adults requesting services after transitioning out of the children's mental health system. After researching the situation, the state found that the adult mental health program had not been providing the types of transition services that this age group needed and found appealing. Another factor was the issuance of the President's New Freedom Commission on Mental Health report and various other publications on transition-age youth by mental health researchers. In 2007, about 2,600 individuals were enrolled in the Transition-Age Youth Initiative, according to a state mental health official.
In contrast with the other states, Mississippi does not have a centralized, statewide program for young adults with serious mental illness but has several small-scale initiatives for this population. One of its key initiatives is the Transition Outreach Program, which provides mental health treatment, supported employment, and life skills training to adolescents and young adults in two locations—Hattiesburg and Jackson. This program assists young adults in developing healthy relationships that can motivate them to change their behavior. The program was developed because of a gap in services for transition-age youth with serious mental illness; according to state officials, without such services young people would eventually return to the mental health system, resurfacing at a mental health facility in crisis. By June 2007, the Transition Outreach Program had served more than 150 individuals. Another key initiative is the Multidisciplinary Assessment and Planning Teams, which consist of officials from various state agencies and advocates who meet to review cases involving youth aged 14 through 21 transitioning from the child to the adult mental health system, as well as other youth considered to be high risk. Established in 1996, these teams also coordinate the delivery of multiple services, including mental health, education, vocational rehabilitation, health care, and juvenile justice services. As of November 2007, the teams were operating in 33 of 82 counties. In addition to the contacts named above, Clarita A. Mrena and Sheila K. Avruch, Assistant Directors; Irene Barnett, Kimberly Siegal, and Yorick Uzes, Analysts-in-Charge; Rachel Beers; Laura Brogan; Leigh Ann Nally; and Carmen Rivera-Lowitt made major contributions to this report. Martha Kelly, Jean McSween, Suzanne Worth, and Paul Gold provided assistance with design and analysis; Susan Bernstein advised on report preparation; and Roger Thomas provided legal advice. Bazelon Center for Mental Health Law. Moving On: Analysis of Federal Programs Funding Services to Assist Transition-Age Youth with Serious Mental Health Conditions. Washington, D.C.: 2005. Burt, M., and others. Helping America's Homeless: Emergency Shelter or Affordable Housing? Washington, D.C.: Urban Institute Press, 2001. Casey Family Programs. Improving Family Foster Care: Findings from the Northwest Foster Care Alumni Study. Seattle: 2005. Clark, H. B., and others. "Partnerships for Youth Transition (PYT): Overview of Community Initiatives and Preliminary Findings on Transition to Adulthood for Youth and Young Adults with Mental Health Challenges," 329–32. In The 20th Annual Research Conference Proceedings: A System of Care for Children's Mental Health: Expanding the Research Base, edited by C. Newman and others. Tampa: University of South Florida, Louis de la Parte Florida Mental Health Institute, Research and Training Center for Children's Mental Health, 2008. Clark, H. B., and others. "Services for Youth in Transition to Adulthood in Systems of Care." In The System of Care Handbook: Transforming Mental Health Services for Children, Youth, and Families, edited by B. A. Stroul and G. M. Blau. Baltimore: Paul H. Brookes, forthcoming. Clark, H. B., and others. "Transition into Community Roles for Young People with Emotional or Behavioral Disorders: Collaborative Systems and Programs Outcomes." Chapter 11 in Transition of Secondary Students with Emotional or Behavioral Disorders: Current Approaches for Positive Outcomes. Arlington, Va.: The Divisions of the Council for Exceptional Children, 2004.
Consumer Quality Initiatives. Voices of Youth in Transition: The Experience of Aging Out of the Adolescent Public Mental Health Service System in Massachusetts: Policy Implications and Recommendations. Dorchester, Mass.: Dec. 11, 2002. Davis, M., and others. "Longitudinal Patterns of Offending During the Transition to Adulthood in Youth from the Mental Health System." The Journal of Behavioral Health Services and Research 31, no. 4 (2004): 351–66. Davis, M., J. Geller, and B. Hunt. "Within-State Availability of Transition-to-Adulthood Services for Youths with Serious Mental Health Conditions." Psychiatric Services 57, no. 11 (2006): 1594–99. Davis, Maryann. "Improving Youth Transitions Systems and Measuring Change." Paper, National Partnerships for Youth in Transition Forum, Center for Mental Health Services, Massachusetts, August 2005. Davis, Maryann. Pioneering Transition Programs: The Establishment of Programs That Span the Ages Served by Child and Adult Mental Health. Rockville, Md.: Substance Abuse and Mental Health Services Administration, Center for Mental Health Services, 2007. Davis, Maryann, and Nancy Koroloff. "The Great Divide: How Mental Health Policy Fails Young Adults." Research in Community and Mental Health 14 (2006): 53–74. Davis, Maryann, and Diane L. Sondheimer. "State Child Mental Health Efforts to Support Youth in Transition to Adulthood." Journal of Behavioral Health Services and Research 32, no. 1 (2005): 27–42. Davis, Maryann, and Ann Vander Stoep. The Transition to Adulthood among Adolescents Who Have Serious Emotional Disturbance. Prepared for the National Resource Center on Homelessness and Mental Illness, Policy Research Associates, Delmar, New York. Rockville, Md.: Center for Mental Health Services, Substance Abuse and Mental Health Services Administration, April 1996. Deschenes, N., H. B. Clark, and J. Herrygers. "Evaluating Fidelity of Community Programs for Transition-Age Youth," 137–39. In The 21st Annual Research Conference Proceedings: A System of Care for Children's Mental Health: Expanding the Research Base, edited by C. Newman and others. Tampa: University of South Florida, Louis de la Parte Florida Mental Health Institute, Research and Training Center for Children's Mental Health, 2008. Doornbos, Mary Molewyk. "The 24-7-52 Job: Family Caregiving for Young Adults with Serious and Persistent Mental Illness." Journal of Family Nursing 7, no. 4 (2001): 328–44. Doornbos, Mary Molewyk. "Family Caregivers and the Mental Health Care System: Reality and Dreams." Archives of Psychiatric Nursing 16, no. 1 (2002): 39–46. Greenbaum, P. E., and others. "National Adolescent and Child Treatment Study (NACTS): Outcomes for Children with Serious Emotional and Behavioral Disturbance." Journal of Emotional and Behavioral Disorders 4, no. 3 (1996): 130–46. Haber, M. G., and others. "Predicting Improvement of Transitioning Young People in the Partnerships for Youth Transition Initiative: Findings from a Multi-Site Demonstration." Journal of Behavioral Health Services and Research, forthcoming. Hines-Martin, V., and others. "Barriers to Mental Health Care Access in an African American Population." Issues in Mental Health Nursing 24 (2003): 237–56. Hines-Martin, Vicki P. "Environmental Context of Caregiving for Severely Mentally Ill Adults: An African American Experience." Issues in Mental Health Nursing 19 (1998): 433–51. Horwitz, Allan V., and Susan C. Reinhard.
"Ethnic Differences in Caregiving Duties and Burdens among Parents and Siblings of Persons with Severe Mental Illnesses." Journal of Health and Social Behavior 36, no. 2 (1995): 138–50. James, D., and L. Glaze. Mental Health Problems of Prison and Jail Inmates. Special report prepared by the Bureau of Justice Statistics, U.S. Department of Justice. Washington, D.C.: 2006. Johnson, Eric D. "Differences among Families Coping with Serious Mental Illness: A Qualitative Analysis." American Journal of Orthopsychiatry 70, no. 1 (2000): 126–34. Karpur, A., H. B. Clark, P. Caproni, and H. Sterner. "Transition to Adult Roles for Students with Emotional/Behavioral Disturbances: A Follow-Up Study of Student Exiters from Steps-to-Success." Career Development for Exceptional Individuals 28, no. 1 (2005): 36–46. Kaye, Steve. Employment and Social Participation among People with Mental Health Disabilities. San Francisco: National Disability Statistics and Policy Forum, 2002. Kessler, R., and others. "The Prevalence and Correlates of Serious Mental Illness (SMI) in the National Comorbidity Survey Replication (NCS-R)." In Mental Health, United States, 2004, edited by R. W. Manderscheid and J. T. Berry. DHHS Publication (SMA)-06-4195. Rockville, Md.: Substance Abuse and Mental Health Services Administration, 2006. Kessler, R., and others. "Prevalence, Severity, and Comorbidity of 12-Month DSM-IV Disorders in the National Comorbidity Survey Replication." Archives of General Psychiatry 62 (2005): 617–27. Kessler, R., and others. "Prevalence and Treatment of Mental Disorders, 1990 to 2003." The New England Journal of Medicine 352, no. 24 (2005): 2515–23. Mays, Gloria D., and Carole L. Lund. "Male Caregivers of Mentally Ill Relatives." Perspectives in Psychiatric Care 35, no. 2 (1999): 19–28. National Collaborative on Workforce and Disability for Youth, Institute for Educational Leadership. Tunnels & Cliffs: A Guide for Workforce Development Practitioners and Policymakers Serving Youth with Mental Health Needs. Number E-9-4-1-0070. Washington, D.C.: March 2007. New Freedom Commission on Mental Health. Achieving the Promise: Transforming Mental Health Care in America: Final Report. DHHS Publication SMA-03-3832. Rockville, Md.: 2003. Pilowsky, Daniel J., and Li-Tzy Wu. "Psychiatric Symptoms and Substance Use Disorders in a Nationally Representative Sample of American Adolescents Involved with Foster Care." Journal of Adolescent Health 38 (2006): 351–58. Rose, L., R. K. Mallinson, and L. D. Gerson. "Mastery, Burden, and Areas of Concern among Family Caregivers of Mentally Ill Persons." Archives of Psychiatric Nursing 20, no. 1 (2006): 41–51. Rose, Linda. "Caring for Caregivers: Perceptions of Social Support." Journal of Psychosocial Nursing and Mental Health Services 35, no. 2 (1997): 17–24. Shufelt, Jennie L., and Joseph J. Cocozza. Youth with Mental Health Disorders in the Juvenile Justice System: Results from a Multi-State Prevalence Study. Delmar, N.Y.: National Center for Mental Health and Juvenile Justice, 2006. Smiley, Amy, and Alysia Pascaris. A Chance for Change: Supporting Youth in Transition in New York City: A Report on the Findings of the 2006–2007 Youth Initiative Work Group. New York: Coalition of Behavioral Health Agencies, 2007. Styron, T. H., and others. "Troubled Youth in Transition: An Evaluation of Connecticut's Special Services for Individuals Aging Out of Adolescent Mental Health Programs." Children and Youth Services Review 28, no. 9 (2006): 1088–101. Unger, J., and others.
"Homeless Youths and Young Adults in Los Angeles: Prevalence of Mental Health Problems and the Relationship between Mental Health and Substance Abuse Disorders." American Journal of Community Psychology 25 (1997): 371–94. Vander Stoep, Ann. "Through the Cracks: Transition to Adulthood for Severely Psychiatrically Impaired Youth," 357–68. In The 4th Annual Research Conference Proceedings, A System of Care for Children's Mental Health: Expanding the Research Base, edited by A. Algarin and R. Friedman. Tampa: University of South Florida, Florida Mental Health Institute, Research and Training Center for Children's Mental Health, 1992. Wagner, Mary M. "Outcomes for Youths with Serious Emotional Disturbance in Secondary School and Early Adulthood." The Future of Children 5, no. 2 (1995): 90–112. Wagner, Mary, and Maryann Davis. "How Are We Preparing Students with Emotional Disturbances for the Transition to Young Adulthood? Findings from the National Longitudinal Transition Study—2." Journal of Emotional and Behavioral Disorders 14, no. 2 (2006): 86–98. GAO, Disconnected Youth: Federal Action Could Address Some of the Challenges Faced by Local Programs That Reconnect Youth to Education and Employment, GAO-08-313 (Washington, D.C.: Feb. 28, 2008). GAO, Residential Treatment Programs: Concerns Regarding Abuse and Death in Certain Programs for Troubled Youth, GAO-08-146T (Washington, D.C.: Oct. 10, 2007). GAO, School Mental Health: Role of the Substance Abuse and Mental Health Services Administration and Factors Affecting Service Provision, GAO-08-19R (Washington, D.C.: Oct. 5, 2007). GAO, Child Welfare: HHS Actions Would Help States Prepare Youth in the Foster Care System for Independent Living, GAO-07-1097T (Washington, D.C.: July 12, 2007). GAO, African American Children in Foster Care: Additional HHS Assistance Needed to Help States Reduce the Proportion in Care, GAO-07-816 (Washington, D.C.: July 11, 2007). GAO, Child Welfare: Additional Federal Action Could Help States Address Challenges in Providing Services to Children and Families, GAO-07-850T (Washington, D.C.: May 15, 2007). GAO, Child Welfare: Improving Social Service Program, Training, and Technical Assistance Information Would Help Address Long-standing Service-Level and Workforce Challenges, GAO-07-75 (Washington, D.C.: Oct. 6, 2006). GAO, D.C. Child and Family Services Agency: Performance Has Improved, but Exploring Health Care Options and Providing Specialized Training May Further Enhance Performance, GAO-06-1093 (Washington, D.C.: Sept. 28, 2006). GAO, Summary of a GAO Conference: Helping California Youths with Disabilities Transition to Work or Postsecondary Education, GAO-06-759SP (Washington, D.C.: June 20, 2006). GAO, Child Welfare: Federal Oversight of State IV-B Activities Could Inform Action Needed to Improve Services to Families and Statutory Compliance, GAO-06-787T (Washington, D.C.: May 23, 2006). GAO, Children's Health Insurance: Recent HHS-OIG Reviews Inform the Congress on Improper Enrollment and Reductions in Low-Income, Uninsured Children, GAO-06-457R (Washington, D.C.: Mar. 9, 2006). GAO, District of Columbia: Federal Funds for Foster Care Improvements Used to Implement New Programs, but Challenges Remain, GAO-05-787 (Washington, D.C.: July 22, 2005). GAO, Medicaid Financing: States' Use of Contingency-Fee Consultants to Maximize Federal Reimbursements Highlights Need for Improved Federal Oversight, GAO-05-748 (Washington, D.C.: June 28, 2005).
GAO, Child Welfare: Better Data and Evaluations Could Improve Processes and Programs for Adopting Children with Special Needs, GAO-05-292 (Washington, D.C.: June 13, 2005). GAO, Medicaid Managed Care: Access and Quality Requirements Specific to Low-Income and Other Special Needs Enrollees, GAO-05-44R (Washington, D.C.: Dec. 8, 2004). GAO, Foster Youth: HHS Actions Could Improve Coordination of Services and Monitoring of States' Independent Living Programs, GAO-05-25 (Washington, D.C.: Nov. 18, 2004). GAO, D.C. Child and Family Services Agency: More Focus Needed on Human Capital Management Issues for Caseworkers and Foster Parent Recruitment and Retention, GAO-04-1017 (Washington, D.C.: Sept. 24, 2004). GAO, Substance Abuse and Mental Health Services Administration: Planning for Program Changes and Future Workforce Needs Is Incomplete, GAO-04-683 (Washington, D.C.: June 4, 2004). GAO, Child Welfare: Improved Federal Oversight Could Assist States in Overcoming Key Challenges, GAO-04-418T (Washington, D.C.: Jan. 28, 2004). GAO, Child Welfare: Enhanced Federal Oversight of Title IV-B Could Provide States Additional Information to Improve Services, GAO-03-956 (Washington, D.C.: Sept. 12, 2003). GAO, Child Welfare: Most States Are Developing Statewide Information Systems, but the Reliability of Child Welfare Data Could Be Improved, GAO-03-809 (Washington, D.C.: July 31, 2003). GAO, Child Welfare and Juvenile Justice: Several Factors Influence the Placement of Children Solely to Obtain Mental Health Services, GAO-03-865T (Washington, D.C.: July 17, 2003). GAO, Child Welfare and Juvenile Justice: Federal Agencies Could Play a Stronger Role in Helping States Reduce the Number of Children Placed Solely to Obtain Mental Health Services, GAO-03-397 (Washington, D.C.: Apr. 21, 2003). GAO, Medicaid and SCHIP: States Use Varying Approaches to Monitor Children's Access to Care, GAO-03-222 (Washington, D.C.: Jan. 14, 2003).

The transition to adulthood can be difficult for young adults who suffer from a serious mental illness, such as schizophrenia or bipolar disorder. When these individuals are unsuccessful, the result can be economic hardship, social isolation, and, in some cases, suicide, all of which can pose substantial costs to society. Due to concerns about young adults with serious mental illness transitioning into adulthood, GAO was asked to provide information on (1) the number of these young adults and their demographic characteristics, (2) the challenges they face, (3) how selected states assist them, and (4) how the federal government supports states in serving these young adults and coordinates programs that can assist them. To do this work, GAO analyzed data from national surveys, including the National Comorbidity Survey Replication (NCS-R), and administrative data from the Social Security Administration (SSA). GAO also reviewed published research; interviewed federal, state, and local officials, as well as mental health providers, experts, and advocacy groups; and conducted site visits in Connecticut, Maryland, Massachusetts, and Mississippi--four states that focus on this population. GAO did not make any recommendations. HHS made comments intended to clarify the report, and we made changes as appropriate. GAO estimates that at least 2.4 million young adults aged 18 through 26--or 6.5 percent of the noninstitutionalized young adults in that age range--had a serious mental illness in 2006, and they had lower levels of education on average than other young adults.
The actual number is likely to be higher than 2.4 million because homeless, institutionalized, and incarcerated persons were not included in this estimate--groups with potentially high rates of mental illness. Among those with serious mental illness, nearly 90 percent had more than one mental disorder, and they had significantly lower rates of high school graduation and postsecondary education. GAO also found that about 186,000 young adults received SSA disability benefits in 2006 because of a mental illness that prevented them from engaging in substantial gainful activity. Young adults with serious mental illness can have difficulty finding services that aid in the transition to adulthood, according to researchers, public officials, and mental health advocates. Because available mental health, employment, and housing services are not always suited for young adults with mental illness, these individuals may opt not to receive these services. They also can find it difficult to qualify for adult programs that provide or pay for mental health services, disrupting the continuity of their treatment. Finally, navigating multiple discrete programs that address varied needs can be particularly challenging for them and their families. The four states GAO visited help young adults with serious mental illness transition into adulthood by offering programs that provide multidimensional services intended to be age and developmentally appropriate. These programs integrate mental health treatment with employment and other supports. To deliver these services, states use various strategies. They coordinate across multiple state agencies, leverage federal and state funding sources, and involve young adults and their families in developing policies and aligning supports. The needs of young adults with serious mental illness have also received attention from the federal government, and agencies have been providing some support to states through demonstrations, technical assistance, and research. Federal agencies have also established bodies to coordinate programs to serve those with mental health needs, youth with disabilities, and youth in transition, which may help improve service delivery for young adults with serious mental illness as well.
DOE invests in a wide range of civilian research and development (R&D) programs that are managed by five program offices within the Office of the Under Secretary for Science and Energy or by the Advanced Research Projects Agency-Energy (ARPA-E), which reports directly to the Secretary of Energy. These offices have the goal of enhancing U.S. security and economic growth through transformative scientific and technological innovation and through market solutions to overcome the science, energy, and environmental challenges that the United States faces. These program offices and ARPA-E fund R&D that is conducted by DOE national laboratories, universities, industry, nonprofit organizations, state governments, and other federal laboratories. Each program office and ARPA-E may fund R&D at any of the national laboratories. For example, the Office of Science funds civilian R&D at all 17 of DOE's national laboratories, including the national laboratories sponsored by the National Nuclear Security Administration (NNSA). Similarly, national laboratories may receive funding from any of the program offices and ARPA-E, as well as from other governmental agencies and nongovernmental entities. The five program offices and ARPA-E fund and oversee civilian R&D that aligns with their missions, as described below: The Office of Electricity Delivery and Energy Reliability's mission is to strengthen, transform, and improve U.S. electricity infrastructure and to provide leadership to ensure that U.S. energy delivery systems are secure, resilient, and reliable. The office does not oversee a national laboratory, but it is supported by staff located at the Office of Fossil Energy's National Energy Technology Laboratory. The Office of Energy Efficiency and Renewable Energy's mission is to create and sustain American leadership in the transition to a global clean energy economy. The office oversees the National Renewable Energy Laboratory in Colorado. The Office of Fossil Energy's primary mission is to ensure reliable fossil energy resources for clean, secure, and affordable energy while enhancing environmental protection. The office oversees the National Energy Technology Laboratory—with locations in Oregon, Pennsylvania, and West Virginia—which is the only DOE national laboratory operated by the government rather than by a contractor. The Office of Nuclear Energy's primary mission is to advance nuclear power as a resource capable of meeting the nation's energy, environmental, and national security needs by resolving technical, cost, safety, proliferation resistance, and security barriers. The office oversees the Idaho National Laboratory. The Office of Science's mission is to deliver scientific discoveries and major tools that transform our understanding of nature and advance the energy, economic, and national security of the United States. This office is the nation's single largest funding source for basic research in the physical sciences and supports research in energy sciences, advanced scientific computing, and other fields. The office oversees 10 of DOE's national laboratories. ARPA-E's mission is to sponsor high-potential, high-impact energy technologies that are considered too early for private-sector investment. ARPA-E does not oversee a national laboratory. Of the five program offices noted above, four consist of several types of offices that manage and oversee DOE's R&D investments; these four are the offices of Energy Efficiency and Renewable Energy, Fossil Energy, Nuclear Energy, and Science. First, each of these program offices has a headquarters office in the Washington, D.C.
area that includes senior leadership and that may include offices that provide support across the program office, such as policy development and oversight, budget, public relations and congressional outreach, and technical assistance programs as well as other administrative and support units. According to DOE officials, the extent to which staff functions are centralized in headquarters offices varies across program offices. Second, the program offices include research offices, generally collocated with headquarters offices, that manage particular scientific areas and research portfolios and provide strategic direction for these areas. For example, the Office of Science includes six research offices that steward different scientific areas. Third, the program offices have site offices—collocated with each national laboratory—that manage the laboratory contracts, oversee federal facilities at the laboratories and, in some cases, manage financial assistance awards to universities and industry, as well as other contracts. ARPA-E and the other program office—the Office of Electricity Delivery and Energy Reliability—make smaller R&D investments, have significantly fewer staff, and do not oversee a national laboratory. As a result, they are organized differently from the other four program offices. According to an ARPA-E official, all ARPA-E staff are located at a central office, and research projects are organized around individual program directors. In the case of the Office of Electricity Delivery and Energy Reliability, the office has one suboffice that is dedicated to research and development; other suboffices are dedicated to regulatory or coordination functions. Figure 1 lists ARPA-E and the five program offices we reviewed, along with research offices and site offices within those program offices. Figure 2 shows the locations of the 13 national laboratories that primarily conduct civilian R&D under the oversight of the program offices we reviewed. This figure does not include the 4 other DOE national laboratories that may also conduct civilian R&D for the offices we reviewed. Of the 13 national laboratories that primarily focus on civilian R&D for DOE, 12 are owned by the federal government and are operated by management and operating (M&O) contractors. The R&D funded by DOE is carried out under the department’s direction and is managed by scientists, engineers, and others employed by the laboratory contractor. The remaining national laboratory—the National Energy Technology Laboratory—is operated by DOE. Therefore, the scientists and engineers who conduct the R&D at this laboratory are primarily federal employees. In addition to M&O contracts that DOE enters into for the operation of the national laboratories, DOE program offices and ARPA-E provide financial assistance—primarily grants and cooperative agreements—to support R&D at universities, industry, and other entities. Under the Federal Grant and Cooperative Agreement Act, an agency is to use a grant agreement when the principal purpose of the relationship is to transfer a thing of value to the recipient to carry out a public purpose authorized by law, and substantial involvement by the agency is not expected. For grants, an agency’s involvement is essentially administrative, and includes standard federal stewardship responsibilities such as reviewing performance to ensure that the objectives, terms, and conditions of the grant are met. 
In contrast, cooperative agreements differ from grants in that an agency expects to be substantially involved in the project through tasks such as reviewing and approving one stage of a project before work can begin on a subsequent stage. The three program offices we selected for detailed review—the offices of Energy Efficiency and Renewable Energy, Nuclear Energy, and Science—use various activities to oversee DOE civilian R&D investments in national laboratories, universities, industry, and other entities; these investments totaled $7.36 billion in obligations in fiscal year 2015. The three offices use these activities to: identify research priorities and help determine where to invest in R&D; help ensure that national laboratories conduct R&D in alignment with DOE priorities and that M&O contractors manage the research and federally owned properties at the laboratories safely and efficiently; and help ensure that universities, industry, and other entities are meeting research goals as defined in financial assistance agreements. For all investments, including those in DOE’s national laboratories, universities, industry, and other entities, the three selected program offices engage in activities to obtain input from multiple sources to identify research priorities and to help inform where DOE invests in R&D. To help identify specific research priorities, these three program offices review objectives established in the DOE strategic plan and other DOE documents, such as the Quadrennial Energy Review and the Quadrennial Technology Review. These DOE documents are in turn influenced by national policies such as the President’s Climate Action Plan of 2013. For example, one of the objectives of DOE’s strategic plan is to advance sustainable hydropower technologies in order to help double renewable energy generation in the United States between 2012 and 2020, a goal of the President’s Climate Action Plan. Additionally, the program offices hold scientific and technical workshops with the scientific community—scientists and researchers in universities, industry, and government—to help identify priority research areas that, if supported, could contribute to overcoming barriers to advancing particular energy technologies. For example, the Office of Nuclear Energy sponsored a series of workshops in 2015 that sought to identify ideas for advancing nuclear energy technologies. Furthermore, the program offices have established federal advisory committees that provide expert input on particular knowledge gaps or infrastructure needs at the national laboratories. For example, in 2014 a panel of the federal advisory committee for the High Energy Physics research office issued a long-term plan for supporting particle physics, including recommending upgrades at a number of accelerator facilities, such as at the Fermi National Accelerator Laboratory. DOE program offices also engage in other activities to help develop research priorities, such as attending conferences, regularly reviewing published literature, regularly meeting with national laboratory staff, and engaging with interagency working groups. In addition, according to DOE officials, new ideas often come from the scientific community in the form of proposals submitted in response to solicitations from the program offices.
To help ensure that DOE civilian R&D investments in national laboratories ($5.14 billion in obligations in fiscal year 2015) align with DOE priorities, and that M&O contractors manage research and federally owned properties safely and efficiently, the offices of Energy Efficiency and Renewable Energy, Nuclear Energy, and Science carry out three broad types of activities for their respective national laboratories. We identified these activities through reviews of DOE documents and interviews with officials from DOE’s program offices and site offices. First, the three program offices conduct planning activities to help ensure that DOE investments in the laboratories support national R&D priorities. For example, the program offices require that each M&O contractor that operates a national laboratory develop a long-term strategic plan for its laboratory. The DOE program offices and the relevant site office staff review the plans and provide feedback to the laboratory contractor. These plans can identify the laboratory’s vision for the future, core capabilities, major initiatives, and laboratory infrastructure needs, among other things. The three program offices conduct this process with their laboratories on an annual basis. Complementary to this, program offices may also develop their own strategic planning documents. For example, within the Office of Energy Efficiency and Renewable Energy, research offices develop strategic “roadmaps” that establish a vision with broad and long-range goals to provide overall program direction. Some offices also develop multi-year program plans, which are operational guides for how research offices will manage their activities. Second, research offices and site offices conduct various oversight activities of the new and ongoing R&D projects and scientific facilities that DOE invests in at the national laboratories. As of the beginning of fiscal year 2016, the offices of Energy Efficiency and Renewable Energy, Nuclear Energy, and Science supported and oversaw 32 designated user facilities that were primarily located at national laboratories. A designated user facility is a federally sponsored research facility available for external use to advance science or technology. These facilities are open to researchers and scientists without regard to nation of origin or institutional affiliation. Potential users may be allocated time in the facilities after a merit review of the proposed work. Users of the facilities are not charged a fee if they publish research results in open literature; for proprietary work that is not disclosed publicly, the user is charged full cost recovery. Each of these designated user facilities represents a significant investment of federal funds. For example, according to DOE documents, the Advanced Photon Source, one of four designated user facilities at the Argonne National Laboratory in fiscal year 2015, was completed in 1995 at a cost of $812 million. Since then, DOE has funded more than $100 million in upgrades. The facility produces x-rays that allow scientists to conduct research on the structure and function of materials—for example, to aid in the development of new pharmaceuticals. Research offices solicit research proposals from national laboratories. This proposal solicitation helps the offices determine where to invest DOE funds.
The national laboratories submit work proposals for new and ongoing projects, and research office staff review these proposals and hold merit reviews—often with outside experts—to help determine which laboratory projects to invest in. According to information provided by the Office of Science, research offices conduct in-depth merit reviews for new laboratory work proposals, and most research offices review about one-third of ongoing laboratory projects each fiscal year; these reviews generally involve three or more individual peer reviewers. For example, according to the Basic Energy Sciences research office, which is one of the Office of Science’s six research offices, in fiscal year 2015 research office staff reviewed approximately 132 of 395 ongoing laboratory projects, in addition to 64 new work proposals. Research office staff monitor project performance. These staff conduct this monitoring through other periodic reviews, site visits, and regular meetings or phone calls with laboratory management and project staff, according to DOE officials. DOE officials told us that a significant portion of research office activities involved overseeing the large number of ongoing projects at the national laboratories. For example, according to information provided by the Office of Science, in fiscal year 2015, its research office staff oversaw more than 1,600 new or ongoing projects that received $3.67 billion in obligations that fiscal year. Project size, risk, technological maturity, and other factors can influence how often research office staff review these laboratory projects or meet with project staff. For example, according to officials in the Office of Nuclear Energy, contractors must submit quarterly project reports, and the office holds formal monthly reviews in which laboratory projects are evaluated against performance metrics. Likewise, for Office of Science construction projects, research offices collaborate with site office staff and review projects to ensure they meet their technical, cost, scope, and schedule milestones, according to DOE officials. Site offices ensure that laboratory M&O contractors meet contract requirements. For example, one such requirement is that the contractor have a contractor assurance system to oversee its performance and to self-identify and correct potential problems. Each M&O contractor must establish a contractor assurance system that includes management systems and processes to generate the information needed to manage and improve the contractor’s own performance. Site office staff are responsible for reviewing, monitoring, and assessing the effectiveness of the system to ensure it is working and meets contract requirements. A contractor assurance system also allows site office staff to monitor the contractor’s own internal assessments and reviews, thereby reducing the number of reviews that the site office otherwise might conduct. For example, Argonne site office officials told us that the Argonne National Laboratory M&O contractor self-identified and corrected a radiation inventory problem at a laboratory building and kept site office staff informed of laboratory actions. Site office staff conduct independent and joint reviews. These staff conduct their own independent reviews, as well as joint reviews with contractor staff, of laboratory facilities to ensure federal properties are being managed safely and efficiently.
For example, according to information provided by the Golden Field Office, in fiscal year 2015, site office staff conducted 366 safety and health reviews at the National Renewable Energy Laboratory. Examples of these reviews include construction project safety inspections; laboratory inspections, readiness verifications, compliance visits, and risk assessments of the laboratory safety and health programs; and assessments of laboratory performance. Site office staff also conducted hundreds of other reviews in areas such as environmental oversight, physical and cyber security, and financial oversight. Site offices are responsible for administering the M&O contracts. This responsibility includes taking actions such as modifying the contract to add funding for laboratory work or to incorporate new applicable directives from DOE and the Office of Management and Budget. According to site office staff we interviewed, the incorporation of new directives into an M&O contract is an ongoing effort that requires coordination with the contractor and often negotiation about the terms added to the contract. For example, according to officials at one site office, to address a new DOE order that called for particular safety management procedures at DOE facilities, officials tailored the contract so that the new contract language would apply to only the scientific user facility with the highest risk at the laboratory—this action reduced the number of safety management procedures required of the contractor while complying with the new order. Site offices must have specific staff to oversee construction and upgrades. These staff—referred to as federal project directors—oversee the construction of any new scientific facilities or upgrades to existing facilities. These federal project directors also may be assigned from a site office or from the Office of Science’s Integrated Support Center to a project site other than a national laboratory. For example, according to DOE officials, a federal project director at the Integrated Support Center is responsible for overseeing construction of the Facility for Rare Isotope Beams at Michigan State University, a $730 million project. Site offices may conduct other activities to provide centralized support to other site offices and the larger program office. Areas of support include intellectual property and other legal services, procurement, human resources, information technology, and safety and security. Within the Office of Science, many of these activities are provided by the Integrated Support Center. The Idaho Operations Office and the Golden Field Office provide many of these services to the Office of Nuclear Energy and the Office of Energy Efficiency and Renewable Energy, respectively. Third, at the end of each fiscal year, program offices use a performance evaluation and measurement plan to assess each laboratory contractor’s scientific, technological, managerial, and operational performance. The plan is developed collaboratively by the responsible program office, including the site office, and the laboratory M&O contractor before each fiscal year, and it helps form the basis for the evaluation of the laboratory contractor at the end of the year. Research office staff are responsible for evaluating laboratory mission-related areas of the plan, including the contractor’s ability to deliver science and technology to meet DOE missions.
Site office staff evaluate the contractor’s operations at the laboratory, including the use of the contractor assurance system, according to DOE documents and officials. Results from this performance evaluation inform the contractor’s award fee determination as well as the possibility of earning additional years on the contract through an award term extension. The offices of Energy Efficiency and Renewable Energy, Nuclear Energy, and Science use various processes to help oversee DOE’s civilian R&D investments in universities, industry, and other entities ($2.22 billion in obligations in fiscal year 2015). Research office staff in these three program offices develop solicitations—also referred to as funding opportunity announcements—for R&D proposals from universities, industry, and other entities. They then conduct or manage merit reviews of submitted proposals, an activity that generally includes independent reviews by technical subject matter experts. Research office staff identify and recruit teams of experts for these merit reviews. According to DOE officials, program managers take these expert reviews into consideration, along with factors such as portfolio balance and available funds, before making funding recommendations to DOE leadership. According to data provided by DOE, research office staff in the five program offices and ARPA-E conducted or managed more than 6,600 proposal reviews—with reviews consisting of as many as 3 or 4 individual reviewers—to select 1,691 new financial assistance awards in fiscal year 2015. In addition to overseeing new proposals from universities and industry, research office staff oversee thousands of projects that were awarded in previous years, also known as continuing awards. As shown in table 1 below, in fiscal year 2015, DOE research offices oversaw 4,921 continuing awards (projects). Research offices may oversee projects differently, depending on factors such as the maturity of the science or technology, the size and complexity of the work, the size and makeup of the awardees (single institution vs. multi-institution), and the risk of the project, according to DOE officials. Research office staff tailor the level of oversight to the size, scope, and complexity of the project to ensure awardees are meeting research goals as defined in the proposal award agreement. Office of Science officials told us they generally receive annual reports from awardees and hold annual project reviews for small financial assistance awards. These officials told us that for larger and more complex grants and cooperative agreements, there may be significantly more oversight, such as through a greater frequency of meetings and progress reports. The Office of Energy Efficiency and Renewable Energy and the Office of Nuclear Energy, which primarily invest in applied research and development projects, typically use cooperative agreements instead of grants, as cooperative agreements allow substantial federal involvement in the projects to help ensure that projects meet specific program office technology goals. These program offices may use a range of oversight activities to help ensure projects succeed. For example, according to Office of Energy Efficiency and Renewable Energy officials, program and project managers follow guidance that requires quarterly project reviews, annual site visits, frequent in-person meetings, and biannual peer reviews of the entire project portfolio—a more active management and oversight level than is required for small grants.
In addition, all financial assistance projects in the Office of Energy Efficiency and Renewable Energy are required to meet certain milestones, and projects that do not meet these milestones could lose funding. To complement project oversight by research office staff, site office staff provide administrative support for financial assistance awards. Specifically, the Golden Field Office in the Office of Energy Efficiency and Renewable Energy, the Idaho Operations Office in the Office of Nuclear Energy, and the Integrated Support Center in the Office of Science administer awards through activities such as preparing solicitations, negotiating award terms with recipient institutions, and closing out awards, among other things. According to information provided by these program offices, in fiscal year 2015, the offices managed $1.82 billion in obligations for thousands of new and continuing financial assistance awards. DOE officials indicated that financial assistance awards that did not receive obligations also required support. For example, according to information provided by the Office of Energy Efficiency and Renewable Energy, in addition to managing obligations for new and continuing financial assistance awards in fiscal year 2015, the Golden Field Office administered another 1,625 awards that had previously received obligations, as well as awards that were completed and were in the process of closing out. From fiscal year 2011 to fiscal year 2015, DOE staffing levels for oversight of civilian R&D investments declined by 11.0 percent, while obligations for the civilian R&D that the staff oversaw increased by 3.8 percent. Obligations for the staff costs of DOE’s oversight of civilian R&D increased by 2.4 percent overall from fiscal year 2011 to fiscal year 2015, and these costs varied among the five program offices and ARPA-E. DOE obligations for staff costs decreased 4.0 percent over this period when adjusted for inflation. From fiscal year 2011 to fiscal year 2015, DOE staffing levels for oversight of civilian R&D—including staff in the five DOE program offices and ARPA-E—declined from 2,937 to 2,613 full-time equivalent employees, a decrease of 11.0 percent. Four of the five DOE program offices accounted for the entire decline in staffing levels. In contrast, ARPA-E staffing levels increased over this period as the agency expanded (see fig. 3 below). Appendixes I and II present further data on staffing levels in the five program offices and ARPA-E, as well as data on staff costs and R&D obligations. According to information provided by DOE, a number of factors contributed to staffing level declines. For example, according to information provided by the Office of Energy Efficiency and Renewable Energy, the completion of projects associated with the American Recovery and Reinvestment Act (ARRA) resulted in a reduction in staff who had been brought on board to manage these projects. Other offices attributed changes in staffing to causes such as budgetary pressures and voluntary separation programs. One site office we visited reported that as employees left or retired, the office did not fill open billets; instead, the office spread the responsibilities among remaining staff. Office reorganizations also contributed to staffing declines.
For example, Office of Science officials reported to us that the office consolidated many of its support functions, including human resources and legal services, which resulted in a 10 percent reduction in staff over several years, with further reductions planned for fiscal year 2016. As staff levels decreased overall, the five DOE program offices and ARPA-E were responsible for overseeing and implementing civilian R&D obligations that increased from $7.09 billion in fiscal year 2011 to $7.36 billion in fiscal year 2015, an increase of 3.8 percent. Civilian R&D obligations from the program offices changed by varying degrees over this period, ranging from a decrease of 23.3 percent in the Office of Electricity Delivery and Energy Reliability (a decrease of 28.1 percent when adjusted for inflation) to an increase of 8.8 percent in the Office of Nuclear Energy (an increase of 1.9 percent when adjusted for inflation). ARPA-E’s civilian R&D obligations experienced a more than 100-fold increase because the agency, which was founded and initially funded in 2009, grew significantly during the period under review. Total obligations for staff costs to oversee DOE’s civilian R&D investments increased from $632.9 million in fiscal year 2011 to $647.9 million in fiscal year 2015—an increase of 2.4 percent during the period under review (a decrease of 4.0 percent when adjusted for inflation). Staff costs include salaries and benefits, travel, support services, and other costs, such as contributions to DOE’s working capital fund for common administrative services such as building occupancy and network and telephone services. The extent of the change in staff costs varied among the five DOE program offices and ARPA-E from fiscal year 2011 to fiscal year 2015. Over this period, the change in obligations for staff costs in DOE’s five program offices ranged from a decrease of about 8 percent each at the Offices of Nuclear Energy and Science (a decrease of about 14 percent when adjusted for inflation) to an increase of 16.0 percent at the Office of Electricity Delivery and Energy Reliability (an increase of 8.8 percent when adjusted for inflation). ARPA-E’s obligations for staff costs more than doubled as its staff levels increased. The overall increase in obligations for staff costs would have been greater if DOE had received the full amount it requested in appropriations, according to DOE officials. According to these officials, while staff costs are determined by DOE offices based on staffing plans and estimated staffing needs, the appropriations provided by Congress establish an upper limit on staff costs and may not match DOE’s budget requests for the program offices and ARPA-E. For example, in fiscal year 2015, DOE requested $189.4 million for staff costs for the Office of Science and was appropriated $183.7 million. Of the $647.9 million that DOE obligated for staff costs in fiscal year 2015, $179.9 million (27.8 percent) was for headquarters office staff, $173.5 million (26.8 percent) was for research office staff, and $294.6 million (45.5 percent) was for site office staff, as shown in table 2 below. For further information about the functions performed by these staff, see appendix I. The obligations for total staff costs as a percentage of total obligations (R&D and non-R&D obligations) varied among the program offices and ARPA-E (see fig. 4); a worked sketch of the nominal and inflation-adjusted percentage-change arithmetic used in this section appears below.
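The percentage changes cited in this section follow from simple arithmetic on the reported staffing and obligations figures. The short Python sketch below reproduces them; the dollar and staffing values come from this report, while the cumulative price deflator is an assumption back-derived from the report's paired nominal and inflation-adjusted percentages, not an official index.

```python
# Reproduces the nominal and inflation-adjusted changes cited in this section.
# Figures are from the report; the deflator is an assumed, back-derived value.

def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

ASSUMED_DEFLATOR = 1.066  # assumed cumulative price growth, FY2011 to FY2015

def real_pct_change(old, new, deflator=ASSUMED_DEFLATOR):
    """Percentage change after restating 'new' in FY2011 dollars."""
    return pct_change(old, new / deflator)

print(round(pct_change(2937, 2613), 1))         # staffing: -11.0 percent
print(round(pct_change(7090, 7360), 1))         # R&D obligations: +3.8 percent
print(round(pct_change(632.9, 647.9), 1))       # staff costs, nominal: +2.4 percent
print(round(real_pct_change(632.9, 647.9), 1))  # staff costs, real: about -4.0 percent
```

The same arithmetic underlies the office-by-office shares of total obligations shown in figure 4.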
For example, in fiscal year 2015, the percentage ranged from 3.6 percent in the Office of Science to 21.4 percent in the Office of Electricity Delivery and Energy Reliability. Across all the program offices and ARPA-E, staff costs amounted to 7.6 percent of total obligations in fiscal year 2015. DOE officials did not identify discrete causes for variations in obligations for staff costs as a percentage of R&D obligations, but we identified several factors that can contribute to such variations through our discussions with DOE officials and a review of DOE documents. These factors include: The extent to which an office uses cooperative agreements instead of grants. As discussed above, cooperative agreements are to be used when an agency anticipates that substantial federal involvement in performance or project activities may be necessary. DOE officials said that cooperative agreements may incur higher staff costs than grants because of this increased involvement. According to DOE officials, program offices that invested primarily in applied R&D typically used cooperative agreements. The extent to which an office supports non-R&D activities. Obligations for non-R&D activities, such as regulatory support, come out of staff costs and may have limited or no corresponding R&D obligations, thus increasing staff costs’ share of the total. For example, the Office of Electricity Delivery and Energy Reliability issues permits for construction of electrical transmission lines that cross national boundaries, and the office coordinates with the Department of Homeland Security on electrical grid issues in emergency planning. In fiscal year 2015, 64.5 percent of the office’s total obligations were for R&D activities ($86.5 million of $134.1 million in total obligations); in contrast, 36.7 percent of its obligations for staff costs were to oversee R&D ($10.5 million of $28.7 million obligated in staff costs). Other factors unique to certain offices. For example, ARPA-E obligations for staff costs in fiscal year 2015 included support for 52 federal staff as well as 49 support services contractors, a greater proportion of contractors than in the five program offices. In addition, fiscal year 2015 obligations for staff costs in the Office of Energy Efficiency and Renewable Energy supported ARRA projects, even though there were no R&D obligations for these projects during this period. In particular, program office staff continued to manage and close out ARRA awards until the end of fiscal year 2015. We provided a draft of this report to DOE for its review. DOE provided written comments, which we have reprinted in appendix III. DOE also provided technical comments, which we incorporated as appropriate throughout our report. In its written comments, DOE stated that it generally agreed with the summarized statements in the draft report. In addition, DOE stated that its program offices and ARPA-E are structured differently to address their unique mission needs and oversight responsibilities, and that it is not appropriate to make staff comparisons across the different offices because staff functions vary. We believe that side-by-side analyses of the number of staff and staff functions across the DOE program offices and ARPA-E are appropriate because we requested similar data from all of the DOE program offices and ARPA-E. Moreover, where we did compare data, we did not draw conclusions from the comparisons, and we explained the differences in office structures to provide context.
For further context, we added clarifications and details as suggested to us by DOE officials. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Energy, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Program office and Advanced Research Projects Agency-Energy (ARPA-E) activities to oversee civilian research and development (R&D) investments were split among staff in headquarters offices, research offices, and site offices, where the staff performed various functions. The role of the site offices is to manage the management and operating (M&O) contracts with laboratories and oversee federal facilities. The composition of site office staff varies depending on the characteristics of the laboratory and the responsibilities of the individual site office; this composition can range from contracting officers to federal project directors and environmental, health, and safety inspectors. For example, the Idaho Operations Office is responsible for 331 federal properties at the Idaho National Laboratory, including overseeing the safety and management of 17 nuclear facilities and the storage of nuclear materials. DOE guidelines require a risk determination for such facilities that in turn dictates the number of federal representatives who are assigned to conduct health and safety oversight at each facility. These factors contributed to the Idaho Operations Office having 44 environmental, health, safety, and quality staff, out of 188 total staff. In contrast, the Office of Energy Efficiency and Renewable Energy reported that it had a similar number of total staff at the Golden Field Office and the National Energy Technology Laboratory but fewer environmental, health, safety, and quality staff. In addition, site offices also include other staff who indirectly support oversight. For example, the Golden Field Office, the Idaho Operations Office, and the Integrated Support Center include significant numbers of staff who provide contract and finance support or information technology support, or who are intellectual property lawyers (see table 4). To learn about staffing levels and costs associated with the Department of Energy’s (DOE) offices that oversee research and development (R&D) investments, we developed a data collection instrument that we sent to the five program offices and the Advanced Research Projects Agency-Energy (ARPA-E). The data collection instrument asked for information on program office obligations for R&D, obligations for federal staff, and the composition of program office staffs. The DOE program offices provided the data in the tables below. The tables below show data by program office and ARPA-E from fiscal year 2011 to fiscal year 2015, including obligations for R&D provided to (1) the national labs and (2) universities and private industry, as well as the number of full-time equivalent staff in each office and staff costs.
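Conceptually, the data collection instrument amounts to a small fixed schema, filled out once per office and fiscal year. The sketch below illustrates one way such a record could be represented; the field names are illustrative rather than DOE's own, and no reported values are embedded in the code.

```python
from dataclasses import dataclass

@dataclass
class OfficeYearRecord:
    """One data collection instrument response: a single office in a single
    fiscal year. Field names are illustrative; dollar amounts in millions."""
    office: str                          # e.g., "ARPA-E"
    fiscal_year: int                     # FY2011 through FY2015
    rd_obligations_labs: float           # R&D obligations to national laboratories
    rd_obligations_universities: float   # R&D obligations to universities/industry
    staff_ftes: float                    # full-time equivalent federal staff
    staff_cost_obligations: float        # salaries/benefits, travel, support, etc.

def total_rd(record: OfficeYearRecord) -> float:
    """Total civilian R&D obligations reported by an office for one year."""
    return record.rd_obligations_labs + record.rd_obligations_universities
```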
In addition to the contact named above, Joseph Cook (Assistant Director), Matthew J. Ambrose, and David Messman made key contributions to this report. Also contributing to this report were Camilo Flores, Justin S. Fisher, Richard Johnson, Mae Liles, Cynthia Norris, and Dan C. Royer. | In fiscal year 2015, five DOE program offices and ARPA-E invested $7.36 billion for civilian R&D in DOE national laboratories as well as in universities, industry, and other entities. These civilian R&D investments (investments not related to nuclear security) supported diverse science and energy research areas, including energy efficiency, renewable energy, and nuclear energy. The five program offices and ARPA-E also obligated funds for staff to oversee these R&D investments—referred to as staff costs in this report—which include federal staff salaries and benefits, travel, support services, and other costs. GAO was asked to review DOE's oversight of its civilian R&D investments. This report discusses (1) the activities selected DOE offices use to oversee investments in civilian R&D, and (2) staffing levels and costs associated with DOE oversight of civilian R&D. GAO obtained staffing and obligations data from the five DOE program offices and ARPA-E that funded civilian R&D for fiscal years 2011-2015, the most recent years for which data were available; examined DOE policies, plans, and guidance; and interviewed DOE officials. GAO selected three of the five program offices for detailed review because they oversee nearly 90 percent of DOE's civilian R&D investments and 12 of the 13 national laboratories that primarily conduct civilian R&D. GAO used a broad definition of oversight, including any activity that directly or indirectly supported DOE's R&D mission. In commenting on a draft of this report, DOE generally agreed with GAO's findings. Three Department of Energy (DOE) program offices that GAO selected for detailed review—the offices of Energy Efficiency and Renewable Energy, Nuclear Energy, and Science—use various activities to oversee civilian research and development (R&D) investments. Activities to identify research priorities. The program offices obtain input from multiple sources to help determine the areas in which DOE invests in research at its national laboratories, as well as in universities and industry. For example, the Office of Nuclear Energy sponsored workshops in 2015 that sought to identify ideas for advancing nuclear energy technologies. Activities to oversee investments at national laboratories. The program offices require that the laboratories they oversee develop strategic plans to help ensure DOE investments in these laboratories support national R&D priorities. They also monitor and review individual laboratory R&D projects. For example, in fiscal year 2015, the Office of Science oversaw over 1,600 new or ongoing laboratory projects that received $3.67 billion in obligations. Finally, the program offices annually assess each laboratory contractor's scientific, technological, managerial, and operational performance. Activities to oversee investments in universities, industry, and other entities. To help determine where DOE invests in civilian R&D, the program offices review R&D proposals from universities, industry, and other entities.
According to data provided by DOE, in fiscal year 2015 the three program offices conducted or managed more than 5,600 proposal reviews—with each review including as many as 3 to 4 individual reviewers—and selected 1,490 proposals for new financial assistance awards. The program offices then monitored and periodically reviewed the awarded proposals. Staffing levels for oversight of civilian R&D decreased by 11.0 percent from fiscal year 2011 to fiscal year 2015 in five DOE program offices—those noted above, plus two others that oversee a smaller percentage of DOE's civilian R&D investments—and in the Advanced Research Projects Agency-Energy (ARPA-E). At the same time, obligations for staff costs and civilian R&D investments increased by 2.4 percent and 3.8 percent, respectively, without adjusting for inflation (obligations declined slightly when adjusted for inflation). Staffing levels and costs changed to varying degrees among the offices and ARPA-E. For example, staff costs increased in three of the offices and ARPA-E but decreased in the other two offices. Obligations for staff costs made up 7.6 percent of total obligations (R&D and non-R&D obligations) in fiscal year 2015; they also varied among program offices and ARPA-E, ranging from 3.6 percent to 21.4 percent. |
The Military Health System operated by DOD is large and complex and has a dual health care mission—readiness and benefits. The readiness mission provides medical services and support to the armed forces during contingency operations and involves deploying medical personnel and equipment, as needed, around the world to support military forces. The benefits mission provides medical services and support to members of the armed forces, their family members, and others eligible for DOD health care, such as retired servicemembers and their families. DOD’s health care mission is carried out directly through military medical centers, hospitals, and clinics throughout the United States and overseas, commonly referred to as military treatment facilities, as well as by civilian health care providers through TRICARE. Military treatment facilities make up DOD’s direct care system for providing health care to beneficiaries. DOD’s delivery of health care services includes, among other things, inpatient and outpatient care. Inpatient care refers to care for a patient who is formally admitted to a hospital or an institution for treatment or care. Outpatient care, also known as ambulatory care, refers to health care services for an actual or potential disease, injury, or lifestyle-related problem that does not require admission to a medical treatment facility for inpatient care. The Assistant Secretary of Defense (Health Affairs) is responsible for ensuring the effective execution of DOD’s health care mission and exercises authority, direction, and control over medical personnel authorizations and policy, facilities, funding, and other resources within DOD. The TRICARE Management Activity operates under the authority, direction, and control of Health Affairs. In 2008, the TRICARE Management Activity approved plans to renovate LRMC and the 86th MDG clinic at their existing locations. The initial LRMC plans included renovation of the inpatient tower; construction of an additional tower for emergency medicine, inpatient nursing units, and other clinical and support activities; and demolition of older facilities. The initial plans for the 86th MDG clinic included construction of a single building to consolidate health care services provided at the separate facilities that currently make up the 86th MDG clinic. In 2009, the Office of the Deputy Under Secretary of Defense (Installations and Environment), together with Health Affairs, conducted a cost-benefit analysis that included consideration of alternative sites as well as consolidation of the two projects into a single medical center, and determined that consolidating the aging LRMC and 86th MDG clinic into one new facility that provides tertiary care in an area adjacent to Ramstein Air Base, known as the Weilerbach Storage Area, would be more efficient and cost-effective than pursuing two separate renovation or reconstruction projects. The replacement medical center will be operated and maintained by the Army, with the Air Force to provide clinical services that are currently offered at the 86th MDG clinic. The version of DOD’s guidance governing the planning and acquisition of military health facilities (DOD Instruction 6015.17) that was in effect when the facility requirements for the replacement medical center were determined in 2010 described the procedures to be used by the military departments to prepare project proposals for military treatment facilities. This instruction also identified the types of documentation needed to support a project proposal.
The documentation includes, among other things, the current and projected beneficiary population served in a military treatment facility’s catchment area, as well as current and projected staffing and workload data. Army Medical Command, with input from the Air Force Medical Support Agency, developed a report that summarizes the projected health care requirements for Military Health System beneficiaries in the areas served by the proposed medical center. Generally, the combination of workload data and staffing requirements is a key consideration in determining the size and configuration of military treatment facilities. These facility space requirements are identified in a Program for Design document, which lists square footage requirements per medical department and room. The estimated square footage is then used as the basis for developing overall project cost estimates as reflected on DD Form 1391 (Military Construction Project Data), the standard format used throughout DOD to support the planning and execution of military construction projects. Figure 1 provides an illustration of the process used in determining project costs for the replacement medical center. In planning for the proposed replacement medical center, DOD officials considered beneficiary population data, contingency operations, and changes or expected changes in troop strength known at the time. However, more recent posture changes, announced in January 2012, are currently being assessed by military medical officials for their impact on the replacement medical center. DOD used beneficiary population data as of March 2010 and data on historical patterns of patient migration to identify the areas served by the proposed replacement medical center. A majority of the beneficiaries expected to receive health care from the replacement medical center are located within a 55-mile radius of it. DOD officials told us that because the replacement medical center was designed for peacetime operations—with the capacity to expand to meet the needs of contingency operations—reductions in ongoing contingency operations in Afghanistan would not have an impact on facility requirements. DOD posture in Europe has been reduced over the past few years, and DOD had previously announced that one of four brigade combat teams currently stationed in Europe would be removed by 2015. According to DOD officials, this posture change was not expected to have a significant impact on the size of the replacement medical center because DOD plans to continue to use the facilities at Baumholder, Germany, which will be vacated by the brigade combat team, for other DOD personnel. In January 2012, DOD announced its decision to remove a second brigade combat team currently stationed in Europe, thereby reducing the remaining number of brigade combat teams in Europe to two—one stationed in Germany and the other in Italy. At the time of our review, DOD officials told us that they were in the process of assessing these proposed changes in posture to better understand their ramifications for DOD’s medical facility needs. The replacement medical center will serve as the only tertiary-level referral hospital for the EUCOM, Central Command, and Africa Command theaters of operation. Because of these unique aspects, medical planners said they did not use typical DOD catchment area standards.
Military treatment facilities are typically designed to offer sufficient health care for active duty beneficiaries and their dependents within a 40-mile radius of the military treatment facility. In the case of LRMC, medical planners determined that the historical patterns of care indicated that this area should be a 55-mile radius. Medical planners in the Office of the Secretary of Defense, the Army, and the Air Force analyzed historical patterns of patient migration and contingency operations at LRMC and the 86th MDG to define four catchment areas. See figure 2 for the location of these four catchment areas. The four catchment areas, as defined by military medical planners, are based on populations of patients who are enrolled as beneficiaries or who are eligible to enroll for the following locations: 1. The Kaiserslautern Military Community catchment area includes all beneficiaries enrolled in LRMC, 86th MDG, and Kleber/Kaiserslautern military treatment facilities. This catchment area is approximately 55 miles in radius surrounding the proposed facility’s site. 2. The Germany-wide catchment area includes all beneficiaries enrolled in the Kaiserslautern Military Community catchment area plus beneficiaries enrolled in the military treatment facilities in Germany. This catchment area definition was essential in determining the patterns of enrolled beneficiaries’ use of German health care. 3. The Europe Regional Medical Command catchment area includes all beneficiaries in the Germany-wide catchment area plus beneficiaries enrolled in all military treatment facilities in Italy and Belgium. This catchment area reflects historical inpatient referral patterns at LRMC. 4. The EUCOM catchment area includes all enrolled beneficiaries and eligible beneficiaries in Europe, including all beneficiaries in the other three catchment areas. Table 1 shows the beneficiary population, by catchment area and beneficiary category, as of March 2010. In appendix II we include catchment area populations by beneficiary category for fiscal years 2006 through 2011. According to DOD officials, the flow of patients from theaters of operation, including contingency operations, minimally affects the volume of inpatient care at LRMC and outpatient care at both LRMC and 86th MDG. Table 2 shows that approximately half of all inpatient care at LRMC, a little more than 77 percent of outpatient care at LRMC, and almost 96 percent of outpatient care at the 86th MDG is provided to beneficiaries located within the Kaiserslautern Military Community catchment area as well as the Germany-wide catchment area. According to DOD officials, the replacement medical center is being sized for peacetime operations, not for contingency operations. However, these officials told us that the replacement medical center is being designed with the flexibility to expand capacity during surges to be able to handle casualties that result from contingency operations. DOD officials determined that the replacement medical center should be able to accommodate contingency operations’ medical needs similar to those experienced in Fallujah, Iraq, during November 2004, in which the United States sustained about 100 casualties and 600 wounded over a 2-month period. For this reason, the new medical center is designed to be able to nearly double its medical/surgical bed capacity if needed to support contingency operations.
According to Army officials, to mitigate the increase in patient workload resulting from surges caused by contingency operations, the new medical center will follow the procedures currently in use at LRMC. These procedures require that priority be given to active duty servicemembers; therefore, other beneficiaries normally treated at LRMC would be directed to German health care facilities during a time when surge capability is needed (and capacity is constrained) and then redirected back to LRMC when the workload from contingency operations lessens. DOD has been reducing its military posture in Europe since German reunification in 1990. At its peak, the United States had approximately 350,000 active duty servicemembers stationed in EUCOM’s area of responsibility. The size of DOD’s military posture in EUCOM’s area of responsibility is currently estimated at about 78,000 active duty servicemembers. DOD has been reducing its medical treatment capacity over time to correspond to the reduction in the number of military servicemembers stationed in Europe. Today, LRMC is DOD’s only remaining tertiary care medical center in Europe. Furthermore, it is the only medical center in Europe, Asia, or Africa that serves beneficiaries from the EUCOM, Central Command, Africa Command, and Special Operations Command areas of responsibility. In 2004, DOD announced its plans for an overseas basing strategy that called for reducing the number of Army brigade combat teams stationed in Europe from four to two. However, in the February 2010 Quadrennial Defense Review, DOD decided that it would retain all four Army brigade combat teams in Europe, rather than returning two to the United States as originally planned. Moreover, in April 2011, based on several factors, including consultations with allies and the findings of the North Atlantic Treaty Organization's new Strategic Concept, DOD announced that it planned to remove only a single brigade combat team from Europe by 2015. According to DOD officials, the brigade they anticipated removing from Europe was stationed at U.S. Army Garrison (USAG) Baumholder, Germany, initially leaving brigades at USAG Grafenwoehr and USAG Vilseck, which are located close to one another in Germany, and at USAG Vicenza, Italy. There are also elements of the Grafenwoehr brigade at USAG Schweinfurt, Germany. DOD also has plans to eventually close four Army locations in Germany—Heidelberg, Mannheim, Bamberg, and Schweinfurt. As a result of these closures, the elements of the Grafenwoehr brigade at Schweinfurt were expected to move to Grafenwoehr when Schweinfurt closed. As of the date of this report, the four brigade combat teams are still assigned at their original locations in EUCOM. The April 2011 announcement also included a DOD decision to station four Aegis cruisers in Spain, a change that would increase the military beneficiary population in Europe. Figure 3 shows the locations of DOD military installations in Europe where posture changes are expected to take place that could affect the facility requirements for the replacement medical center. The brigade combat team currently located at Baumholder is within the Kaiserslautern Military Community catchment area, and its departure is expected to reduce the beneficiary population. According to Army officials, the brigade consists of approximately 4,200 soldiers, who are accompanied by about 6,300 dependents.
However, according to DOD officials, when this brigade leaves Baumholder other DOD personnel will be restationed there because Baumholder is considered an enduring installation with accessible joint military training facilities nearby. Army officials also told us that because some of the housing at Baumholder is substandard, they expect only 2,300 to 3,500 servicemembers to move to Baumholder. Using the Army ratio of 1.5 dependents to each military member indicates that as approximately 10,500 servicemembers and their dependents who are medical beneficiaries of LRMC leave the catchment area, they will be replaced by 5,750 to 8,750 new servicemembers and their dependents—an overall reduction in the Kaiserslautern Military Community catchment area of 1,750 to 4,750 beneficiaries. DOD officials told us that even though the beneficiary population at Baumholder will be reduced, they expect this change to have little impact on the workload and sizing requirements for the replacement medical center. In October 2009, DOD hired an independent contractor, Noblis, to perform a sensitivity analysis that would provide an order-of-magnitude estimate of the potential changes to the beneficiary population that would need to occur to affect the size of the facility. This sensitivity analysis was further refined and updated in 2010. It specifically assessed the type of population changes that would require the addition or subtraction of intensive-care unit (ICU) and medical/surgical beds, as well as specialty care exam rooms for outpatients. The analysis concluded that the planned capacities for the replacement medical center would be resilient to sizable changes in the population served. Resizing the ICU or medical/surgical bed requirement by adding or subtracting a 20-bed module would require a population change of about 70,000 beneficiaries—a change in the total EUCOM beneficiary population of about 29 percent. Similarly, resizing the specialty care exam room requirement by adding or subtracting a module of 8 to 10 exam rooms would require a population change of 25,000 to 31,000 beneficiaries—a change in the total EUCOM beneficiary population of between 10 percent and 13 percent. DOD officials told us that changes in the beneficiary population are expected to occur in the EUCOM catchment area through 2015. Although some of these changes will increase the population in certain locations, the overall change will be a reduction in the overall number of beneficiaries in EUCOM’s area of responsibility. The following beneficiary changes are expected: The Army expects a reduction in the Europe Regional Medical Command’s population of active duty servicemembers and their dependents of about 21,000—a reduction in the total EUCOM beneficiary population of about 8 percent—by fiscal year 2015, according to the Updated (FY10) Health Care Requirements Analysis. However, it does not expect a significant change to the beneficiary population in the immediate Kaiserslautern Military Community catchment area. The Air Force does not expect a change in its beneficiary population through fiscal year 2015. The Navy expects to gain about 1,200 sailors from the stationing of the Aegis cruisers in Rota, Spain, along with about 1,300 additional dependents—for a total increase of about 2,500 beneficiaries, or a 1 percent gain in the total EUCOM beneficiary population.
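The population arithmetic above can be made explicit. The sketch below, in Python, applies the Army's 1.5-dependents-per-servicemember planning ratio to the Baumholder restationing range and then compares the net expected EUCOM-wide change against the resizing thresholds from the Noblis sensitivity analysis; all figures are taken from this report, and the variable and function names are illustrative.

```python
DEPENDENTS_PER_MEMBER = 1.5  # Army planning ratio

def beneficiaries(servicemembers):
    """Servicemembers plus their dependents."""
    return int(servicemembers * (1 + DEPENDENTS_PER_MEMBER))

# Baumholder: departing brigade versus the expected replacement population
departing = 4_200 + 6_300                                # 10,500 beneficiaries leave
arriving = (beneficiaries(2_300), beneficiaries(3_500))  # 5,750 to 8,750 arrive
print(departing - arriving[1], departing - arriving[0])  # net loss of 1,750 to 4,750

# Net EUCOM-wide change expected through fiscal year 2015:
# Army -21,000; Air Force no change; Navy +2,500 (Rota, Spain)
net_change = -21_000 + 0 + 2_500                         # about -18,500 beneficiaries

# Resizing thresholds from the 2009 sensitivity analysis (updated in 2010)
BED_MODULE_THRESHOLD = 70_000       # one 20-bed ICU/medical-surgical module
EXAM_ROOM_THRESHOLD_LOW = 25_000    # one module of 8 to 10 exam rooms (low end)
print(abs(net_change) >= BED_MODULE_THRESHOLD)     # False: bed count unaffected
print(abs(net_change) >= EXAM_ROOM_THRESHOLD_LOW)  # False: exam rooms unaffected
```

The net expected change of roughly 18,500 beneficiaries falls below both thresholds, which is consistent with the conclusion that follows.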
Based on the results of DOD’s 2009 sensitivity analysis, the expected changes would not necessitate a change in the number of ICU beds, medical/surgical beds, or outpatient exam rooms. In January 2012, however, DOD announced new posture decisions that will further reduce EUCOM’s troop strength. According to DOD, these posture decisions are part of a deficit reduction package based on the Budget Control Act of 2011 requirement to reduce the department’s future expenditures by approximately $487 billion over the next decade. EUCOM data indicate that by 2015 approximately 71,500 active duty military servicemembers will remain in Europe following the latest changes to DOD’s European posture. According to the January 2012 DOD publication Defense Budget Priorities and Choices, DOD has updated its April 2011 plans for its European basing strategy and has stated that it intends to now remove two brigade combat teams from Europe. These two brigades are currently located at Baumholder and Grafenwoehr with elements of the brigade in Grafenwoehr located in Schweinfurt. As a result, the elements in Schweinfurt will not relocate to Grafenwoehr as previously planned. DOD’s decision to remove two brigades from Europe and how this shift in troop numbers will affect health care requirements in the EUCOM area of responsibility have yet to be fully determined. However, DOD officials noted that they did not believe the removal of a second brigade combat team would affect the beneficiary population of the replacement medical center because the second brigade is currently stationed outside the immediate Kaiserslautern Military Community catchment area. DOD officials told us that they have started a review to confirm that the shift in DOD posture will not affect the requirements for the proposed replacement medical center. They noted that recent troop reductions are being studied to determine what impact, if any, they will have on the proposed size of the replacement medical center. They also noted that they are developing a sensitivity analysis to accommodate the information and will include it as part of DOD’s statutorily required recertification of the facility. As of the date of this report, they had not completed the study because along with the recertification, DOD must also submit a plan for implementing GAO’s recommendations with respect to the LRMC facility. When developing facility requirements for the replacement medical center, DOD officials incorporated many patient quality of care and environmentally friendly design standards. However, our review of the documentation DOD provided in support of these facility requirements revealed gaps, inconsistencies, and calculation errors that required extensive explanation by DOD officials to understand the deviations and decisions made to develop the requirements. Without clear documentation that explains how the analyses were performed and any adjustments made, stakeholders and decision makers lack reasonable assurance that the proposed replacement medical center will be appropriately sized to meet the needs of the expected beneficiary population in Europe. DOD officials used checklists and discussions with external health care providers to incorporate updated patient quality of care standards into the facility requirements for the replacement medical center; they also incorporated environmentally friendly design standards. 
They used DOD’s military hospital construction checklists to ensure that they incorporated updated patient quality of care standards, such as evidence-based design and world-class standards, when determining the size of the replacement medical center. For example, DOD officials told us they used the Evidence Based Design Checklist—which DOD created in August 2007 and updated in 2009—to incorporate design concepts into health care construction projects that have impacts on patient-centered care. Examples of evidence-based design include single-patient instead of multiple-patient rooms to better accommodate family involvement in the provision of care and to better control infections, and studying layouts and workspace ergonomics to maximize work pattern efficiency. Additionally, DOD officials and the architectural and engineering firm contracted for the design of the replacement medical center used DOD’s Military Health Service World-Class Checklist to ensure that world-class standards were integrated into the facility’s design. The checklist identifies areas for DOD officials to research to help ensure that world-class standards are systematically developed, validated, and communicated with project teams. The completed checklist described examples of how world-class standards—which encompass many of the evidence-based designs from the Evidence Based Design Checklist—were integrated into the facility’s design. Some of the world-class standards incorporated into the facility requirements were (1) optimizing the size and position of the patient windows to provide exterior views for the patient from the bed, (2) providing patient and family control over the environment in the patient room (e.g., heating and cooling), and (3) providing full height walls with higher noise transmission ratings (a higher noise transmission rating blocks more noise from transmitting through a wall) in spaces where patients would be asked to disclose personal information. DOD officials told us they also met with officials from Department of Veterans Affairs’ hospitals, private sector hospitals, and German hospitals to obtain information on evidence-based practices for providing health care that could be applied to the replacement medical center’s design. DOD has also incorporated additional environmental and efficiency features into the design of the replacement medical center and expects to exceed the U.S. Green Building Council’s Leadership in Energy and Environmental Design (LEED) green building standards, which have been adopted by several federal agencies. The LEED system awards points for meeting a variety of standards and certifies buildings as silver, gold, or platinum. The replacement medical center’s current design will likely qualify for a “silver” certification. However, the facility’s extensive energy efficiency and renewable energy features indicate that it may qualify for a “gold” certification once it has met the more stringent German design requirements. For example, the project will use low water plumbing fixtures and commercial kitchen equipment available in Germany to reduce water use and achieve higher efficiency. DOD sized the replacement medical center based on projected patient workload data. 
However, our review of the planning documentation DOD provided in support of its facility requirements showed that there were (1) inconsistencies in how DOD projected patient workload and applied the planning criteria, (2) some areas where the planning documentation did not clearly show how DOD officials had applied the formulas provided in the criteria to generate requirements, and (3) calculation errors throughout. DOD guidance in effect when the facility was designed provided that when designing medical facilities, planners should develop patient workload factors—both current and projected—and use these factors to determine the sizing requirements for the facility. While DOD officials acknowledged that inconsistencies, gaps in documentation, and calculation errors existed in the requirements documentation, they did not think the identified issues alone would necessitate a revision of the facility requirements. However, because DOD has not yet determined the effects of the newly proposed posture changes on projected patient workload—which in turn drives the requirement for the facility size—it is not known whether the inconsistencies, gaps, and calculation errors coupled with the posture change will require DOD to revise its facility requirements. DOD officials plan to examine these concerns in their recertification process. The Updated (FY10) Health Care Requirements Analysis report for LRMC captures some of the data and steps DOD used to determine the sizing requirements for the replacement medical center (see table 3 for the sizing requirements that DOD developed, by medical center department). Inconsistencies in projecting workload and applying criteria. To project most inpatient and outpatient workload for the replacement medical center, DOD officials used fiscal year 2010 estimated patient workload data as a baseline. However, they used different baseline data in different parts of the analysis. For example, in determining the number of labor and delivery rooms, DOD officials did not use workload data from fiscal year 2010 as the baseline. According to DOD officials, the obstetrician workload has historically been relatively stable. Therefore, they used the labor and delivery room workload data from the Health Care Requirements Analysis, which had been conducted in fiscal year 2008 to support the original plan for renovating and reconstructing LRMC, and determined that those data were accurate enough for their purposes. Once DOD officials determined what projected workload data to use in their calculations for the new facility, they were to use the criteria in DOD Space Planning Criteria for Health Facilities to calculate the facility's requirements, for example, the appropriate number of inpatient beds and outpatient exam rooms. DOD officials generally used the formulas provided in this document, but they applied them inconsistently when determining the appropriate size for individual departments within the facility. For example, the space planning criteria direct DOD officials to divide an inpatient department's projected workload—in this case, the average daily census—by a particular occupancy rate to determine the number of inpatient beds that would be required. The criteria specify that certain inpatient beds should be designed in modules of 4, 6, or 8 beds. DOD generally followed these criteria in calculating the number of nursing unit medical/surgical beds, a type of inpatient bed. The criteria specify an occupancy rate of 85 percent for inpatient medical/surgical beds.
Following this formula, DOD officials divided the projected average daily census (48.7 patients) by 0.85. This calculation resulted in a requirement for 57.3 beds. To conform to the modular grouping criteria, DOD officials rounded to 60 beds. (A short sketch replicating this arithmetic follows this passage.) However, in determining the number of inpatient behavioral health beds, DOD officials deviated from these criteria. The projected average daily census for behavioral health was 24 patients. The space planning criteria specify a 70 percent occupancy rate for psychiatric (i.e., behavioral health) beds when the average daily census is fewer than 25 patients, instead of the 85 percent occupancy rate specified for nursing unit medical/surgical beds. Nevertheless, DOD officials used an 85 percent occupancy rate to calculate the requirement for behavioral health beds. This resulted in a requirement for 28.2 beds—rounded to 30 beds to conform to the modular grouping criteria. According to DOD officials, they chose a different occupancy rate because they reasoned that, since the space planning criteria had not been updated to reflect the shift to single-occupancy rooms, the 70 percent rate would likely result in a requirement for a higher number of beds. Following the space planning criteria's guidance would have produced a requirement for 34.3 beds, which would have been rounded to 36 beds to account for the modular grouping criteria. As a result, the need for behavioral health beds may actually be higher than DOD officials determined. The documentation did not clearly convey the reasons for the deviations or adjustments DOD officials made when applying the criteria, and as a result, decision makers may lack reasonable assurance that the number of beds required would be sufficient to meet the needs of the expected beneficiary population in Europe. Although these deviations or adjustments may not adversely affect the size of the replacement medical center, their effect when combined with the yet to be assessed posture changes remains unknown. Inadequate documentation of how facility requirements were estimated. DOD's documentation of its processes for determining the replacement medical center's sizing requirements did not always clearly indicate how DOD officials had generated these requirements and omitted details that would have helped demonstrate how DOD officials had determined the size of the replacement medical center. For example, DOD's planning documentation reported contradictory methods for projecting patient workload. According to the Updated (FY10) Health Care Requirements Analysis, DOD used three different scenarios to project the facility's workload, resulting in a low, a midrange, and a high projection; all three scenarios used estimated patient workload data from fiscal year 2010 as the baseline: Scenario A excluded the workload attributable to the conflicts in Iraq and Afghanistan and assumed that the change in patient workload would continue to follow the trend set over the previous 5 years. Scenario B adjusted for potential future decreases in the beneficiary population and assumed that the change in patient workload would continue to follow the trend set over the previous 5 years. Scenario C assumed that the change in patient workload would continue to follow the trend set over the previous 5 years and made no exclusions or adjustments.
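The occupancy-rate calculations described above reduce to a simple formula: divide the projected average daily census by the occupancy rate, then round up to fill complete bed modules. The sketch below is ours, not DOD's tool, and assumes a 6-bed module, which reproduces all three figures cited above.

```python
import math

def beds_required(avg_daily_census, occupancy_rate, module_size=6):
    # Space planning criteria: average daily census divided by the
    # occupancy rate, rounded up to the next full bed module.
    raw_beds = avg_daily_census / occupancy_rate
    return math.ceil(raw_beds / module_size) * module_size

print(beds_required(48.7, 0.85))  # medical/surgical: 57.3 -> 60 beds
print(beds_required(24, 0.85))    # behavioral health as DOD calculated: 28.2 -> 30
print(beds_required(24, 0.70))    # behavioral health per the criteria: 34.3 -> 36
```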
The Updated (FY10) Health Care Requirements Analysis first reported using Scenario B—the scenario that resulted in midrange projections—to project inpatient and outpatient workload for the replacement facility. However, later sections of the document report the use of different methods to project patient workload. DOD officials confirmed that they had used a combination of methods to project inpatient and outpatient workload, and that they had used Scenario B only to validate these projections after they had calculated them. These officials acknowledged that the Updated (FY10) Health Care Requirements Analysis could have better documented how these projections were developed. The lack of clear documentation makes it difficult to understand the processes used without extensive explanation by DOD officials. In addition, the Updated (FY10) Health Care Requirements Analysis omitted details on how DOD officials developed certain data. For example, the document does not show how DOD officials projected inpatient workload for behavioral health beds, only noting that the projected average daily census was 24 patients. Although the Updated (FY10) Health Care Requirements Analysis did not document how the average daily census was calculated, DOD officials told us that the historical data on inpatient behavioral health workload were not sufficient for projecting workload because LRMC's behavioral health inpatient capacity was such that any beneficiaries other than active duty servicemembers were referred to the German economy for treatment. Therefore, the officials said they used another method (Scenario C) to project workload, so that the facility would have the inpatient behavioral health capacity to treat additional patients. The planning documentation also does not show how DOD officials projected the number of providers required for outpatient ambulatory departments. The Updated (FY10) Health Care Requirements Analysis contains a table with the number of outpatient ambulatory providers but does not show how or whether projected outpatient workload data for the replacement medical center were used to determine the number of outpatient providers that would be required. These gaps in documentation make it unclear whether the size of the replacement medical center will be adequate to meet the needs of the beneficiary population, and when combined with potential posture changes and the previously discussed deviations or adjustments, the extent to which they may affect the size of the facility is unknown. Calculation errors in the planning documentation. We also found several calculation errors within the Updated (FY10) Health Care Requirements Analysis report. One table in the report that shows historical (5-year average), baseline, and projected workload for inpatient and outpatient care had errors in the 5-year average column for inpatient dispositions and bed days of care. When we spoke with DOD officials, we pointed out these errors. DOD officials acknowledged the errors and noted that the correct numbers could be found in a separate table in the report's appendix—although the appendix table was not listed as a reference to support the historical workload numbers. Additionally, a table in the report's appendix, which illustrated the different projected inpatient and outpatient workload data calculated using the three different scenarios, had many calculation errors in the projected outpatient workload columns.
Specifically, in calculating projected workload using Scenarios A and B, DOD incorrectly used the 5-year average—instead of the fiscal year 2010 data—as a baseline, and when using Scenario C, DOD adjusted for potential decreases in the beneficiary population, although this scenario did not call for such an adjustment. As a result, the projected outpatient workload under Scenario B, for example, was calculated to be 288,534 encounters instead of 328,944 (a 14 percent difference). The projected data derived by incorrectly applying Scenario B were then used in another table in the report's appendix to verify that the projected outpatient provider staffing would be sufficient to treat the projected number of outpatients. DOD officials acknowledged the error and provided us with correct data. According to DOD officials, even though there was a 14 percent difference in the projected outpatient workload data, the outpatient provider staffing levels would still be sufficient. Although these calculation errors may not adversely affect the size of the replacement medical center, it remains unknown to what extent they will affect facility requirements when combined with the yet to be assessed posture changes, previously discussed deviations or adjustments, and gaps in documentation. Standards for internal controls include, among other things, control activities. Control activities include policies, procedures, techniques, and mechanisms that enforce management's directives. They can include a wide range of activities—such as authorizations, verifications, and documentation—that should be readily available for examination. Detailed and appropriate documentation is a key component of internal controls. Without clear documentation of key analyses, and of how adjustments to facility requirements were made, stakeholders lack reasonable assurances that the proposed replacement medical center will be able to provide the appropriate health care capacity to meet the needs of the beneficiary population it is expected to serve. In developing the cost estimate for the replacement facility, DOD followed many of the best practices in developing estimates of capital projects, but DOD minimally documented the data sources, calculations, and estimating methodologies used in developing the cost estimate. Further, it is anticipated that the replacement medical center will become the hub of a larger medical-services-related campus, for which neither cost estimates nor time frames have yet been developed. The GAO Cost Estimating and Assessment Guide contains cost estimating best practices that have been identified by GAO and cost experts within organizations throughout the federal government and industry. These best practices can be grouped into four general characteristics of sound cost estimating:
1. "Accurate" refers to being unbiased and ensuring that the cost estimating is not overly conservative or overly optimistic and is based on an assessment of most likely costs.
2. "Credible" refers to discussing any limitations of the analysis because of uncertainty or bias surrounding data or assumptions used in the cost estimating process.
3. "Comprehensive" refers to ensuring that cost elements are neither omitted nor double counted, and that all cost-influencing ground rules and assumptions are detailed.
4. "Well documented" refers to thoroughly documenting the process, including source data and significance, clearly detailed calculations and results, and explanations of why particular methods and references were chosen.
See appendix III for detailed information on each of these cost estimating characteristics. In addition, Office of Management and Budget (OMB) best practices note that programs should maintain current and well-documented estimates of program costs, and that these estimates should encompass the full life cycle of the program. The characteristics of sound cost estimating are divided into individual criteria, which we used to assess DOD's process for developing its cost estimate. Our process for evaluating the cost estimate consisted of assigning an assessment rating for the various criteria evaluated on a 1 to 5 scale: not met = 1, minimally met = 2, partially met = 3, substantially met = 4, and met = 5. Then, we took the average of the individual assessment ratings to determine an overall rating for each of the overarching characteristics: accurate, credible, comprehensive, and well documented. Criteria assessed as not applicable were not given a score and were not included in our calculation of the overall assessment. Furthermore, our review of DOD's process for developing the cost estimate does not reflect an assessment of how facility requirements were developed or their quality, but only a determination of whether they are described in technical documentation and reflected in the estimate. However, as discussed previously in this report, during our assessment of DOD's process for determining facility requirements for the replacement medical center, we found some calculation errors in the facility requirements. Table 4 provides a summary of our assessment of DOD's cost estimating process. We determined that the cost estimate for the replacement medical center had been updated as project requirements were better defined. The overall cost estimate was broken down into costs per square foot, which were based on historical records of costs and actual experiences from other comparable programs. Although the DD Form 1391 does not include documentation regarding how inflation was factored into the estimated costs for the replacement medical center, DOD officials told us that costs on the DD Form 1391 have been adjusted for inflation using departmental guidance. We found no evidence indicating that the cost estimate is biased. However, it is not possible to fully assess the accuracy and reliability of a cost estimate without conducting a risk analysis that indicates the confidence level associated with the project's estimated cost. Nevertheless, the independent estimate and estimate validation described below are sufficient to meet the requirements of this criterion. DOD hired an architecture and engineering firm to validate the cost estimate using a cross-check of major cost elements to determine whether alternative methods would have produced similar results. The contractor concluded that the cost estimate was valid. It also developed an independent cost estimate and determined that the design of the facility was within 1 percent of the size listed on the DD Form 1391, and that the resulting cost was also within 1 percent of DOD's cost estimate. DOD officials told us that they also hired a separate firm to develop sensitivity and risk analyses that were designed to meet GAO cost estimating standards as published in the Cost Estimating and Assessment Guide. However, we found some limitations in these analyses. The only cost drivers evaluated were the exchange rate, German inflation, the cost of various raw materials, and a composite labor rate.
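The rating-and-averaging scheme described above can be expressed compactly. The following sketch is illustrative only; the criterion ratings shown are hypothetical, not our actual assessment values.

```python
SCALE = {"not met": 1, "minimally met": 2, "partially met": 3,
         "substantially met": 4, "met": 5}

def characteristic_rating(criterion_ratings):
    # Average the criterion ratings within one characteristic; criteria
    # assessed as not applicable (None) receive no score and are excluded.
    scores = [SCALE[r] for r in criterion_ratings if r is not None]
    return sum(scores) / len(scores)

# Hypothetical ratings for the criteria under one characteristic.
print(characteristic_rating(["met", "substantially met", None, "partially met"]))  # 4.0
```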
These sensitivity and risk analyses did not evaluate the potential cost impact of variations in the beneficiary population, catchment area, level of care provided, or amount of battle-related injuries. Moreover, the analyses did not evaluate the cost impact of varying the square footage requirements documented in the Program for Design. According to best practices, for a cost estimate to be credible, key cost elements should be tested for sensitivity, and other cost estimating techniques should be used to cross-check the reasonableness of the ground rules and assumptions. It is also important to determine how sensitive the final results are to changes in key assumptions and parameters. DOD's cost estimating methodology for the replacement medical center substantially met best practice criteria for overall comprehensiveness, but some costs and assumptions were not included in the individual criteria that make up the comprehensive cost estimating characteristic. The cost estimate generally includes categories of costs for the design, construction, and outfitting of the replacement medical center. Additionally, DOD provided an appropriate work breakdown structure for the facility to help ensure that cost elements were neither omitted nor double counted. DOD also provided us with technical baseline documentation, including the Updated (FY10) Health Care Requirements Analysis report and the Program for Design, which defines the technical and programmatic requirements of the project. DOD officials told us that the technical baseline documentation was developed by qualified personnel—including a multidisciplinary team of health care planners, architects, and engineers—and has been updated as the project has evolved. We found no instances in which any costs for design, construction, and outfitting of the replacement medical center were omitted. Although DOD provided us with some cost information as well as technical baseline documentation, additional recurring life cycle costs were, for the most part, not available, resulting in this subcategory criterion for comprehensiveness being rated as minimally met. The cost estimate does not include any facility sustainment costs, costs for supporting infrastructure, or any operation and maintenance costs for personnel or equipment required to operate the facility. In addition, the cost estimate does not include costs associated with the disposition or retirement of proposed medical center facilities at the end of their life cycles, such as demolition or renovation costs. DOD officials also said that costs associated with the disposition of the current LRMC or the 86th MDG clinic are not included in the cost estimate. Army officials told us that the facilities that make up the current LRMC will remain under the auspices of the Army. These officials noted that following completion of the replacement medical center, ownership of the current LRMC facilities will transfer to Army Installation Management Command. Under this arrangement, these facilities will no longer be classified as part of the Military Health System. Therefore, Army officials told us that any costs associated with their disposition should not be included in the overall estimate for the replacement medical center. The 86th MDG clinic consists of 13 separate buildings. The remaining components that make up the current 86th MDG clinic will be transferred to Ramstein Air Base. According to 86th MDG officials, some of these buildings will remain in use following completion of the replacement medical center, while others will be demolished.
However, it has not been decided how the remaining clinic buildings will be used; the officials said that this decision will be made by the installation commander at Ramstein Air Base. Since demolition or continued use of the remaining facilities will require DOD funding, these costs should be captured; they will help to show the full cost impact of the replacement medical center project. Further, the cost estimate contains minimal documentation of cost-influencing ground rules and assumptions. DOD officials noted that some of the ground rules and assumptions have been included in the technical baseline documentation. However, we could not find a documented reference or link in the technical baseline documentation we examined to specific cost elements in the DD Form 1391. We also found no evidence of documentation of the risks associated with assumptions, which should be traced to specific cost elements. A life cycle cost estimate should encompass all past (or sunk), present, and future costs for every aspect of the program, regardless of funding source, including all government and contractor costs. Without a full accounting of life cycle costs, management will have difficulty successfully planning program resource requirements and making wise decisions about where to allocate resources. Cost estimates are typically based on limited information and therefore need to be bound by the constraints that make estimating possible. These constraints are usually defined by ground rules and assumptions. However, because such assumptions are best guesses, the risks associated with a change to any of these assumptions must be identified and assessed. Many assumptions profoundly influence cost; the subsequent rejection of even a single assumption could invalidate many aspects of the cost estimate. Unless ground rules and assumptions are clearly documented, a cost estimate will not provide a basis for developing resolutions concerning areas of potential risk. Furthermore, it will not be possible to reconstruct the estimate when the original estimators are no longer available. A well-documented cost estimate is essential if an effective independent review is to ensure that it is valid. However, the documentation DOD provided in support of its cost estimate did not clearly demonstrate how facility requirements had been factored into cost elements. DOD’s cost estimate lacked documentation that described, in detail, the calculations performed and the estimating methodology used to derive the cost for each element of the replacement medical center. None of the documents provided to us included detailed documentation of how DOD developed and refined the cost estimate. A complete documentation of source data would include, for each line item in the cost estimate, a reference to a specific data source or sources (including the document and page number) used as the basis for each square footage and unit cost amount. For example, the cost estimate contains line item estimates for electricity, water/sewer/gas, steam/chilled water distribution, and storm drainage. However, from the documentation provided, it is not possible to determine how these requirements were used to develop cost estimates. The technical baseline description and data in the technical baseline documentation are spread across several documents, including the Updated (FY10) Health Care Requirements Analysis report, Program for Design, and a Planning Charrette Discussion. 
However, only the Planning Charrette Discussion is referenced in the cost estimate on the DD Form 1391. Moreover, we found minor differences between the square footage requirements in the Program for Design and the cost estimate as described on the DD Form 1391. For example, the Program for Design reports a total gross square footage requirement of 1,293,409 and the cost estimate reports a total requirement of 1,340,731 square feet. It was not possible to compare square footage amounts for various components of the facility because of the differing levels of detail in the Program for Design and the cost estimate. The difference in square footage numbers between the Program for Design and the DD Form 1391 is not documented; therefore, the reasons for the difference are unclear. Since the technical baseline is intended to serve as the basis for developing a cost estimate, it should be discussed in the cost estimate documentation. Cost estimators should provide a briefing to management about how the estimate was constructed—including specific details about the program’s technical characteristics, assumptions, data, cost estimating methodologies, sensitivity, risk, and uncertainty—so management can gain confidence that the estimate is accurate, complete, and high in quality. However, we found no documentation of a detailed review and approval that included the estimate’s technical foundation, ground rules and assumptions, estimating methods, data sources, sensitivity analysis, risks and uncertainty, cost drivers, cost phasing, contingency reserves, or affordability. DOD officials confirmed our conclusion that their cost estimating process was not fully documented. They told us that they had developed supporting facility costs using expert opinion and parametric models; however, these were not listed in the cost estimate. According to DOD officials, DOD guidance does not require detailed documentation as part of the DD Form 1391 cost estimate. Under DOD’s cost methodology, as the project design matures, so does the level of cost analysis. DOD officials asserted that the current cost estimate is appropriate for the current level of design. DOD officials acknowledged that better documentation would have provided more support and information to the various decision makers in the process and would be a good practice to follow. If the cost estimate for the replacement medical center does not include detailed documentation, stakeholders cannot reasonably conclude that it is reliable. In addition, DOD and Congress may not have the information they need to make fully informed decisions about the facility. If a cost estimate does not fully account for life cycle costs, management will have difficulty successfully planning program resource requirements and making wise decisions. Poorly documented cost estimates can cause a program’s credibility to suffer, because the documentation cannot explain the rationale of the methodology or the calculations underlying the cost elements. Further, without clear technical baseline documentation, the cost estimate will not be based on a comprehensive program description and will lack specific information regarding technical and program risks. Unless the cost estimate is fully documented, it cannot be reconciled with an independent cost estimate. 
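One example of the kind of detail that fuller documentation would resolve is the square footage discrepancy noted above. The gap works out as follows; the percentage is our arithmetic, not a figure reported by DOD.

```python
program_for_design = 1_293_409  # gross square feet, Program for Design
dd_form_1391 = 1_340_731        # gross square feet, cost estimate (DD Form 1391)

gap = dd_form_1391 - program_for_design
print(gap, f"({gap / program_for_design:.1%} of the Program for Design total)")
# 47322 (3.7% of the Program for Design total)
```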
DOD officials told us that the replacement medical center will be a fully functioning military treatment facility and will not require any additional support facilities to fulfill its mission of providing inpatient and outpatient care. However, in the Strategic Concept of Operations section of the Updated (FY10) Health Care Requirements Analysis report for the replacement medical center, the center is described as being the hub of a medical-services-related campus at Weilerbach Storage Area. The medical campus is expected to be an integrated health care campus that would include hospital and ancillary components as well as outpatient, administrative, and educational components. The other facilities that DOD expects to develop for this campus under separate military construction projects include warrior transition unit facilities, medical transition detachment housing, and possibly medical troop barracks, among other facilities. At this time, DOD has not determined the additional costs for these facilities, nor has it developed a time frame for their construction. However, Army officials told us that plans for the campus concept are still predecisional and that certain facilities would be replicated at Weilerbach Storage Area only following the expiration of their useful life. For instance, the child care center near the current LRMC will remain there until it requires renovation or reconstruction. At that point, a similar facility would be constructed at Weilerbach Storage Area to replace it, so that staff working at the replacement medical center would not have to leave the area for day care services for their children. The need to replace the outdated LRMC and the 86th MDG clinic to ensure that military servicemembers and their families receive the care they deserve is widely recognized. A critical step toward meeting this goal is the development of a credible and comprehensive assessment of the facility requirements and the cost of the replacement medical center. DOD's evolving posture in Europe will likely have an impact on the size of the beneficiary population served by the replacement medical center. However, DOD's current needs assessment contains inconsistencies and errors in how it used patient workload and staffing data to determine facility requirements, such as facility size. In several situations, DOD officials adjusted the criteria being used but failed to document their rationale or need for taking these steps. Moreover, the documentation used to support the determination of the facility requirements does not clearly describe the methodology or calculations used to develop the requirements, and these requirements provided the basis for the cost estimate. DOD officials have indicated that the issues GAO has identified may not have a substantial impact on the size of the replacement medical center, but they have not yet taken specific action to determine what the individual or cumulative effects would be. DOD's cost estimating methodology substantially met many best practices criteria but was only minimally documented. Congress has required that the Secretary of Defense recertify to the Appropriations Committees in writing that the replacement medical center is properly sized and scoped to meet current and projected health care requirements. With this recertification, DOD has an opportunity to determine the impact the proposed posture changes will have on the proposed facility requirements and to revise its documentation to provide clear support for how it developed its facility requirements.
Without clear documentation of how key requirements were developed and how they factored into the development of facility requirements and cost, DOD cannot fully demonstrate that the proposed replacement medical center will provide adequate health care capacity at the current estimated cost. To ensure that the replacement medical center is appropriately sized to meet the health care needs of beneficiaries in a cost-effective manner, we recommend that, as part of the facility's recertification process, the Secretary of Defense direct the Assistant Secretary of Defense (Health Affairs) to take the following two actions: (1) provide sufficient and clear documentation on how medical planners applied DOD criteria to determine the facility's requirements, including how and why medical planners made adjustments to the criteria, and (2) correct any calculation errors and show what impact, if any, these errors had on the sizing of the facility. Furthermore, in light of recently announced posture changes and potential adjustments that may need to be made in facility requirements based on correcting identified calculation errors in the original documentation, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense (Health Affairs) to revise the cost estimate for the center, incorporating the best practices outlined in the GAO Cost Estimating and Assessment Guide, to (1) reflect the potential posture changes and update the estimate with the revised calculations as part of the recertification process, and (2) more thoroughly document the data, assumptions, calculations, and methodology used to develop specific cost elements. In written comments on a draft of this report, DOD agreed with our conclusions and each of our recommendations. DOD stated that it recently conducted a reassessment of the original $1.2 billion project submitted in the Fiscal Year 2012 President's Budget request that responds to GAO's recommendations by utilizing the most current data, including recently announced force structure changes, and providing a documented audit trail of how the size, scope, and cost of the alternatives were developed. Although we are encouraged that DOD has performed a reassessment, DOD did not make it available for our review. DOD's comments noted that the reassessment will be provided once approved by the Secretary of Defense. As a result, we are unable to confirm at this time that these actions have been taken. Therefore, we believe our recommendations remain appropriate until the reassessment is released and its documentation is made available. DOD also provided technical and clarifying comments, which we incorporated as appropriate into this report. DOD's comments are reprinted in their entirety in appendix IV. We are sending copies of this report to the interested congressional committees; the Secretary of Defense; the Secretaries of the Army and the Air Force; and the Director of the Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact us at (202) 512-7968 or mctiguej@gao.gov or (202) 512-7114 or draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
To describe how DOD officials considered potential changes to DOD's posture in Europe—and their possible effect on the beneficiary population—when developing facility requirements for the replacement medical center, we obtained available posture planning documentation, including population estimates, and compared it with the beneficiary population data used in planning assumptions for the replacement medical center. We also obtained and reviewed Health Care Requirements Analysis documentation containing beneficiary population information and requested and reviewed more recent updates of this information. We met with officials from the Offices of the Assistant Secretary of Defense (Health Affairs) and the Deputy Under Secretary of Defense (Installations and Environment), U.S. European Command, U.S. Army Europe, and U.S. Air Forces in Europe to gain insight into possible scenarios being considered for posture changes in Europe. In addition, we talked with some of the individuals above and met with officials from the U.S. Army Corps of Engineers Europe and the U.S. Army Installation Management Command Europe to discuss how the location for the replacement medical center was selected. We also discussed with some of the officials above the steps they had taken to ensure the reasonable accuracy of DOD beneficiary data, and we determined that the data specifically related to the proposed replacement medical center were sufficiently reliable for the purposes of this report. To assess DOD's process for determining facility requirements for the replacement medical center and to determine to what extent it incorporated quality standards into its design and adhered to DOD guidance, we obtained and reviewed documents detailing the process and any data used in the development of the requirements for the replacement facility. Specifically, we obtained and reviewed documentation used to develop plans for the proposed replacement medical center, such as health care requirements analyses and facility designs. We also reviewed relevant documentation—including checklists—to determine whether DOD included quality and environmentally friendly standards, such as world-class standards and Leadership in Energy and Environmental Design (LEED) green building standards. We also identified key assumptions used to determine facility requirements for the replacement medical center and obtained and reviewed applicable legal and departmental guidance, including DOD instructions and directives, and compared them with the documented assumptions and methods used to develop the facility's requirements. Additionally, we reviewed DOD's facility requirements documentation for calculation errors and attempted to duplicate the results. We also met with medical and construction planners with the Office of the Assistant Secretary of Defense (Health Affairs), the TRICARE Management Activity, U.S. Army Medical Command, the Landstuhl Regional Medical Center (LRMC), the Air Force Medical Support Agency, and the 86th Medical Group (MDG) to discuss how they determined the size of the replacement medical center. To review the process used to develop the cost estimate for the facility and to determine to what extent DOD followed established best practices for developing its cost estimate, we obtained and reviewed available cost estimates for the proposed replacement medical center as well as supporting documentation that was used to determine overall costs.
We evaluated this information using GAO's standardized methodology of cost estimating best practices. For our reporting needs, we collapsed these best practices into four general characteristics of sound cost estimating: accurate, credible, comprehensive, and well documented. We determined the overall assessment by rating whether DOD followed the best practices that make up each of the four characteristics. We assigned a number to our ratings: not met = 1, minimally met = 2, partially met = 3, substantially met = 4, and met = 5. We took the average of the individual assessment ratings to determine the overall rating for each of the four characteristics. Criteria assessed as not applicable were not given a score and were not included in the overall assessment calculation. We met with officials from the Office of the Assistant Secretary of Defense (Health Affairs), the TRICARE Management Activity, Army Medical Command, the Air Force Medical Support Agency, and the U.S. Army Corps of Engineers prior to our evaluation to explain our approach for reviewing DOD's cost estimating process and to discuss project costs. We also met with these officials to discuss the results of our evaluation. To determine the overall costs of the replacement medical center, we obtained and reviewed planning documents. We also met with officials from LRMC and the 86th MDG to discuss future plans for the current facilities following construction of the replacement medical center. We conducted this performance audit from July 2011 through May 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The GAO Cost Estimating and Assessment Guide contains cost estimating best practices that have been identified by GAO and cost experts within organizations throughout the federal government and industry. For our reporting needs, we collapsed these best practices into four general characteristics of sound cost estimating: accurate, credible, comprehensive, and well documented. Table 5 provides detailed information on each of these cost estimating characteristics. In addition to the contacts named above, Laura Durland, Assistant Director; Marcia Mann, Assistant Director; Josh Margraf; Jeff Mayhew; and Richard Meeks made key contributions to this report. Joanne Landesman assisted in the message and report development, Amie Steele assisted in developing the report's tables and graphics, Jennifer Echard and Dave Brown provided methodological support, and Michael Willems provided legal support. | Landstuhl Regional Medical Center (LRMC) is DOD's only tertiary medical center in Europe that provides specialized care for servicemembers, retirees, and their dependents. Wounded servicemembers requiring critical care are medically evacuated from overseas operations to the 86th Medical Group clinic at Ramstein Air Base to receive stabilization care before being transported to LRMC for intensive care. According to DOD, both facilities were constructed in the 1950s and are undersized to meet current and projected workload requirements. DOD plans to consolidate both facilities into a single medical center at an estimated cost of $1.2 billion.
In this report, GAO (1) describes how DOD considered changes in posture and the beneficiary population when developing facility requirements, (2) assesses DOD's process for determining facility requirements, and (3) reviews DOD's process to develop the facility's cost estimate. GAO examined posture planning documentation, beneficiary demographic data, plans for the replacement medical center, and relevant DOD guidance, as well as interviewed relevant DOD officials. Department of Defense (DOD) officials considered current beneficiary population data, contingency operations, and most of the expected changes in troop strength when planning for the replacement medical center. However, posture changes announced in January 2012 have yet to be assessed for their impact on the facility. DOD estimates that the replacement medical center will provide health care for nearly 250,000 beneficiaries. A majority of those who are expected to receive health care from the center come from within a 55-mile radius of the facility. DOD officials told us that because the replacement medical center was designed for peacetime operations, with the capacity to expand to meet the needs of contingency operations, reductions in ongoing contingency operations in Afghanistan would not have an impact on facility requirements. At the time of this review, DOD officials said they were in the process of assessing proposed changes in posture to better understand their possible impact on the sizing of the replacement medical center. DOD officials incorporated patient quality of care standards as well as environmentally friendly design elements in determining facility requirements for the replacement medical center. DOD also determined the size of the facility based on its projected patient workload. Internal control standards require the creation and maintenance of adequate documentation, which should be clear and readily available for examination to inform decision making. However, GAO's review of the documentation DOD provided in support of its facility requirements showed (1) inconsistencies in how DOD applied projected patient workload data and planning criteria to determine the appropriate size for individual medical departments, (2) some areas where the documentation did not clearly demonstrate how planners applied criteria to generate requirements, and (3) calculation errors throughout. Without clear documentation of key analyses, including information on how adjustments to facility requirements were made, and without correct calculations, stakeholders and decision makers lack reasonable assurances that the replacement medical center will be appropriately sized to meet the needs of the expected beneficiary population in Europe. DOD's process for developing the approximately $1.2 billion cost estimate for the replacement medical center was substantially consistent with many cost estimating best practices, such as cross-checking major cost elements to confirm similar results. However, DOD minimally documented the data sources, calculations, and estimating methodologies it used in developing the cost estimate. Additionally, DOD anticipates that the new facility will become the hub of a larger medical-services-related campus, for which neither cost estimates nor time frames have yet been developed. Without a cost estimate for the facility that includes detailed documentation, DOD cannot fully demonstrate that the proposed replacement medical center will provide adequate health care capacity at the current estimated cost.
Further, DOD and Congress may not have the information they need to make fully informed decisions about the facility. GAO recommends that DOD provide clear and thorough documentation of how it determined the facility's size and cost estimate, correct any calculation errors, and update its cost estimate to reflect these corrections and recent posture changes. In commenting on a draft of this report, DOD concurred with GAO's recommendations and stated that it has conducted a reassessment of the project that will be released once approved by the Secretary of Defense. |
The White House Data Base was developed in 1994 to facilitate contacts with individuals and organizations who are important to the Presidency. It replaced a number of existing data bases with a single system that was intended to be easy to use and provide a greater level of service to a variety of users. The system has been operational since August 1995. Among other things, the data base is used for developing invitation lists for White House events and for providing information to help prepare thank you notes, holiday cards, and other correspondence. As such, the information contained in the data base ranges from names, addresses, phone numbers, social security numbers, contributor information, and dates of birth to individual relationships to the First Family and political affiliations. According to the White House, the data base contains personal information on about 200,000 individuals. In developing the data base, the White House used a widely accepted approach—Joint Application Development. Under this approach, users meet with programmers in a more intensive design session than usual—with the goals of eliminating rewrites of user interfaces and paving the way for faster application development. Development of the data base began with a series of technical interviews with potential users to determine, among other things, the sources of the data for the data base and the extent to which the data would be shared with nonfederal entities or individuals. Once these interviews were concluded, design and development elements were pursued on several fronts. First, potential users were asked to review functional aspects of the system and provide feedback. Second, the system architecture was developed and implemented based on detailed requirements and joint design elements provided by the customers and others. The data base operates on and is accessible through the White House's local area network, or LAN. While more than 1,600 users are authorized to access the LAN, fewer than 150 users have been given access to the data base, and even fewer actually use it. The products supporting the White House LAN, operating system, and data base system are widely used in the government and commercial sectors. The LAN uses version 3.12 of Novell's network operating system. The data base runs on Microsoft's Windows NT operating system using Sybase's System 10 data base management system. Sybase's System 10 is a relational data base management system, which is a system that allows both end users and application programmers to store data in, and retrieve data from, data bases that are perceived as a collection of relations, or tables. The data base is composed of 125 tables. Data is input to and retrieved from these tables using simple screens and drop-down menus. Sybase's System 10 is built with published and readily available interface specifications. It is open to the extent that anyone can write a program that will connect to the server. This is unlike traditional proprietary data base management systems, which could be accessed only with vendor-supplied tools or programs written with vendor-specific languages and compilers. In developing the data base, the White House acquired well-established, commercially available products and created a system that the users we interviewed were generally satisfied with. However, as I will discuss in more detail, the design of the data base limits system performance.
Further, the system—while having in place some internal controls—needs additional controls to assure the integrity and accuracy of its data. As noted earlier, users primarily use the data base as a tool for maintaining contact with individuals and organizations important to the Presidency. Users told us that they were generally satisfied with the system. Fewer than 100 White House staff actually use the system, and only about 25 make moderate to heavy use (relative to other users) of the system—with the heaviest users representing the White House Social Office, Personal Correspondence Office, and Outreach Office, as well as system administrators. We examined user accounts and interviewed the staff making heavy use of the system in terms of the amount of data both input to and read from the system. These included two staff in the Social Office, one in the Outreach Office, two on the Personal Correspondence staff, the data base data administrator, and a Sybase system administrator. We also interviewed four other business users and a system administrator who represent less heavy users of the system. Social Office personnel use the system to assist in developing invitation lists and planning state dinners and other events. Personal Correspondence personnel use the data base to help compose letters for the President. In doing so, they retrieve information from the data base on addresses, names of family members, White House events attended, and how the correspondent knows the President. The Outreach user we interviewed entered data into the data base for use in generating lists of holiday card recipients. Many users supplement the data base with information from manually accessed address lists. All those users we interviewed who had used the prior systems believed that the new system was better, and—for some users—the system is critical to their ability to complete their tasks. System administrators—who account for about 10 percent of all people who have accessed the data base—manage the system and maintain data base information. For example, they perform system backups, troubleshoot, and perform routine maintenance in the normal course of managing the system. The individual components supporting the data base—the network, server, and data base engine—are each well regarded and could be considered leading-edge components for business applications similar to those run by the White House. However, the strength of the individual system components has been diminished by the design of the data base itself. Specifically, in developing the system, the White House attempted to meet all user requirements for a large array of potential information needs. Rather than take advantage of the relational capabilities of Sybase, the designers established a one-to-one relationship between the logical and physical attributes of the data base, resulting in 125 tables. The data base operates more as an indexed sequential data base, where relationships between and among data elements have to be established across many tables. This contributes to increased system overhead (requiring the system to process additional steps) and thus taxes the performance capabilities of the system. Because the data base has relatively few users and is an improvement over what they had before, individual users have probably not been affected by the data base design. However, if demand increased, system performance could unnecessarily degrade.
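The overhead of spreading related attributes across many narrow tables can be illustrated with a small sketch using Python's built-in sqlite3 module. The tables and data below are hypothetical stand-ins of our own devising; they are not the actual White House Data Base schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# A consolidated relational design keeps related attributes in one row.
cur.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, "
            "city TEXT, affiliation TEXT)")
cur.execute("INSERT INTO person VALUES (1, 'Jane Doe', 'Washington', 'None')")

# An over-fragmented design gives every attribute its own narrow table,
# so even a simple lookup must be stitched together across tables.
for ddl in ("CREATE TABLE p_name (id INTEGER PRIMARY KEY, name TEXT)",
            "CREATE TABLE p_city (id INTEGER PRIMARY KEY, city TEXT)",
            "CREATE TABLE p_affil (id INTEGER PRIMARY KEY, affiliation TEXT)"):
    cur.execute(ddl)
cur.execute("INSERT INTO p_name VALUES (1, 'Jane Doe')")
cur.execute("INSERT INTO p_city VALUES (1, 'Washington')")
cur.execute("INSERT INTO p_affil VALUES (1, 'None')")

# One-table read versus a three-way join for the same answer; each added
# join is an extra processing step, which is the overhead described above.
print(cur.execute("SELECT name, city, affiliation FROM person "
                  "WHERE id = 1").fetchone())
print(cur.execute("SELECT n.name, c.city, a.affiliation FROM p_name n "
                  "JOIN p_city c ON c.id = n.id "
                  "JOIN p_affil a ON a.id = n.id WHERE n.id = 1").fetchone())
```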
In order to minimize the performance impact, system administrators have made compromises that affect the data base's internal controls. First, system administrators told us that turning on the internal audit trail, which I will discuss later, would seriously slow down system performance, and that minimizing the audit trail's impact on overall system performance would take several staff weeks of programming effort. Second, system administrators have chosen not to use the referential integrity capability that Sybase offers because of performance issues. Referential integrity is critical to any data base to assure that necessary checks are in place to limit inappropriate data input and assure that output is accurate. For the White House Data Base, referential integrity is implemented through the application itself. Because of the complexity of the application structure, it is difficult to assure that all edit checks are in place and work properly across the application. We found that some checks are not operational, which in turn leads to a higher probability of inaccurate information being input to or retrieved from the system. Good business systems operate in a controlled environment to ensure that data within these systems is accurate, that data output is reliable, and that data integrity is assured so that only authorized users have access to the data and that such access is appropriate to their needs. To provide such assurance, an organization needs well-articulated policies and procedures, good training, and an ability to ensure compliance with established processes and procedures. For the government, these concepts are embodied in the Office of Management and Budget's Circular A-130, which lays out the need for policies, rules of behavior governing system use, training, and the incorporation of good controls. Circular A-130 states that accountability is normally accomplished by identifying and authenticating users and subsequently tracing actions on the system to the user who initiated them. Because the data base contains sensitive information on up to 200,000 individuals, and because it is important to meeting the work needs of several White House offices, its users and managers need to apply the principles of A-130 to system operations. We found that the White House has taken several positive steps to create a controlled environment. For example: Personalized training is available to all users. Users are required to sign a document stating that they will take measures to protect information, including establishing and protecting passwords, logging out when leaving their computers, and reporting unauthorized access to the system. Password access is required to enter the system, and a warning screen appears to inform the user that information within the data base is for official use only. The data base has an effective defense against outside intruders or "hackers" breaking into the system. Controls have been established within the system to limit access to certain portions of the data base to only those with a need to know. Additionally, only a limited number of users have authority to print reports. Even with these processes in place, we found that the data base requires additional measures before data integrity and operational effectiveness can be assured. For example: Users do not have well-documented processes and procedures for how and when to use the data base.
Written documentation, reinforced with training and operational processes, would provide a better basis for assuring system managers that the data base was being used effectively and that all users were appropriately keeping the data base current. While users were trained individually by system administrators or other users, only one of the nine business users we interviewed reported having a users manual. None of these users reported having training concerning the security of the system. Such guidance can help ensure that users are familiar with the system and are entering information correctly. In talking with users, we found that almost everyone could navigate the system adequately; however, we also found that some duplicate information on individuals was being entered into the system and that some information was being entered into the wrong field. This causes some data base tables to contain more information than necessary and slows down the processing of information. Although the data base has established security policies, the procedures necessary to make them effective have not been well documented. For example, the system does not require frequent changes in passwords. Only one of the application users we interviewed had changed his or her password since the system was initiated. Although controls exist to limit the printing of reports, any user having general NetWare printing capability can print the screen contents. Additionally, all users have the ability to download screen content onto an electronic notebook, which could then be mailed electronically to a third party. None of the users we interviewed stated that they were aware of this capability. Additionally, White House officials told us that every month they review a sample of outgoing e-mail traffic to identify inappropriate use of the electronic mail system and to comply with records management requirements. Most importantly, there is no audit trail. Although Sybase 10 has this capability, we were told it has not been turned on because it would inhibit system performance. The Sybase audit capability would allow system administrators to monitor and react to attempts to log on and log off the system; execution of update, delete, and insert operations; restarts of the system; execution of system administration commands; and changes to system tables. Without this feature, data base administrators are limited in their ability to ensure that users are properly accessing and using the system. Mr. Chairman and Members of the Subcommittee, this completes my testimony. I will be happy to answer any questions you may have.
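The two controls at issue in this testimony, engine-enforced referential integrity and an audit trail, can be demonstrated in miniature. The sketch below is illustrative only: SQLite (bundled with Python) approximates the Sybase features, and the table and column names are invented for the example. It shows the data base engine rejecting an orphaned record and triggers logging insert and delete activity, the kinds of events the Sybase audit facility would capture.

```python
# Illustrative sketch only -- Sybase-specific features are approximated
# with SQLite from the Python standard library. It demonstrates the two
# controls the testimony says were traded away for performance: engine-
# enforced referential integrity and an audit trail of data changes.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # engine-level referential integrity

con.executescript("""
CREATE TABLE person (person_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE event_attendance (
    person_id INTEGER NOT NULL REFERENCES person(person_id),
    event TEXT
);
CREATE TABLE audit_log (action TEXT, tbl TEXT, at TEXT DEFAULT CURRENT_TIMESTAMP);
CREATE TRIGGER audit_ins AFTER INSERT ON person
    BEGIN INSERT INTO audit_log(action, tbl) VALUES ('insert', 'person'); END;
CREATE TRIGGER audit_del AFTER DELETE ON person
    BEGIN INSERT INTO audit_log(action, tbl) VALUES ('delete', 'person'); END;
""")

con.execute("INSERT INTO person VALUES (1, 'Jane Doe')")
try:
    # An orphan row is rejected by the engine itself, not by application-
    # level edit checks that may or may not be operational.
    con.execute("INSERT INTO event_attendance VALUES (99, 'State dinner')")
except sqlite3.IntegrityError as err:
    print("rejected:", err)

con.execute("DELETE FROM person WHERE person_id = 1")
print(con.execute("SELECT action, tbl FROM audit_log").fetchall())
```

The performance trade-off the system administrators described is real, since every audited operation costs an additional write; the control question is whether that cost justifies operating without accountability.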
| Pursuant to a congressional request, GAO reviewed the White House database, focusing on its users and operational components. GAO noted that: (1) users are generally satisfied with the database as a tool for maintaining information important to the Presidency; (2) fewer than 100 White House staff use the database, and the 25 heaviest users represent three White House offices and systems administrators; (3) the Social Office uses the database to develop invitation lists and plan state dinners and other events, the Personal Correspondence Office uses the database to help compose Presidential letters, and the Outreach Office uses the database for generating lists of holiday card recipients; (4) these users believe that the database is critical in performing their tasks, but the database's design is limited because it does not employ certain relational database capabilities; (5) because of additional processing steps, system performance will degrade if demand increases; (6) systems administrators have made compromises to minimize performance impacts that affect data integrity and audit trails; (7) the White House has taken actions to ensure a controlled environment by providing personalized user training, requiring signed ethics documents and passwords, providing anti-hacker defense systems, and limiting user access; and (8) to ensure data integrity and operational effectiveness, the White House needs to document systems security policies and procedures, limit report printing, and establish an audit trail for systems administrators to monitor database operations. |
As one of the federal government’s principal real estate and business agents, GSA has diverse activities and programs that have governmentwide implications. Its real estate portfolio, supply procurement and distribution activities, travel and transportation services, telecommunication and computer services, and property management and disposal functions involve huge sums of money and extensive interaction with both the federal and private sectors. In many respects, GSA is comparable to a large, diversified commercial business. If GSA were a private sector company, it would rank high, in terms of sales, on the Fortune 500 list of the largest U.S. companies. GSA spends billions of dollars to provide many of the facilities, goods, and services that federal agencies need to carry out their missions. Through various revolving or trust fund-type arrangements, GSA buys most of these goods and services from private vendors and resells them to agencies. Additionally, GSA arranges for federal agencies to purchase billions of dollars’ worth of goods and services directly from private vendors through its governmentwide supply, travel and transportation, automated data processing, and telecommunications contracts. Furthermore, when it was established in 1949, GSA was envisioned, primarily but not exclusively, as a policymaking body with the option of delegating its authority to other agencies while maintaining comprehensive accountability to Congress for economy and efficiency. In recent years, public sector organizations have faced demands to be more effective and less costly, coupled with a growing movement toward a performance-based approach to management. Congress enacted the Results Act in 1993 in conjunction with the Chief Financial Officers Act and information technology reform legislation, such as the Clinger-Cohen Act of 1996, to address these twin demands and to instill performance-based management in the federal government. The Results Act seeks to shift the focus of government decisionmaking and accountability away from a preoccupation with activities—such as grants and inspections made—to a focus on the results of those activities—such as real gains in employability, safety, responsiveness, or program quality. Under the Results Act, agencies like GSA are to develop strategic plans, annual performance plans, and annual performance reports. GSA and other agencies submitted the first cycle of the strategic plans to Congress in September 1997. Like other agencies, GSA submitted its first performance plan to OMB in the fall of 1997. OMB used these draft performance plans to develop and submit the first federal government performance plan to Congress in February 1998 with the President’s fiscal year 1999 budget. Agencies submitted their final performance plans to Congress after the submission of the President’s budget. Appendix II provides a more detailed discussion of the Results Act’s planning and reporting requirements. We found that overall, GSA’s performance plan does not provide a clear picture of expected performance across the agency. First, most of the performance goals and related measures are not quantifiable or results oriented. Second, GSA’s performance plan goals are not always linked to the specific program activities and funding in its budget. Finally, the performance plan does not adequately discuss its coordination with other agencies on GSA’s many crosscutting activities. 
GSA’s performance plan does not provide a succinct and concrete statement of expected performance for subsequent comparison with actual performance. Despite the expectations of the Results Act and related OMB guidance that annual performance goals be quantifiable, in our view, only 9 of the 31 performance goals in the plan have measures and targets that decisionmakers can use to gauge progress. For example, the performance goal of improving energy systems is expressed in quantifiable and time-bound terms and has a specific unit of measurement, a baseline, and numerical targets. Likewise, the performance goal on keeping GSA’s prices competitive has measures that are expressed in percentages or costs, with baselines and accompanying targets. Of the remaining 22 performance goals, however, 16 lack the measures and targets needed to gauge performance, and 6 have a mix of some quantifiable measures and some still under development, or have measures that are not specific enough to gauge performance. Furthermore, some of the performance measures do not appear to provide meaningful information as they relate to their stated goals. For example, the measure tracking the percentage of repair and alteration or new construction projects that are completed on or ahead of schedule seems unrelated to its goal of ensuring that GSA’s prices for primary products and services are competitive with those in the private sector. In addition, the plan has some goals that relate to space management but no measures that relate to cost-effectively managing space, one of GSA’s primary functions. Finally, the goals as written in the performance plan are typically activity or output oriented rather than results oriented, as envisioned by the Results Act. For example, for the performance goal to “continue enhancement of financial, administrative and expert services contracts for Governmentwide asset management,” GSA set forth the following “measures”: awarding master contracts for payment systems; developing contracts for temporary services; completing the Management, Organization, Business Improvement Schedule; and developing a program for sale of receivables. These activities may initially be important to GSA in achieving its strategic goals and accomplishing its mission. However, they appear to us to be activities rather than measures, and the accompanying narrative provides no information describing what these activities are or what outcomes they aim to achieve so that decisionmakers can understand their importance and gauge progress over time. Contrary to the Results Act and OMB guidance, GSA’s performance plan does not always show clear connections between the performance goals and the specific program activities and funding in its budget. Without such a linkage, decisionmakers cannot relate the performance goals in the plan to the program activities in the budget. Furthermore, they cannot readily assess how GSA intends to allocate its anticipated budgetary resources among its performance goals. Although the plan identifies a specific “funding” and “activity” category for most performance goals, the activity does not generally correspond to the specific program activities used in the agency’s budget request.
For example, the performance goal to improve energy systems in federal buildings to meet or exceed the federal energy consumption standards for 2005 identifies the “Federal Buildings Fund” as the funding and “energy” as the activity, but the President’s budget for the Federal Buildings Fund does not have an energy program activity. Also, for some performance goals, the plan shows that “multiple” activities are involved but does not specifically identify those activities. Furthermore, because the plan does not identify the funding level for most of the activities named in the plan or the program activities in the budget request, the reader cannot determine how much funding GSA proposes to use to meet its performance goals. In addition, contrary to the criteria in the Results Act, some program activities assigned large levels of funding in the budget, such as construction and acquisition of facilities and construction of lease purchase facilities, are not linked to specific performance goals. We believe the plan would be more useful if the activity and funding identified with each performance goal could be easily linked to GSA’s budget request. The plan includes GSA’s mission statement and gives abbreviated versions of the strategic goals presented in its strategic plan, but they are not identified as such. Further, although none of the strategic goals were revised for the performance plan, we noted that GSA appears to have dropped two of the five objectives related to the fourth strategic goal but provides no rationale for this revision. Consequently, it may be difficult for the reader to judge whether the performance goals in the annual performance plan are related to and consistent with GSA’s strategic plan, as envisioned by the Results Act and OMB guidance. In addition, we noted that, like the strategic plan, the performance plan does not address major management problems that we and GSA’s IG have identified in recent years. These include data reliability, which will be discussed in more detail later; insufficient management controls; and impediments to businesslike asset management in the real property area. In a January 29, 1998, memorandum to agencies, the Director of OMB said that “performance goals for corrective steps for major management problems should be included for problems whose resolution is mission-critical, or which could materially impede the achievement of program goals.” As we reported in January 1998, our work has shown over the years that major management problems at GSA have significantly hampered GSA’s and its stakeholder agencies’ abilities to accomplish their missions. Although GSA’s performance plan recognizes the crosscutting nature of its activities, it does not adequately explain how GSA will coordinate its crosscutting functions with the federal community. OMB Circular A-11, sec. 220.8, states that the annual performance plan should identify performance goals that reflect activities being mutually undertaken to support programs of an interagency, crosscutting nature. Because GSA is an agency with governmentwide policysetting, oversight, and operational functions, its major activities collectively affect the whole federal community. Some of GSA’s specific performance goals are crosscutting in nature.
For example, according to the plan, three of the performance goals under the goal to “promote responsible asset management” involve “collaboration among many federal agencies brought together by GSA,” and “measurement of the results of policy initiatives will require collection of other agencies’ costs.” However, although the discussion of some of these efforts contains references to coordination with other federal agencies, the plan does not discuss how GSA will coordinate these efforts. In another example, GSA’s performance goal to improve access to quality child care for all federal employees does not explain exactly how GSA is coordinating with the federal community on this wide-reaching goal. In the “excel at customer service” section, GSA generally describes what it is doing to better understand its customers’ needs. These actions include face-to-face meetings with customers or their agency representatives and working with interagency groups and councils. However, it is difficult to relate these actions to the specific crosscutting aspects of the goals in this section of the plan. GSA’s performance plan does not explicitly discuss the strategies (how it will use its operational processes, skills, and technologies) and the resources (human, capital, information, or other) that will be needed to achieve its goals. Without this discussion, decisionmakers cannot determine if GSA has a sound approach for achieving its goals and using its resources wisely. GSA’s performance plan for the most part does not present clear and reasonable strategies for achieving its intended performance goals. The Results Act and OMB Circular A-11 state that the performance plan should briefly describe the agency’s strategies to accomplish its performance goals. Specifically, we found that the narrative accompanying each objective and specific performance goal provides descriptive information on GSA activities. However, the narrative does not describe how GSA intends to meet the performance goals in the plan. For example, two of the three measures under the performance goal to increase market share for primary services are (1) the combined market share for information technology solutions and network services and (2) the market share for fleet. Target percentages for fiscal years 1998 and 1999 are listed. The accompanying narrative, however, gives little indication of how GSA intends to increase its market share in these areas. GSA makes general statements about leveraging its competitive pricing with broad market penetration and government downsizing: “as the government downsizes agencies are looking to GSA to provide cost effective solutions to the workload needs and requirements.” However, it offers no information on its specific approach or strategy for how it plans to leverage prices or take advantage of downsizing to increase market share for its vehicle fleet. Although the Results Act does not require that the performance plan specifically discuss the impact of external factors on achieving performance goals, we believe that a discussion of such factors would provide additional context regarding anticipated performance. In its September 1997 strategic plan, GSA identified four external factors (economic conditions, social policy, changes in technology and the marketplace, and legislative framework) that could affect its overall performance. GSA’s performance plan does not explicitly discuss these factors or their impact on achieving the performance goals.
In addition, other external factors that we have reported on over the years—such as the lengthy prospectus authorization process and budget scorekeeping rules that favor operating leases over ownership—are not mentioned in the performance plan. GSA’s performance plan does not adequately discuss the resources it will use to achieve the performance goals. The Results Act and OMB Circular A-11 specify that the performance plan should briefly describe the human, capital, information, or other resources it will use to achieve its performance goals. Most of the performance goals in GSA’s performance plan contain a subheading entitled “Human, Capital, Information, or Other Resources”; however, the information under these subheadings, which typically said “no additional resources required,” falls short of the Results Act criterion that the plan briefly describe the resources needed to achieve performance goals. We found that only 3 of the 31 performance goals specified any amount of budgetary resources associated with the achievement of the performance goal. Even in these three cases, there is no explanation of specifically how the funds will be used. We also noted that two goals made a limited reference to staffing issues. For example, for the performance goal to implement capital planning for information technology to comply with the Clinger-Cohen Act, the plan identifies the type of staff (project managers, planners, budget analysts, and executives) that will be involved. However, the plan does not contain any information on how GSA intends to use its resources to achieve its performance goals. We found that GSA’s performance plan partially meets the Results Act criteria related to including information on verifying and validating performance data. Although GSA included information on the general approaches it will use to ensure that performance information is reliable, the plan makes no reference to ongoing controls and procedures that are in place to ensure data integrity. A succinct discussion of some of these procedures and controls would provide decisionmakers with better insights into, and confidence in, what is being done to prevent the use of unreliable data. Also, we found that the plan does not contain a discussion of actions GSA will take or has taken to address known data limitations. The Results Act does not require a discussion of data limitations in the performance plan; however, an explanation of such limitations can provide decisionmakers with a context for understanding and assessing agencies’ performance and the costs and challenges agencies face in gathering, processing, and analyzing needed data. This discussion on data limitations can help identify the actions needed to improve the agency’s ability to measure its performance. GSA’s performance plan partially discusses how the agency will ensure that its performance information is sufficiently verified and validated. Specifically, we found that the plan highlights the importance of having credible data. It also meets the intent of the Results Act by identifying actions that GSA believes will identify data problems. These include audits of its financial records and systems by an independent accounting firm and top level quarterly meetings to review the financial and programmatic results of its various business lines. 
However, we believe that the performance plan would be greatly improved if GSA were to also highlight some of the specific controls it may use for its major systems to verify and validate performance information on an ongoing basis. Such controls could include periodic data reliability tests, computer edit controls, and supervisory reviews of data used to develop performance measures. Various financial audits and management reviews are certainly useful steps to identifying data problems that require management attention; but they are no substitute for effective front-end procedures, practices, and controls to ensure data reliability—a critical component of performance measurement. GSA has had financial and program audits on an ongoing basis for many years. However, despite these efforts, the agency has a history of data problems as shown by our work and that of the IG (this work is discussed later in more detail). A succinct discussion of the major procedures and controls that are in place to ensure credible data, at least for the more important systems, would be more helpful to decisionmakers in assessing the reliability of the data being used to gauge performance. GSA’s performance plan does not discuss known data limitations that could raise questions about the validity of the performance measures GSA plans to use. For several years, our work and that of the IG have identified several data reliability problems at GSA. Our work showed that GSA lacked the timely, accurate, and reliable program data needed to effectively manage and oversee its various activities and programs. Between 1994 and 1997, IG audits of the internal controls over the production of reliable data to support various GSA performance measures found problems. Specifically, of the eight audits conducted, controls designed to produce reliable data to support various GSA performance measures were found to be at moderate risk in three, high risk in one, and low risk in the other four. In February 1998, the IG reported on reviews of two additional performance measures; one was low risk, and the other was removed from the Fiscal Year 1997 Annual Report as a result of issues raised during the IG review. In addition, the IG reported in its October 31, 1997, Semiannual Report to Congress that many of the 87 major systems GSA uses to support its functions are old and incorporate inefficient technologies compared with today’s advanced systems. Modification and maintenance of these old systems have become complex and costly. Finally, the independent audit of GSA’s 1996 and 1997 financial statements noted data problems related to property account classifications for construction projects and access controls over the Federal Supply Service’s information systems. Also, the independent auditors reported that although the Public Buildings Service has addressed certain deficiencies in its internal control structure, attention to improving internal controls in its business and financial processes is required to assess, improve, and report the results of program performance. Despite such evidence that suggests data reliability is still a major problem, the performance plan is silent on this critical issue. At a minimum, it would have been helpful if the plan had an explicit discussion of current data reliability problems and how GSA plans to address them. GSA’s performance plan falls short of meeting the criteria set forth in the Results Act and related OMB guidance. 
It is not a stand-alone document that provides a clear road map of what GSA wants to accomplish, how it plans to get there, and what results it expects to achieve. The plan does not fully meet the Results Act criteria for objective, measurable, and quantifiable goals and measures and lacks clear connections between the performance goals and the specific program activities in GSA’s budget. The performance plan also lacks an adequate explanation of how GSA will coordinate its crosscutting functions with the federal community. In addition, it often does not contain meaningful discussions of the strategies and resources GSA plans to use to meet its goals and achieve intended results or of the questions surrounding data reliability. We recognize that this is the first performance plan developed under the Results Act and, as such, that there is a large learning process in understanding what constitutes a good plan. However, this and future plans can be significantly improved if they follow the criteria set forth in the Results Act and related guidance more closely. We recommend that the GSA Administrator take steps to ensure that GSA’s fiscal year 2000 performance plan (1) conforms with the criteria in the Results Act and related OMB guidance and (2) gives decisionmakers a better framework for gauging GSA’s performance. Specifically, in developing the next plan, we recommend that the Administrator take steps to refine GSA’s performance goals to make them more quantifiable and results oriented; clarify how GSA’s performance goals link to specific program activities in GSA’s budget; explain how GSA has coordinated its crosscutting functions with the federal community; discuss the strategies GSA will use and the resources it needs to achieve its performance goals and their intended results, as well as the external factors that could affect its overall performance; and discuss specific controls for verifying and validating data used to measure performance, recognize existing data limitations, and explain GSA’s efforts to overcome those limitations. On April 9, 1998, we obtained oral comments from GSA’s Chief Financial Officer, Director of the Office of Performance Management, and Managing Director for Planning on a draft of this report. They said that GSA generally agreed with our analysis and will implement our recommendations when it prepares the fiscal year 2000 performance plan. As you know, 31 U.S.C. 720 requires that the head of a federal agency submit a written statement of actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of this report. A written statement must also be sent to the Senate and House Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. We would appreciate receiving a copy of the statement. We are sending copies of this report to each of the individual requesters of our work in this area; the Chairmen and Ranking Minority Members of other Committees that have jurisdiction over GSA activities; and the Director, Office of Management and Budget. Copies will be made available to others on request. Major contributors to this report are listed in attachment III. If you or your staff have any questions concerning this report, please contact me at (202) 512-8387.
This appendix contains a compilation of guidance on annual performance plans, including the Results Act, GAO reports, and OMB documents, and is arranged by the major issues discussed in this report. The Government Performance and Results Act (Results Act), 31 U.S.C. 1115(a)(1), 1115(a)(2), 1115(a)(4), 1115(a)(5), 1115(b), and 1115(c). Senate Committee on Governmental Affairs Report accompanying the Results Act (Senate Report 103-58, June 16, 1993), pp. 15-16, “Performance Plans”; p. 29, “Performance Goals”; pp. 29-30, “Performance Indicators”; and p. 30, “Alternative Forms of Measurement.” OMB Circular A-11, secs. 220.1, 220.4, 220.10(a), 220.10(b), 220.10(c), 220.14, 220.16, 220.17, 221.4(a), 221.4(b), and 221.4(d). OMB Checklist for Agency Annual Performance Plans (Nov. 24, 1997), pp. 1-2, “Coverage of Program Activities”; pp. 3-4, “Annual Performance Goals”; p. 4, “Performance Indicators”; and p. 5, “Alternative Form of Measurement.” The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO/GGD-97-109, June 2, 1997), pp. 55-57, 61-63, and 71-72. Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996), pp. 24-26. Implementation of the Government Performance and Results Act (GPRA), A Report on the Chief Financial Officer’s Role and Other Issues Critical to the Governmentwide Success of GPRA, Chief Financial Officers Council, GPRA Implementation Committee, May 1995. Agencies’ Annual Performance Plans Under The Results Act: An Assessment Guide to Facilitate Congressional Decisionmaking (GAO/GGD/AIMD-10.1.18; Feb. 1998, Version 1), pp. 10-11. The Results Act: An Evaluator’s Guide to Assessing Agency Annual Performance Plans (GAO/GGD-10.1.20; Apr. 1998, Version 1), pp. 14-19. Results Act, 5 U.S.C. 306(c), 31 U.S.C. 1115(a), and 31 U.S.C. 1115(c). Senate Committee on Governmental Affairs Report accompanying the Results Act (Senate Report 103-58, June 16, 1993), pp. 15-16, “Performance Plans”; p. 29, “Performance Goals”; and p. 31, “Coverage of Program Activities.” OMB Circular A-11, secs. 210.2(c), 210.4, 220.3, 220.4, 220.5, 220.6, 220.7, 220.8, 220.9(a), 220.9(b), 220.9(d), 220.9(e), 220.10(c), 221.3, 221.4(b). OMB Checklist for Agency Annual Performance Plans (Nov. 24, 1997), pp. 1-2, “Coverage of Program Activities”; pp. 3-4, “Annual Performance Goals”; p. 7, “Mission Statement and General Goals and Objectives”; and p. 8, “Budget Account Restructuring.” The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO/GGD-97-109, June 2, 1997), pp. 90-93. Performance Budgeting: Past Initiatives Offer Insights for GPRA Implementation (GAO/AIMD-97-46, Mar. 27, 1997). Integrating Performance Measurement into the Budget Process, Chief Financial Officers Council, GPRA Implementation Committee Subcommittee Project, September 22, 1997. Agencies’ Annual Performance Plans Under The Results Act: An Assessment Guide to Facilitate Congressional Decisionmaking (GAO/GGD/AIMD-10.1.18; Feb. 1998, Version 1), pp. 12-14. The Results Act: An Evaluator’s Guide to Assessing Agency Annual Performance Plans (GAO/GGD-10.1.20; Apr. 1998, Version 1), pp. 19-29. OMB Circular A-11, secs. 220.8, 220.10(b), and 221.4(c). OMB Checklist for Agency Annual Performance Plans (Nov. 24, 1997), p. 8, “Cross-cutting Programs.” Managing for Results: Using the Results Act to Address Mission Fragmentation and Program Overlap (GAO/AIMD-97-146, Aug. 29, 1997). 
The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO/GGD-97-109, June 2, 1997), pp. 53-55. Agencies’ Annual Performance Plans Under The Results Act: An Assessment Guide to Facilitate Congressional Decisionmaking (GAO/GGD/AIMD-10.1.18; Feb. 1998, Version 1), p. 15. The Results Act: An Evaluator’s Guide to Assessing Agency Annual Performance Plans (GAO/GGD-10.1.20; Apr. 1998, Version 1), pp. 29-30. Results Act, 31 U.S.C. 1115(a)(3) and 31 U.S.C. 9703. Senate Committee on Governmental Affairs Report accompanying the Results Act (Senate Report 103-58, June 16, 1993), pp. 15-16, “Performance Plans”; pp. 17-18, “Managerial Flexibility Waivers”; and pp. 34-36, “Section 5. Managerial Accountability and Flexibility.” OMB Circular A-11, secs. 220.10(b), 220.12(a), 220.12(b), 220.12(c), and 221.4(b). OMB Checklist for Agency Annual Performance Plans (Nov. 24, 1997), p. 6, “Means and Strategies”; p. 8, “Tax Expenditures and Regulation”; and p. 8, “External Factors.” The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO/GGD-97-109, June 2, 1997), pp. 63-66. Business Process Reengineering Assessment Guide, Version 3 (GAO/AIMD-10.1.15, Apr. 1997). Privatization: Lessons Learned by State and Local Governments (GAO/GGD-97-48, Mar. 14, 1997). Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996), pp. 18-21 and 24-26. Agencies’ Annual Performance Plans Under The Results Act: An Assessment Guide to Facilitate Congressional Decisionmaking (GAO/GGD/AIMD-10.1.18; Feb. 1998, Version 1), pp. 17-18. The Results Act: An Evaluator’s Guide to Assessing Agency Annual Performance Plans (GAO/GGD-10.1.20; Apr. 1998, Version 1), pp. 32-36. Results Act, 31 U.S.C. 1115(a)(3). Senate Committee on Governmental Affairs Report accompanying the Results Act (Senate Report 103-58, June 16, 1993), pp. 15-16, “Performance Plans”; and pp. 29-30, “Performance Indicators.” OMB Circular A-11, secs. 220.1, 220.9(a), 220.9(e), 220.10(c), 220.11(a), 220.11(b), 220.11(c), 220.12(a), 220.12(d), and Part 3. OMB Checklist for Agency Annual Performance Plans (Nov. 24, 1997), p. 5, “Future Year Performance”; p. 5, “Performance Goals Funded By Prior Year Appropriations”; and p. 6, “Means and Strategies.” OMB Capital Programming Guide, v. 1.0 (July 1997). Executive Guide: Measuring Performance and Demonstrating Results of Information Technology Investments (GAO/AIMD-97-163, Sept. 1997). The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO/GGD-97-109, June 2, 1997), pp. 90-97. Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, Sept. 1997). Assessing Risks and Returns: A Guide for Evaluating Federal Agencies’ IT Investment Decision-making, Version 1 (GAO/AIMD-10.1.13, Feb. 1997). Information Technology Investment: Agencies Can Improve Performance, Reduce Costs, and Minimize Risks (GAO/AIMD-96-64, Sept. 30, 1996). Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996), pp. 18-21 and 39-46. Transforming the Civil Service: Building the Workforce of the Future—Results of a GAO-Sponsored Symposium (GAO/GGD-96-35, Dec. 26, 1995). Federal Accounting Standards Advisory Board (FASAB) Volume 1 Original Statements: Statements of Federal Financial Accounting Concepts and Standards, Statement of Federal Financial Accounting Standards No. 1, Objectives of Federal Financial Reporting (GAO/AIMD-21.1.1, Mar.
1997), pp. 11-62. FASAB Volume 1 Original Statements: Statements of Federal Financial Accounting Concepts and Standards, Statement of Federal Financial Accounting Standards No. 4, Managerial Cost Accounting Standards (GAO/AIMD-21.1.1, Mar. 1997), pp. 331-394. Agencies’ Annual Performance Plans Under The Results Act: An Assessment Guide to Facilitate Congressional Decisionmaking (GAO/GGD/AIMD-10.1.18; Feb. 1998, Version 1), pp. 19-20. The Results Act: An Evaluator’s Guide to Assessing Agency Annual Performance Plans (GAO/GGD-10.1.20; Apr. 1998, Version 1), pp. 36-38. Results Act, 31 U.S.C. 1115(a)(6). Senate Committee on Governmental Affairs Report accompanying the Results Act (Senate Report 103-58, June 16, 1993), p. 30, “Verification and Validation.” OMB Circular A-11, secs. 220.7, 220.13, and 221.5. OMB Checklist for Agency Annual Performance Plans (Nov. 24, 1997), p. 7, “Verification and Validation.” Executive Guide: Information Security Management (GAO/AIMD-98-21, Nov. 1997). Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996), pp. 27-29. GPRA Performance Reports (GAO/GGD-96-66R, Feb. 14, 1996), pp. 6-8 and 11. FASAB Volume 1 Original Statements: Statements of Federal Financial Accounting Concepts and Standards (GAO/AIMD-21.1.1, Mar. 1997). Budget and Financial Management: Progress and Agenda for the Future (GAO/T-AIMD-96-80, Apr. 23, 1996). Agencies’ Annual Performance Plans Under The Results Act: An Assessment Guide to Facilitate Congressional Decisionmaking (GAO/GGD/AIMD-10.1.18; Feb. 1998, Version 1), p. 22. The Results Act: An Evaluator’s Guide to Assessing Agency Annual Performance Plans (GAO/GGD-10.1.20; Apr. 1998, Version 1), pp. 41-43. OMB Circular A-11, sec. 221.5. OMB Checklist for Agency Annual Performance Plans (Nov. 24, 1997), p. 7, “Verification and Validation.” Managing for Results: Regulatory Agencies Identified Significant Barriers to Focusing on Results (GAO/GGD-97-83, June 24, 1997). The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO/GGD-97-109, June 2, 1997), pp. 61-75. Managing for Results: Analytic Challenges in Measuring Performance (GAO/HEHS/GGD-97-138, May 30, 1997). Measuring Performance: Strengths and Limitations of Research Indicators (GAO/RCED-97-91, Mar. 21, 1997). Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996), pp. 27-29. GPRA Performance Reports (GAO/GGD-96-66R, Feb. 14, 1996). Block Grants: Issues in Designing Accountability Provisions (GAO/AIMD-95-226, Sept. 1, 1995). Agencies’ Annual Performance Plans Under The Results Act: An Assessment Guide to Facilitate Congressional Decisionmaking (GAO/GGD/AIMD-10.1.18; Feb. 1998, Version 1), p. 23. The Results Act: An Evaluator’s Guide to Assessing Agency Annual Performance Plans (GAO/GGD-10.1.20; Apr. 1998, Version 1), pp. 43-47. The Results Act is designed to improve the efficiency and effectiveness of federal programs by establishing a system to set goals for program performance and to measure results. Specifically, the Act requires executive agencies to prepare multiyear strategic plans, annual performance plans, and annual performance reports. The Results Act requires virtually every executive agency to develop strategic plans covering a period of at least 5 years forward from the fiscal year in which each plan is submitted and to update those plans at least every 3 years.
Agencies’ first strategic plans were to be submitted to Congress and the Director of OMB by September 30, 1997. The strategic plans are to (1) include the agencies’ mission statements; (2) identify long-term general goals and objectives; (3) describe how the agencies intend to achieve those goals through their activities and through their human, capital, information, and other resources; and (4) explain the key external factors that could significantly affect the achievement of those goals. Under the Act, strategic plans are the starting point for agencies to set annual performance goals and to measure program performance in achieving those goals. Consequently, strategic plans are also to include a description of how long-term general goals will be related to annual performance goals as well as a description of the program evaluations that agencies used to establish their long-term general goals and a schedule for subsequent evaluations. As part of the strategic planning process, agencies are required to consult with Congress and solicit the views of other stakeholders—those governmental and nongovernmental entities potentially affected by, or interested in, the agencies’ activities. Building on the decisions made as part of the strategic planning process, the Results Act requires executive agencies to develop annual performance plans covering each program activity set forth in the agencies’ budgets. The first annual performance plans, covering fiscal year 1999, were to be submitted to OMB in the fall of 1997 and to Congress after the President’s budget in 1998. The Results Act requires that each agency prepare an annual performance plan that shall: “(1) establish performance goals to define the level of performance to be achieved by a program activity; “(2) express such goals in an objective, quantifiable, and measurable form unless authorized to be in an alternative form . . . ; “(3) briefly describe the operational processes, skills and technology, and the human, capital, information, or other resources required to meet the performance goals; “(4) establish performance indicators to be used in measuring or assessing the relevant outputs, service levels, and outcomes of each program activity; “(5) provide a basis for comparing actual program results with the established performance goals; and “(6) describe the means to be used to verify and validate measured values.” The Act authorizes agencies to apply for managerial flexibility waivers in their annual performance plans. Agencies’ authority to request waivers of nonstatutory administrative procedural requirements and controls is intended to provide federal managers with more flexibility to structure agency systems to better support performance goals. An example of increased flexibility would be to allow an organization to recapture unspent operating funds because of increased efficiencies and then to use these funds to purchase new equipment or expand employee training. Another example might involve delegating more authority to line managers to make procurement decisions. OMB is to use the performance plans that agencies submit to develop an overall federal government performance plan. OMB is to submit this governmentwide plan each year to Congress with the President’s budget. According to the Senate Committee report accompanying the Act, the overall federal government performance plan is to present to Congress a single, cohesive picture of the federal government’s annual performance goals for the fiscal year. 
The first overall plan was due with the President’s fiscal year 1999 budget. Finally, the Results Act requires each executive agency to prepare annual reports on program performance for the previous fiscal year. The first performance reports, for fiscal year 1999, are due to Congress and the President no later than March 31, 2000; subsequent reports are due by March 31 for the years that follow. In each report, an agency is to review and discuss its performance compared with the performance goals it established in its annual performance plan. When a goal is not met, the agency is to explain in the report the reasons the goal was not met; plans and schedules for meeting the goal; and, if the goal was impractical or not feasible, the reasons for that and the actions recommended. According to the Senate committee report on the Act, actions needed to accomplish a goal could include legislative, regulatory, or other actions. If an agency finds a goal to be impractical or not feasible, it is to include a discussion of whether the goal should be modified. In addition to evaluating the progress made toward achieving its annual goals, an agency’s program performance report is to evaluate the agency’s performance plan for the fiscal year in which the performance report was submitted. Thus, in their fiscal year 1999 performance reports that are due by March 31, 2000, agencies are required to evaluate their performance plans for fiscal year 2000 on the basis of their reported performance in fiscal year 1999. This evaluation is to help show how an agency’s actual performance is influencing its performance plan. The report also is to include (1) the summary findings of program evaluations completed during the fiscal year covered by the report and (2) the use and effectiveness of any of the Results Act managerial flexibility waivers that an agency received. Agencies also are to include baseline and trend data in annual performance reports to help ensure that their reports are complete and that performance is viewed in context. Such data can show whether performance goals are realistic given the past performance of an agency. Such data can also assist users of reports in drawing more informed conclusions than they could by comparing only a single year’s performance against an annual goal, because users of reports can see improvements or declines in an agency’s performance over prior years. For fiscal years 2000 and 2001, agencies’ reports are to include data on the extent to which their performance achieved their goals, beginning with fiscal year 1999. For each subsequent year, agencies are to include performance data for the year covered by the report and the 3 prior years. Congress recognized that in some cases not all the performance data will be available in time for the required reporting date. In such cases, agencies are to provide whatever data are available, with a notation as to their incomplete status. Subsequent annual performance reports are to include the complete data as part of the trend information. Joan Hawkins, Assistant Director; Franklin Deffer, Assistant Director; Laura Castro, Senior Evaluator.
| GAO reviewed the General Services Administration's (GSA) fiscal year (FY) 1999 annual performance plan, which was submitted to Congress as required by the Government Performance and Results Act of 1993. GAO noted that: (1) GSA's performance plan has several performance goals for each of its strategic goals; (2) some of its performance goals and measures are objective and quantified and provide a way to compare actual to planned performance; (3) in addition, the plan contains some goals and measures that involve comparisons of GSA and the private sector; (4) however, for the most part, the plan falls short of meeting the criteria set forth in the Results Act and related guidance; (5) it does not adequately provide a clear picture of expected performance across the agency because: (a) like the goals in its strategic plan, many performance goals, and related measures, are not quantifiable or results oriented; (b) performance plan goals are not always linked to the specific program activities and funding in its budget; and (c) also like the strategic plan, the performance plan does not discuss GSA's coordination efforts for many crosscutting activities; (6) GAO also found that the performance plan generally does not have an explicit discussion of the strategies and resources that will be needed to achieve goals or the external factors that will affect accomplishment of the goals; and (7) although the plan includes a discussion of how GSA plans to verify performance data that provides partial confidence that performance information will be credible, it does not discuss the actions GSA has taken or will take to address known data limitations. |
According to EPA, about 52,000 community water systems use energy to treat and deliver drinking water to over 290 million Americans. In a typical drinking water treatment plant, large debris and contaminants are physically removed from the raw water using screens (see fig. 2). Next, dirt and other particles suspended in the water are removed through the addition of alum and other chemicals during the processes of coagulation and sedimentation. After these particles have separated out, the water passes through filters made of layers of materials such as sand, gravel, and charcoal to remove even smaller particles. At this point, the water is stored in a closed tank or reservoir, allowing time for disinfection, which kills many disease-carrying organisms. The treated water is then pressurized for distribution to consumers. The distribution infrastructure consists of pumps, pipes, tanks, valves, hydrants, and meters that support delivery of water to the customer and control flow and water pressure. Once water is delivered, residential consumers use it for a variety of purposes, including drinking; bathing; preparing food; washing clothes and dishes; and flushing toilets, which can represent the single largest use of water inside the home. Energy is needed to accomplish many of these activities. For example, energy is used in homes to filter and soften water and to heat it for use in certain appliances; according to DOE, water heating accounts for 12.5 percent of a typical household’s energy use. In addition to residential water users, commercial, industrial, and institutional customers use energy for water-related purposes. For example, energy is used to produce hot water and steam for heating buildings, to cool water for air conditioning buildings, and to generate the hot water needed to manufacture or process materials, such as food and paper. After water is used by customers, energy is needed to collect and treat wastewater and to discharge the effluent into a water body. Wastewater service is provided to more than 220 million Americans by about 15,000 municipal wastewater treatment facilities. During a typical wastewater treatment process, solid materials, such as sand and grit, organic matter from sewage, and other pollutants are removed before the treated effluent is discharged to surface waters. Systems for collecting, treating, and disposing of municipal wastewater vary widely in terms of the equipment and processes used, and wastewater may go through as many as three treatment stages (primary, secondary, and advanced treatment) before water is discharged (see fig. 3). Preliminary and primary treatment. As wastewater enters a treatment facility, it is screened to remove large debris and then passes through a grit removal system to separate out smaller particulate matter. After preliminary screening and settling, primary treatment removes solids from the wastewater through sedimentation. Solids removed during the treatment process may be further treated and used for other applications, such as fertilizer; incinerated; or disposed of in landfills. Secondary treatment. After primary treatment, the wastewater undergoes secondary treatment to remove organic matter and suspended solids through physical and biological treatment processes. Activated sludge is the most commonly used biological treatment process in secondary treatment of wastewater. This process relies on micro-organisms to break down organic matter in the wastewater.
More specifically, during aeration, blowers or diffusers inject oxygen into the wastewater, enabling the micro-organisms to digest the organic matter. After being pumped into an aeration tank to allow time for digestion, the wastewater is next pumped to a secondary settling tank for removal of the digested material. After secondary settling, the effluent either is disinfected and discharged into a water body or undergoes advanced treatment. Advanced treatment. Most wastewater goes through at least secondary treatment. However, before treated wastewater can be released into some receiving waters, it may need to be further treated to reduce its effect on water quality and aquatic life after discharge. Over 30 percent of wastewater treatment facilities provide this kind of advanced treatment, which can remove additional contaminants. Two key pieces of federal legislation, the Safe Drinking Water Act and the Clean Water Act, govern the treatment of drinking water and wastewater. Each municipality or water utility may generally choose among technologies for achieving a given standard. Under the Safe Drinking Water Act, EPA has established National Primary Drinking Water Standards for specified contaminants and has the authority to regulate additional contaminants that the agency determines may have adverse health effects, are likely to be present in public water supplies, and for which regulation presents a meaningful opportunity for health risk reduction. EPA’s regulations establish a limit, or maximum contaminant level, for specific contaminants and require water systems to test the water periodically to determine if its quality is acceptable. EPA has regulations in place for 89 contaminants, including disinfectants, byproducts of disinfectants, and microbial contaminants, but it has not issued a regulation under the Safe Drinking Water Act for a new contaminant since 2000. The Clean Water Act governs the discharge of pollutants into the waters of the United States, including the treatment of wastewater discharged from publicly owned treatment facilities. Specifically, industrial and municipal wastewater treatment facilities must comply with National Pollutant Discharge Elimination System (NPDES) permits that control the pollutants that facilities may discharge into the nation’s surface waters. The act requires that municipal wastewater treatment plants provide a minimum of secondary treatment prior to discharge. Secondary treatment requirements may be modified, however, for discharges into marine waters under certain conditions. For example, the discharge may not interfere with the water quality that assures protection of public water supplies and the protection and propagation of a balanced, indigenous population of shellfish, fish, and wildlife and that allows recreational activities on the water. In 2000, Congress amended the Clean Water Act to require permits for discharges from combined sewers (sewers that transport both wastewater and stormwater to the municipal wastewater treatment plant) to conform with EPA’s Combined Sewer Overflow Control Policy, which requires systems to demonstrate implementation of certain minimum pollution control practices. Combined sewers may overflow when there is heavy precipitation or snowmelt, resulting in the discharge of raw sewage and other pollutants into receiving water bodies.
Comprehensive data about the energy needed for each stage of the urban water lifecycle are limited, and few nationwide studies have been conducted on the amount of energy used to provide drinking water and wastewater treatment services to urban users. However, the specialists with whom we spoke emphasized that the energy demands of the urban water lifecycle vary by location; therefore, consideration of location-specific and other factors is key to assessing the energy needs of the urban water lifecycle. These factors include the source and quality of the water, the topography of the area over which water is conveyed and the distance of conveyance, and the level and type of treatment required. Providing a reliable and comprehensive estimate of the total energy requirements for moving, treating, and using water in urban areas is difficult, in part because comprehensive data on the energy demands of the urban water lifecycle are limited and few nationwide studies have been conducted to quantify the amount of energy used throughout the lifecycle. The two studies most often cited by the specialists we spoke with were conducted by the Electric Power Research Institute (EPRI) on the energy needs of the urban water lifecycle. These studies concluded that 3 to 4 percent of the nation’s electricity is used to move and treat drinking water and wastewater. While some specialists noted that these studies provide reasonable estimates of the energy demands of the urban water lifecycle, other specialists raised a number of concerns with the studies. In particular, according to several specialists, the EPRI studies are outdated: the first study dates back to 1996, and the more recent study was conducted in 2002 but relied on projections of future water use based on statistics compiled in 2000. Some specialists also told us these studies do not reflect the treatment processes that have been implemented over the last decade, which have increased the amount of energy needed to treat water. In addition, the studies do not include all stages of the urban water lifecycle; specifically, they omit energy used by customers. Because they exclude end use, the EPRI studies underestimate the energy demands of the entire lifecycle; customer end use, including use by residential customers, can be the most energy-intensive stage of the entire lifecycle, according to some specialists we spoke with and studies that we reviewed. Some specialists also added that the studies underestimate total energy demands because they include only electricity, excluding other fuel types that can be used throughout the lifecycle. For example, the studies do not assess the use of natural gas, which can be a primary energy source at wastewater treatment plants for certain processes. Furthermore, some specialists explained that the studies do not use actual measured data, relying instead on previously published estimates of energy used for portions of the water lifecycle. Specialists also noted that there are limited data on the amount of energy associated with customer water use. Federal agencies like DOE’s Energy Information Administration collect some data on the energy used to heat water in residences and in the commercial sector, but these data are reported on a national level and do not allow for analysis at the local level. In addition, data needed to get a full picture of the energy needs for water in an urban setting may not be readily available at the local level.
Specifically, water utilities may not have detailed data on their facilities' energy use, may not have conducted audits to understand how their facilities use energy, or may be reluctant to share data, according to specialists we spoke with. Many of the specialists told us that efforts to assess the energy needs of the urban water lifecycle on a national scale can be difficult, and the majority of the specialists we spoke with emphasized that to obtain a more accurate picture, one needs to consider location-specific and other factors that influence energy use. The specialists identified the following as key factors that must be considered for such an assessment. Type of water source. Drinking water systems that rely on surface water are often designed to take advantage of gravity and use little to no energy to extract water from the source and convey it to the treatment facility. In contrast, systems that rely on groundwater require more energy for extraction because water must be pumped to the surface, especially if the systems rely on deep aquifers. For example, Washington, D.C., which relies on surface water, withdraws its water from two locations—Great Falls Dam and Little Falls Dam—on the Potomac River. Most of the water is withdrawn at Great Falls Dam and conveyed via gravity to the treatment plant, using little energy during the extraction and conveyance process. In contrast, extraction of water is an energy-intensive process for Memphis, which relies on groundwater extracted from over 160 wells drawing from aquifers, including the Memphis Sand Aquifer, located 500 to 600 feet below ground. Quality of water to be treated. The quality of water also affects the amount of energy needed for treatment: higher-quality water contains fewer contaminants and, therefore, requires less treatment than lower-quality water. For example, treating groundwater generally uses less energy than treating surface water because groundwater is typically of higher quality than surface water. As a result, cities that rely on groundwater as the source for their drinking water, such as Memphis, generally use less energy for treatment than cities that rely on surface water, such as Washington, D.C. However, the type of contaminants in water can also affect the energy required for treatment. For example, as one specialist noted, if groundwater contains arsenic, treating this type of contamination can require more energy-intensive treatment technologies than treating surface water that is extracted from a protected watershed or clean snowmelt. Topography and distance. Pumping water is one of the most energy-intensive aspects of the urban water lifecycle, accounting for 80 to 90 percent of the energy used to supply drinking water in some systems, and most of this energy is used to distribute water to customers. The energy demand of pumping is affected by the topography over which the water must be moved and the distance the water must travel to treatment plants after extraction and to customers after treatment. For example, San Diego gets a large amount of its water from northern California. Transporting this imported water to southern California is energy intensive because the water must be conveyed hundreds of miles and lifted 2,000 feet over the Tehachapi Mountains.
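The energy cost of such a lift can be approximated from the basic hydraulic relation E = ρgH/η. The sketch below is a back-of-envelope illustration only; the 70 percent wire-to-water pump efficiency is an assumption introduced here for the example, not a reported figure for this conveyance system.

```python
# Back-of-envelope sketch (not from the report): electrical energy needed to
# lift water, using E = rho * g * H / efficiency.
RHO = 1000.0  # density of water, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def lift_energy_kwh_per_m3(lift_m: float, pump_efficiency: float = 0.70) -> float:
    """Energy (kWh) to lift one cubic meter of water; efficiency is assumed."""
    joules = RHO * G * lift_m / pump_efficiency
    return joules / 3.6e6  # 1 kWh = 3.6e6 J

# The 2,000-foot (~610 m) lift over the Tehachapi Mountains mentioned above:
tehachapi = lift_energy_kwh_per_m3(610)
print(f"{tehachapi:.2f} kWh per cubic meter")            # ~2.4 kWh/m^3
print(f"{tehachapi * 3785:.0f} kWh per million gallons")  # ~9,000 kWh/MG
```

At roughly 9,000 kilowatt-hours per million gallons for the lift alone, long-distance conveyance can dominate a system's energy profile, which is the point the San Diego example illustrates.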
Furthermore, because of the hilly terrain in some parts of the city and the great expanse over which the customers are distributed, additional energy is needed to pump water from the treatment plants to consumers. Condition of water system. The age of a system and the condition of its pipes and equipment can also affect the energy demands of providing drinking water and wastewater treatment services. Specifically, older systems can be less energy efficient if the equipment and infrastructure have not been properly maintained. The American Society of Civil Engineers recently evaluated America's drinking water and wastewater infrastructure and assigned both systems a grade of D-. The assessment noted that these systems contain facilities that are nearing the end of their useful lives and need upgrades to meet future regulatory requirements. The condition of pipelines also has energy implications. According to some specialists we spoke with, up to 50 percent of water is lost through leaking pipes, which results in a loss of the energy that was used to extract, convey, and treat that water. Furthermore, if pipelines are not routinely cleaned, blockages can increase friction in the pipes, requiring additional energy to push water through them. Required treatment level. The energy needed for drinking water and wastewater treatment is affected by the treatment levels required to meet existing water quality standards, with each additional treatment level increasing energy demands. In the case of wastewater treatment, characteristics of the water body into which treated effluent is discharged can affect the required level of treatment. For example, San Diego officials told us the city's wastewater treatment facility has been granted a modified permit by EPA. According to these officials, this permit allows San Diego to treat its wastewater only to an advanced primary level, in part because years of ocean monitoring have shown that the plant's discharges have no negative impact on the Pacific Ocean. If the city had to treat its wastewater to secondary treatment levels, city officials estimate that its energy usage would increase six to nine times as a result of having to use more energy-intensive technologies to meet these higher standards. Type of treatment process. The type of treatment process used at drinking water and wastewater facilities also influences the energy demands of providing drinking water and wastewater services to urban users. For example, treatment plants that use the activated sludge process for secondary treatment use more energy than plants that use other processes, such as trickling filters or lagoon systems. The activated sludge process can account for 70 percent of a wastewater treatment plant's energy consumption because of the energy needed to power the blowers that pump oxygen into the wastewater to sustain the micro-organisms. Furthermore, according to many of the specialists we spoke with, a number of the new technologies used in drinking water treatment plants are more energy intensive than traditional treatment technologies. For example, some treatment plants are installing ultraviolet light disinfection processes, which are more energy intensive than traditional disinfection with chlorine and can account for 10 to 15 percent of a plant's total energy use.
Other energy-intensive technologies that are increasing energy demands for water treatment include filtration using membranes and ozonation, a process that destroys bacteria and other micro-organisms through an infusion of ozone. Water use and type of customer. Characteristics related to customer water use, such as how and where water is consumed, can also influence the amount of energy needed to provide water and wastewater services to urban users, according to specialists we spoke with. Large amounts of household energy are consumed by heating water for showering, dishwashing, and other uses; such uses require more energy than household uses that do not involve heating, such as flushing toilets. In addition, some specialists told us that where the water is used influences the amount of energy consumed. For example, water used in tall apartment buildings or skyscrapers requires energy-intensive pumps to move the water to the top floors. Furthermore, according to some specialists we spoke with, the type of customer, such as whether the customer is residential or industrial, affects the energy demands of providing water and wastewater services. For example, Memphis has two wastewater treatment plants, one of which is located in an industrial section of the city and receives a higher percentage of its wastewater from industrial sources than the other facility, which receives a higher percentage of its wastewater from residential sources. Because the industrial wastewater contains higher levels of organic contaminants and thus requires more energy for treatment, the two facilities consume different amounts of energy on a per-gallon basis. Water availability. As current water supplies diminish, some cities, especially those in areas that are already water stressed, are moving toward alternative water supply sources that will require more energy for treatment than the processes used for surface water and groundwater. For example, to help meet future demands for water and reduce dependence on imported water supplies in San Diego, the region is pursuing energy-intensive seawater desalination, which can be 5 to 10 times more energy intensive than conventional processes to treat surface water and groundwater. Other areas that do not have ready access to seawater, such as Tucson, Arizona, are pursuing desalination of brackish groundwater—water that is less saline than seawater but has a higher salinity than freshwater. Although treating brackish water is less energy intensive than seawater desalination, it still can use two to three times more energy than conventional water treatment processes for freshwater supplies. Furthermore, San Diego is studying the viability of treating a portion of its reclaimed water—wastewater effluent that is treated to an advanced level and is suitable for nonpotable applications such as irrigation—for potable water use. To implement such a system, San Diego would need to add energy-intensive advanced treatment processes to its current wastewater treatment system. However, because this additional energy use would offset the energy demands for imported water, city officials told us the project is expected to result in a net reduction in San Diego's energy profile. Using reclaimed water can also increase energy demands for pumping, depending on the design of the existing wastewater system.
That is, many wastewater collection systems were designed with treatment plants located in low-elevation areas to take advantage of gravity in conveying the wastewater to the plant. However, if wastewater is recycled, energy could be needed to pump this water against the flow of gravity into the distribution system, but such increases may still be less energy intensive than reliance on imported water. Future regulatory changes. According to many specialists, additional or more stringent regulatory standards to address growing concerns about emerging contaminants and nutrients in the nation's water bodies could increase the energy demands of treatment processes in the future. Specifically, any more stringent standards that are promulgated would most likely require additional levels of treatment, and energy-intensive technologies, such as ozonation and membrane filtration, may be necessary to meet such new standards. More stringent regulations in the future could also increase energy demands even for facilities that have already implemented such technologies. For example, according to officials of the Washington, D.C., wastewater treatment plant, while the facility already must meet the nation's most stringent permit requirements and uses advanced treatment processes, stricter standards are expected to increase the plant's energy demands, in part because new energy-intensive technologies may need to be added to the plant's treatment process. Regulatory changes could also increase energy demands at other stages of the urban water lifecycle. For example, higher standards for effluent discharge from wastewater treatment plants could increase the energy required for treatment. Furthermore, stricter water quality standards for receiving waters could require more plants to employ advanced treatment, resulting in increased energy use for the additional treatment or for pumping effluent farther away to other waters. Complexity of water systems. In addition to location-specific factors, the complexity of some urban water systems can make assessing the energy demands of the urban water lifecycle challenging. For example, some urban water systems, like San Diego's, are highly complex, involving a number of different entities that have responsibility for different parts of the system. Specifically, the City of San Diego currently imports 85 to 90 percent of its water from the Colorado River and northern California. In addition, the city's regional drinking water, wastewater, and recycled water systems are managed by a number of different organizations responsible for conveying drinking water, wastewater, and recycled water to multiple treatment facilities, with over 160 pumping stations spread over 400 square miles within the City of San Diego's service territory alone. As a result, collecting consistent data on energy use from each of these organizations is challenging, according to San Diego water officials we spoke with. Specialists we spoke with and studies we reviewed identified a variety of technologies and approaches that can improve the energy efficiency of drinking water and wastewater processes associated with the urban water lifecycle, and determining the appropriate solution depends on the circumstances of a particular system.
However, adoption of these technologies and approaches may be hindered by costs; inaccurate water pricing; barriers associated with operational factors, such as limited staffing levels at water utilities; competing priorities at drinking water and wastewater facilities; and lack of public awareness about the energy demands of the urban water lifecycle. Several key technologies and approaches are currently available that can improve the energy efficiency of drinking water and wastewater processes, but determining the most appropriate solution depends on the circumstances of a particular system and requires an understanding of the system's current energy use. Many studies that we reviewed and specialists we spoke with identified process optimization, equipment and infrastructure upgrades, water conservation, and improved energy management as approaches that can help reduce the energy demands for water. In addition, the increased use of renewable energy could offset the energy purchased by water utilities from energy providers. According to some studies we reviewed, energy consumption by water and wastewater utilities can account for 30 to 50 percent or more of a municipality's energy bill. Optimizing drinking water and wastewater system processes, including energy-intensive operations like pumping and aeration, was identified in many studies that we reviewed as an approach to reducing the energy demands of the urban water lifecycle. Implementing monitoring and control systems and modifying pumping and aeration operations are some ways to reduce energy use through process optimization. Implementing monitoring and control systems. Monitoring and control systems, also known as supervisory control and data acquisition systems, can be used to optimize drinking water and wastewater operations. Such systems provide a central location for monitoring and controlling energy-consuming devices and equipment, giving plant operators the ability to schedule operations or automatically start and stop devices and equipment to manage energy consumption more effectively and improve overall operations. Modifying pumping operations. A variety of modifications could increase the efficiency of pumping systems. For example, operating constant-speed pumps as near as possible to their most efficient speed, using higher-efficiency pumps rather than lower-efficiency pumps, and operating multiple smaller pumps rather than a few large pumps to better match pumping needs can help maximize pumping efficiency. In addition, using devices to monitor and control pump speeds—known as variable frequency drives (VFD)—may allow facility operators to accommodate variations in water flows by running pumps at lower speeds and drawing less energy when water flows are low. Potential energy savings from the use of VFDs can range from 5 to 50 percent or more, according to studies we reviewed (the pump affinity relationship behind these savings is sketched following the equipment and maintenance discussion below). However, these studies and some specialists we spoke with also noted that VFDs are not necessarily well suited for all applications—such as when flow is relatively constant—and that potential benefits of VFDs should be evaluated based on system characteristics, such as pump size and variability of flow. Modifying aeration operations. According to many studies we reviewed and specialists we spoke with, aeration in wastewater treatment consumes a significant amount of energy, and systems can be reconfigured and better controlled to improve energy efficiency.
Specifically, blowers and mechanical aerators are typically powered by a large motor, and installing variable controls on blowers to enable operators to better match aeration with oxygen requirements can reduce energy demands. Likewise, several studies noted that dissolved oxygen control systems can be used to match oxygen supply with demand by monitoring the concentration of dissolved oxygen in the wastewater and adjusting the blower system or mechanical aerator speed accordingly. In addition, probes can be installed to monitor dissolved oxygen levels within the wastewater and signal operators when the system may need adjustment. According to many studies and specialists we spoke with, installing more efficient equipment—motors, pumps, blowers, and diffusers—for energy-intensive processes such as aeration and pumping can reduce energy use. In addition, ensuring the proper sizing and maintenance of equipment and infrastructure can improve energy efficiency. Upgrading equipment. Replacing less efficient equipment with more energy-efficient equipment can reduce energy use. For example, installing more efficient motors could reduce energy use by 5 to 30 percent, according to studies we reviewed. In addition, blower and diffuser technologies, including high-speed "turbo" blowers and fine or ultra-fine bubble diffusers, could decrease the energy demands of aeration. High-speed turbo blowers use less energy than other blower types, although, because these blowers are a new technology and relatively few are in use, efficiency claims are not yet well documented, according to a 2010 EPA report. Energy-saving estimates for fine bubble diffusers, which have higher oxygen transfer efficiencies than coarse bubble diffusers, range from 9 to 50 percent or more, but some specialists and studies expressed concerns about maintenance requirements as well as the durability of this technology. Right-sizing equipment. Many wastewater treatment systems were designed to handle greater capacity in the future because of anticipated population growth. However, this growth has not always occurred and, as a result, existing equipment may be oversized and consume more energy than is needed to treat current flows, according to some specialists we spoke with. Proper sizing and selection of pumping and aeration equipment to more closely match system needs can help maximize efficiency. For example, in Washington, D.C., the operators of the wastewater treatment plant replaced a 75-horsepower motor with a 10-horsepower motor in one facility to better match actual energy demands. Improving maintenance and leak detection technology. Periodic inspections to assess pump performance and the need for replacement or maintenance of electrical systems and motors can increase the energy efficiency of the overall system, according to studies we reviewed. In addition, leak detection technologies can identify leaks throughout water systems, thereby reducing water loss and the related energy required to pump and treat that "lost" water. For example, acoustic leak detection systems use sensors to monitor for sounds that may indicate potential leaks and relay the data back to a central control room, which helps water utility staff identify actual leaks and schedule maintenance accordingly. The San Diego County Water Authority, which provides water to San Diego and other areas in southern California, has fiber optic lines in place to monitor its pipeline 24 hours a day to detect evidence of leaks.
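As noted in the pumping discussion above, the large percentage savings attributed to VFDs follow from the pump affinity laws. The sketch below illustrates the cube-law relationship; the 80 percent speed figure is a hypothetical operating point chosen for illustration, and real savings are smaller once static head and motor and drive losses are counted.

```python
# Minimal sketch of the pump affinity law behind VFD savings. The numbers
# are hypothetical; actual savings depend on static head and drive losses.
def affinity_power_fraction(speed_fraction: float) -> float:
    """Shaft power scales roughly with the cube of pump speed: P2/P1 = (N2/N1)^3."""
    return speed_fraction ** 3

# Running a pump at 80 percent speed during low-flow periods:
frac = affinity_power_fraction(0.80)
print(f"power draw ~{frac:.0%} of full-speed power")  # ~51%, a ~49% reduction

# This cube-law behavior is why modestly slowing a pump with a VFD can yield
# the large percentage savings cited above, and why the benefit shrinks when
# flow (and thus required speed) is nearly constant.
```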
Many studies we reviewed and specialists we spoke with also identified water conservation as an approach to reducing the energy needed for the urban water lifecycle. Several studies noted that decreased customer water use could directly translate into energy savings. Water conservation also reduces the amount of energy used to convey, treat, and distribute drinking water to customers. Studies we reviewed and specialists we spoke with identified a variety of tools that utilities can use to promote water conservation, including enhanced metering, increased water prices, public education, and incentives to install water-efficient appliances. For example, San Diego is implementing advanced metering tools to better manage its system and to provide real-time information to customers regarding their water use in order to help them make choices that conserve water. In addition, EPA has developed water efficiency and performance criteria for several product and program categories through WaterSense, a federal water efficiency program. While many technologies and approaches have been identified to reduce the energy demands for water, determining the most appropriate solution depends on the circumstances of a particular system—including the type of facilities and treatment processes in place—and requires an understanding of current energy use. Several studies we reviewed identified improved energy management, including conducting energy audits of treatment facilities or systems, as a necessary first step to reducing energy demands. Specifically, specialists told us that by providing utility managers with information about their facilities' energy use, energy audits can help managers identify opportunities to change plant operations in ways that will save energy. For example, the energy supplier for one wastewater treatment plant in Memphis conducted an energy audit of the blower system, which used about 75 percent of the plant's total energy. As a result of this audit, operators changed their practices to run blowers at the lowest levels possible while still ensuring they continued to meet the effluent discharge standards required by the plant's permits. Similarly, in 2000, San Diego established an in-house energy management program, which includes an audit team that looks for technologies and approaches to lessen the energy demands of the city's drinking water and wastewater systems. The team studies the efficiency of existing equipment and treatment processes and considers upgrading or replacing equipment with available energy-efficient technologies. For example, the energy audit team identified over a dozen energy conservation measures that could be applied to reduce energy consumption at two of the city's sewer pump stations, including installing timers to turn off lighting and upgrading, resizing, and replacing motors and blowers. In addition, EPA's Energy Star program provides energy management tools and strategies to support the successful implementation of energy management programs. Officials told us that EPA also works with municipal drinking water and wastewater utilities to provide information on potential energy efficiency opportunities. EPA's online benchmarking tool, known as the Portfolio Manager, offers wastewater treatment plant managers the opportunity to compare the energy use of their plants with that of other plants using the EPA energy performance rating system.
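A simple way to see what such audits and benchmarking start from is to normalize each plant's energy use per unit of water treated. The sketch below uses invented plant data and a generic kilowatt-hours-per-million-gallons metric; it is not the actual rating model behind EPA's Portfolio Manager.

```python
# Hypothetical benchmarking sketch: normalize plant energy use per million
# gallons treated so plants of different sizes can be compared. All data are
# made up; this is not EPA Portfolio Manager's actual methodology.
plants = {
    # plant name: (annual kWh, annual million gallons treated)
    "Plant A": (9_000_000, 5_000),
    "Plant B": (2_500_000, 1_000),
    "Plant C": (13_000_000, 9_000),
}

intensity = {name: kwh / mg for name, (kwh, mg) in plants.items()}
median = sorted(intensity.values())[len(intensity) // 2]
for name, kwh_per_mg in sorted(intensity.items(), key=lambda kv: kv[1]):
    # Flag plants well above the peer median as candidates for an audit.
    flag = "  <-- candidate for an energy audit" if kwh_per_mg > 1.2 * median else ""
    print(f"{name}: {kwh_per_mg:,.0f} kWh per million gallons{flag}")
```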
EPA has also published a variety of educational materials for drinking water and wastewater utilities to help identify, implement, measure, and improve energy efficiency and renewable energy opportunities. Specialists we spoke with and studies we reviewed identified two additional approaches for reducing the energy required to treat and distribute water: improving advanced treatment technologies and redesigning a city or region's water system. Improving advanced treatment technologies. According to EPA officials, and as previously noted by specialists, improving energy-intensive advanced treatment technologies—such as ultraviolet disinfection, ozone, and membrane technologies—is important because plants are increasingly using them. For example, the use of membrane materials that require less pressure to push water through to remove contaminants could decrease the energy demands of that technology. In addition, some specialists we spoke with told us that newer technologies are being developed, such as forward osmosis, that may offer alternative treatment approaches that are more efficient than the technologies currently used for desalination. Several specialists told us the federal government should conduct additional research to understand and improve the energy efficiency of water supply, treatment, and water use—for example, by conducting more research on energy-efficient desalination technologies. Redesigning water systems. Some specialists noted that redesigning water systems in ways that better integrate drinking water, wastewater, and stormwater management could improve the energy efficiency of water systems overall. Decentralizing treatment systems, implementing approaches to better manage stormwater, reusing wastewater, and using less energy-intensive processes for biological treatment can help reduce the energy needed for providing drinking water and wastewater services. For example, current water systems primarily rely on a few plants with large capacities to treat drinking water and wastewater. Some specialists told us that systems could be redesigned to incorporate more treatment plants with smaller capacities and to locate these plants closer to the point of water use by customers, thereby reducing some of the energy required for pumping to the treatment site. In addition, some specialists identified improvements in stormwater management through strategies such as low-impact development—which involves land use planning and design to better manage stormwater—as a way to reduce the energy required for treatment. For example, decreasing stormwater infiltration into some wastewater systems through low-impact development activities, such as the capture and use of rainwater, would reduce flows into treatment plants, thereby decreasing the energy needed for treatment. In addition, reusing wastewater for purposes that may not require potable water, such as industrial processes or landscaping, may reduce overall energy use by decreasing the energy currently used to pump, treat, and distribute potable water to these customers, according to some studies we reviewed. However, the potential for energy savings from reuse depends on the energy intensity of a given system's water supply as well as the level of treatment needed for potential uses. Furthermore, some studies we reviewed and specialists we spoke with noted that relying more on biological treatment processes that do not require aeration, such as using lagoons or trickling filters, may be an option to reduce energy demands.
However, these approaches may be limited by available space in urban areas and therefore may not be applicable everywhere. Many studies we reviewed and specialists we spoke with stated that drinking water and wastewater utilities could adopt renewable energy projects to reduce the energy purchased from energy providers. Renewable energy projects may include solar, wind, and hydroelectric power as well as the recovery and use of biogas from wastewater treatment processes. In addition, some studies we reviewed and specialists we spoke with identified hydro turbines as an option for recovering energy in the distribution system. For example, water systems with changes in topography that have pressure-reducing valves in place can install turbines that generate electricity as water flows past. This energy could then be recovered for use in powering equipment. The city of San Diego has adopted a variety of renewable energy projects to power its drinking water and wastewater treatment operations. For example, the city installed a 945-kilowatt solar power system at the Otay Water Treatment Plant that produces enough electricity to meet the power needs of the plant's pumping operation (see fig. 4). In addition, at the city's Point Loma Wastewater Treatment Plant, both methane and hydroelectric power are recovered from wastewater processes. The plant uses digestion processes to treat organic solids resulting from its wastewater treatment processes. Methane, a by-product of the digestion process, is removed from the digesters and used to power two engines that supply all of the plant's energy needs, making it energy self-sufficient. In addition, the plant recovers hydroelectric power from the treated effluent that it discharges into the ocean. The effluent drops 90 feet from the wastewater treatment plant to the ocean, powering a 1,350-kilowatt hydroelectric plant. The city can sell any excess energy produced by the plant back to the electric utility. While renewable energy projects have the primary benefit of reducing the energy that water treatment facilities must obtain from outside providers, such projects could also reduce overall energy use. For example, solar power systems co-located at treatment facilities in San Diego may offset slightly more electricity than they produce because electricity generated off-site by the energy provider and transferred over a greater distance loses some energy during transmission. Specialists we spoke with identified a number of key barriers to adopting the available technologies and approaches that could reduce the energy demands of the urban water lifecycle. These barriers fall into five categories: (1) costs associated with these technologies, (2) inaccurate water pricing, (3) barriers associated with how water utilities operate, (4) competing priorities at drinking water and wastewater facilities, and (5) the lack of public awareness about the energy demands of the urban water lifecycle. Energy-saving technologies may lessen the energy demands of the urban water lifecycle, but such improvements are often expensive to adopt. Many specialists told us that, as a result, utilities may not be able to justify the costs necessary to install energy-efficient equipment.
For example, some specialists told us that upgrading to VFDs, higher-efficiency pumps, and ultra-fine bubble diffusers may lessen a water facility's energy demands, but the costs of installing these technologies can be prohibitive for some systems, and it can take years to realize the full energy-saving benefits. As a result, some utility operators may choose to wait until there is an immediate need to upgrade equipment because the costs can be justified more easily at that point. Similarly, some specialists told us that the cost of installing renewable energy projects, such as solar panels, can be a barrier to adoption for some treatment facilities. According to an energy specialist we spoke with, it may take over 30 years to fully realize the cost savings from such projects. However, a DOE official noted that while expensive in the past, the cost of solar panels has been decreasing in recent years. Furthermore, installing energy-efficient equipment and making infrastructure upgrades, such as replacing leaking pipelines, can be particularly challenging for smaller water utilities because they often compete for limited funds against other municipal services, such as fire and police protection. In addition, in areas where energy costs are low, there may be little incentive for water utility operators to implement capital-intensive practices to save energy. To help overcome the barriers associated with the costs of upgrading facilities, some specialists told us that utilities should conduct cost analyses that account for the total savings incurred over the life of energy-saving projects rather than focus only on short-term returns on investment. Some specialists we spoke with also suggested that utilities take advantage of the federal funding available through the Drinking Water and Clean Water State Revolving Fund programs, which can be used to fund a variety of projects that improve water and energy efficiency. These programs provide financial assistance for drinking water and wastewater infrastructure projects, respectively, and for certain other purposes, such as installing water meters, installing or retrofitting water-efficient devices, and promoting water conservation. In addition, the American Recovery and Reinvestment Act of 2009 and EPA's fiscal year 2010 appropriation encourage states to use a portion of those funds for such energy and water efficiency projects. According to many specialists with whom we spoke, the true cost of water is often not reflected in the rates customers are charged. Specifically, specialists told us that water subsidies have kept water rates artificially low, so rates do not reflect the actual cost, including energy costs, of pumping, treating, and moving drinking water and wastewater. The effect of this situation is twofold. First, there may be little incentive for customers to use water more efficiently if they are not paying the true cost of it. Second, because these reduced water rates generally do not cover the actual costs incurred by drinking water and wastewater facilities, some utilities do not generate enough revenue to implement upgrades that could lessen their facilities' energy demands. Some specialists noted, however, that rate increases are not a politically popular approach and may be met with public and political resistance. Other barriers to adopting energy-reducing technologies and approaches are operational in nature.
Specifically, specialists we spoke with noted a number of such challenges, including utilities not having staff with adequate knowledge about technologies or access to energy-use data, reluctance to change, and lack of coordination between water and energy utilities. For example, several specialists told us that smaller utilities lack staff with knowledge about energy-efficient techniques or may have only part-time operators in place to manage or oversee new technologies. Because operators generally are the advocates for energy-efficiency upgrades, the specialists believe it could be difficult to gain support for such investments without knowledgeable operators. Further, operators may be unaware of the amount of energy their facilities use because, in many municipalities, energy bills are received and paid by other departments, and operators may not have access to these data. Consequently, operators may be unaware of the potential for energy savings from upgrades. Moreover, many specialists told us that operators often resist altering the practices that they have employed for years to move and treat water and may be reluctant to adopt new technologies or approaches, especially if the effectiveness of such changes has not yet been adequately proven. Some specialists also told us that drinking water and wastewater utilities do not coordinate as closely as they could with energy utilities to identify opportunities to optimize their operations and, thereby, lessen their energy demands. For treatment plant operators, considering the energy demands of treatment can be an afterthought to complying with water quality regulations, according to some specialists with whom we spoke. One drinking water utility operator told us that energy is considered to the extent possible when decisions are being made about altering treatment processes to meet regulatory requirements but that the safety of the water supply is his primary concern. For example, when the city of San Diego's Public Utilities Department was considering which disinfection technology to employ, it chose ozonation because it would provide more effective disinfection for the plant and reduce disinfection by-products, even though it is more energy intensive than the current disinfection process. In addition, to ensure that minimum effluent discharge standards are met, water utility operators may over-treat wastewater by, for example, running aeration blowers at higher levels than necessary to meet regulatory requirements. In light of the potential for more stringent standards in the future, some specialists noted that regulators should consider the energy demands associated with increased water quality standards. Many specialists told us that customers often are not aware of and do not understand the energy demands of drinking water and wastewater services. While some customers may be aware of their total energy use, it may not be clear to them how much of that energy is used for heating water and other water-related uses. In addition, customers may not be aware that water conservation saves not only water but also energy. Some specialists told us that federal programs such as EPA's WaterSense and Energy Star and some state efforts, such as those in California and New York, have begun to educate the public on the energy demands of the urban water lifecycle; however, additional efforts may be needed to increase awareness of the energy-water nexus for providing drinking water and wastewater to urban users.
We provided a draft of this report to the Departments of Defense, Energy, and the Interior and EPA for review and comment. DOE and EPA provided technical comments that we incorporated into the final report as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; the Administrator of EPA; the Secretaries of Defense, Energy, and the Interior; and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact us at (202) 512-3841 or mittala@gao.gov or gaffiganm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Our objectives for this review were to describe what is known about (1) the energy needed for each stage of the urban water lifecycle, and (2) technologies and approaches that could lessen the energy needed for the urban water lifecycle, as well as any identified barriers that exist to their adoption. We focused our work on community drinking water systems and publicly owned wastewater facilities located in the United States. We also focused on residential customers and, to the extent possible, commercial, industrial, and institutional customers. To address both of these objectives, we conducted a systematic review of studies and other documents that examine the energy required to extract, move, use, and treat water, including peer-reviewed scientific and industry periodicals, government-sponsored research, and reports from nongovernmental research organizations. In conducting this review, we searched databases such as ProQuest, EconLit, and BioDigest, and used an iterative process to identify additional studies, asking specialists to identify relevant studies and reviewing studies from article bibliographies. We reviewed studies that fit the following criteria for selection: (1) the research was of sufficient breadth and depth to provide observations or conclusions directly related to our objectives; (2) the research demonstrated the energy demands of water supply systems in the United States; (3) the studies typically were published between 2000 and 2010; and (4) the studies were determined to be methodologically sufficient. We examined key assumptions, methods, and relevant findings within the studies related to drinking water processes, customer end use, and wastewater processes. We believe we have included the key studies and have qualified our findings, where appropriate. However, it is possible that we may not have identified all of the studies with findings relevant to these two objectives. We also selected a nonprobability sample of three cities to examine in greater depth to better understand regional and local differences related to urban water lifecycles: Memphis, Tennessee; San Diego, California; and Washington, D.C. We chose these cities as illustrative case studies based on criteria such as their type of water source; water availability; type of wastewater system; unique characteristics, such as potential for desalination; and economic factors, such as energy costs. The results from our visits to these cities cannot be generalized to all U.S. 
cities, but they provide valuable insights as illustrative case studies. For each of these case studies, we analyzed documentation from and conducted interviews with a wide range of specialists to gain the views of diverse organizations covering all stages of the urban water lifecycle. These groups included relevant drinking water and wastewater treatment facilities and state and local agencies responsible for water or energy. We requested interviews with representatives from electrical utilities in each location. In San Diego and Washington, D.C., the utilities did not meet with us or told us they did not have relevant data. In Memphis, however, which has a combined water and energy utility, an energy official was present at our meeting with the utility, but the utility told us it does not track data on energy for water-related uses for some customer types. In addition, we conducted site visits to drinking water and wastewater treatment facilities in each of these locations to better understand the role that energy plays in their operation. In addition to the specialists we interviewed as part of our illustrative case studies, we also interviewed a range of specialists whom we identified as having expertise related to the energy needs of all stages of the urban water lifecycle in general. We selected these specialists using an iterative process, soliciting additional names from each person we interviewed. From among those specialists identified, we interviewed those who could provide us with a broad range of perspectives on the energy needs of the urban water lifecycle. We also interviewed specialists whom we identified during our systematic review of studies who have analyzed (1) the energy needed in one or more stages of the water lifecycle at the national or local level or (2) techniques available to reduce the energy demands for water. These specialists represented a variety of organizations, including drinking water and wastewater treatment facilities; state and local government offices responsible for water or energy; officials from EPA; researchers from some of the Department of Energy's national laboratories, such as Sandia National Laboratories; university researchers; water and energy industry representatives from groups such as the American Water Works Association and the Water Research Foundation; and relevant nongovernmental organizations, such as the Pacific Institute, a nonpartisan research institute that works to advance environmental protection, economic development, and social equity. The specialists also included individuals with knowledge of the energy demands for water in other states, including Arizona, Colorado, Florida, New York, and Wisconsin, to gain a better understanding of water and energy issues in other regions around the United States. We also interviewed other federal agency officials and analyzed data and information from federal agencies that have responsibilities related to the energy needs of the urban water lifecycle—the Department of Defense's U.S. Army Corps of Engineers, the Department of Energy, the Department of the Interior's U.S. Geological Survey and Bureau of Reclamation, the Environmental Protection Agency, and the National Science Foundation. To analyze the information gathered through the interviews with specialists and the scientific studies, research, and other key documents reviewed, we conducted content analyses.
Specifically, to conduct the content analysis of information gathered through interviews with specialists, we reviewed each interview, selected relevant statements, and identified and labeled these statements using a coding system that identified the topic area. Once relevant statements from the interviews were extracted and coded, we used the coded data to develop key themes. An independent reviewer then verified that the codes were accurately applied to the statements and that the key themes were correctly developed. During the course of our review, we conducted over 60 interviews with over 100 specialists. For the purposes of our interview analysis, each interview represents the views of one specialist even if more than one specialist was present at the interview. We used the following categories to quantify the responses of specialists and officials: "some" refers to responses from 2 to 5 specialists, "several" refers to responses from 6 to 10 specialists, and "many" refers to responses from 11 or more specialists. We used a similar coding scheme to identify key themes resulting from our analysis of the scientific studies, research, and other key relevant documentation. In addition to the contact named above, Elizabeth Erdmann, Assistant Director; Colleen Candrl; Antoinette Capaccio; Janice Ceperich; Nancy Crothers; Abbie David; Angela Leventis; Katherine Raheb; Ellery Scott; Rebecca Shea; Jena Sinkfield; Kevin Tarmann; and Lisa Vojta made significant contributions to this report.
In its 2001 Nuclear Posture Review, DOD significantly expanded the range of strategic capabilities to include not only the old Triad, which consisted of nuclear-armed intercontinental ballistic missiles, submarine-launched ballistic missiles, and strategic bombers, but also conventional and nonkinetic offensive strike and defensive capabilities. The review also called for revitalizing the U.S. research and development and industrial infrastructure that would develop, build, and maintain offensive forces and defensive systems and be capable of responding in a timely manner to augment U.S. military capabilities when necessary. According to DOD, the three legs of the New Triad—offensive strike, active and passive defenses, and responsive infrastructure—are intended to be supported by timely and accurate intelligence, adaptive planning, and enhanced command and control capabilities. Figure 1 shows the three legs of the New Triad and its supporting elements. DOD concluded in the 2001 review that while nuclear weapons will continue to play a critical role in defending the United States, the combination of capabilities included in the New Triad would increase the military options available to the President and Secretary of Defense and allow for the development of responsive, adaptive, and interoperable joint forces that could be employed in a wider range of contingencies. DOD's review indicated that the additional capabilities provided by the New Triad would partially mitigate the effects of any reductions in the number of operationally deployed strategic nuclear warheads that are planned through 2012. Table 1 shows the weapons systems and capabilities that make up the New Triad. In its 2001 Nuclear Posture Review, DOD indicated that new initiatives and investments would be required to achieve a mix of new or improved capabilities composing the offensive, defensive, and responsive infrastructure legs and the supporting command and control, intelligence, and adaptive planning elements of the New Triad. In particular, the review found that major investment initiatives would be needed in the areas of advanced nonnuclear strike, missile defenses, command and control, and intelligence. DOD also plans to improve existing New Triad-related capabilities by modernizing existing weapon systems and enhancing the tools used to build and execute strike plans to provide more flexibility in adapting or developing military options during crises. An Acquisition, Technology, and Logistics official in the Office of the Secretary of Defense told us that DOD intends to partially address near-term affordability issues for the New Triad by enhancing capability characteristics of current weapon systems, such as range, and by leveraging capabilities already in development. In March 2003, DOD published a Nuclear Posture Review Implementation Plan that is intended to identify initiatives for developing the New Triad and institutionalizing the Nuclear Posture Review. DOD plans to implement the New Triad concept and many of the capabilities identified by the Nuclear Posture Review by 2012. However, DOD states that further investments are likely to be needed beyond that time frame as existing nuclear platforms, such as the Minuteman III intercontinental ballistic missile system, age and follow-on nuclear weapon systems are proposed. The Nuclear Posture Review also states that DOD should conduct periodic assessments to determine its progress in developing and integrating capabilities for the New Triad.
Specifically, these strategic capability assessments are to review (1) the progress to date in reducing the number of operationally deployed strategic nuclear weapons, (2) the state of the security environment, and (3) the progress made in the development of the New Triad. An assessment team, which included representatives from DOD and the Department of Energy, completed its first Nuclear Posture Review strategic capability assessment and associated report in April 2005. An Office of the Secretary of Defense official told us that DOD plans to update its first assessment in the fall of 2005 to support the department's conduct of the Quadrennial Defense Review. DOD intends to conduct subsequent assessments about every 2 years through 2012. Many DOD organizations, including the Joint Staff, military services, combatant commands, and defense agencies, as well as the Department of Energy, have responsibilities for implementing various aspects of the New Triad. These responsibilities are broadly defined in relevant New Triad implementation and guidance documents. Within the Office of the Secretary of Defense, two organizations have key responsibilities for overseeing and managing New Triad implementation efforts: The Office of the Under Secretary of Defense for Policy is responsible for developing the policy and guidance to implement the 2001 Nuclear Posture Review and for establishing an organizational framework for coordinating New Triad initiatives within DOD. The Office of the Under Secretary of Defense for Acquisition, Technology and Logistics is responsible for providing oversight for the development and deployment of New Triad capabilities. The U.S. Strategic Command also has a significant role in implementing the New Triad and supporting its missions. In addition to its responsibilities for strategic nuclear deterrence and military space operations missions, the command was assigned several new missions related to the New Triad in January 2003. These missions are global strike; integrated missile defense; DOD information operations; and command, control, communications, computers, intelligence, surveillance, and reconnaissance. In January 2005, the Secretary of Defense also assigned the command responsibility for the mission of combating weapons of mass destruction. Appendix I provides additional information about the U.S. Strategic Command's missions. Additionally, the Office of Program Analysis and Evaluation in the Office of the Secretary of Defense is responsible for assembling and distributing the FYDP, which DOD uses to formulate the projected resources and proposed appropriations to support DOD programs, projects, and activities, including those related to the New Triad. The office is also responsible for coordinating with DOD components any proposed changes to the FYDP's structure, such as updates to existing program element titles and definitions. The Office of the Under Secretary of Defense (Comptroller) has responsibility for the annual budget justification material that is presented to Congress. These offices work collaboratively to ensure that the data presented in the budget justification material and the FYDP are equivalent at the appropriation account level. The FYDP is a report that resides in an automated database, which is updated and published at least three times a year to coincide with DOD's internal budget development activities and annual budget submission to Congress.
It provides projections of DOD's near- and midterm funding needs and reflects the total resources programmed by DOD, by fiscal year. The FYDP includes data on estimates for the fiscal year reflected in the current budget request and at least 4 subsequent years. Both detailed data and a summary report are generally provided to Congress with DOD's annual budget submission. The FYDP is used as a source of data both for analysis and as an input to alternative ways of displaying and portraying actual and programmed resources. It contains data related to the forces, manpower, and total obligation authority for each program element. The FYDP is organized into 11 major force program categories, comprising combat forces and support programs, which are used as a basis for internal DOD program review. The major force program categories include strategic forces, general-purpose forces, research and development, and special operations forces. The FYDP is further arranged according to the appropriation structure used by Congress to review budget requests and enact appropriations, which includes major appropriation categories for procurement; operation and maintenance; military personnel; research, development, test, and evaluation; and military construction. The FYDP's structure therefore serves to crosswalk DOD's internal review structure with the congressional review structure, as illustrated in the sketch below. In 2003, DOD began implementing the Joint Capabilities Integration and Development System (JCIDS) process to identify improvements to existing capabilities and guide development of new capabilities from a joint perspective that recognizes the need for trade-off analysis. The new process is designed to provide an approach to defense planning that looks at the broad range of capabilities needed to address contingencies that the United States may confront in the future. When fully implemented, JCIDS is intended to provide an enhanced methodology using joint concepts that will identify and describe existing or future shortcomings in capabilities and identify integrated solutions that meet those capability needs. The system is also expected to provide better linkage to the acquisition process and improve prioritization of validated joint warfighting capability proposals. Specifically, it is intended to provide a broader review of proposals than did the previous planning process by involving additional participants, including the combatant commands, early in the process. The analyses conducted during the process are to result in a set of potential solutions, including additional resources or changes to doctrine and training, designed to correct capability shortcomings. These solutions are then incorporated into roadmaps that show the resource strategies to develop and acquire the needed capabilities. DOD has not fully identified the projected spending for the New Triad in the FYDP to date. In light of the challenges DOD faces in transforming strategic capabilities, decision makers need the best and most complete data available about the resources being allocated to the New Triad when making decisions on the affordability and sustainability of, and trade-offs among, the efforts to develop and acquire capabilities. The FYDP is one of the principal tools available to help inform DOD and Congress about resource data relating to these efforts.
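As a concrete illustration of the crosswalk described above, the following toy sketch tags each program element with both an internal major force program category and a congressional appropriation category and rolls the same dollars up either way. All program element codes and dollar amounts are invented for this example.

```python
# Toy illustration of the FYDP crosswalk: each invented program element
# carries both a major force program (MFP) category for internal DOD review
# and an appropriation category for congressional review, so the same
# dollars can be totaled either way. Codes and amounts are made up.
from collections import defaultdict

# (program element, major force program, appropriation, $ millions)
FYDP_ROWS = [
    ("0101xxx", "Strategic Forces",         "Procurement",               1200.0),
    ("0102xxx", "Strategic Forces",         "Operation and Maintenance",  800.0),
    ("0603xxx", "Research and Development", "RDT&E",                      450.0),
    ("0208xxx", "General Purpose Forces",   "Procurement",                300.0),
]

def rollup(field_index):
    """Total dollars by MFP (field_index=1) or by appropriation (field_index=2)."""
    totals = defaultdict(float)
    for row in FYDP_ROWS:
        totals[row[field_index]] += row[3]
    return dict(totals)

print(rollup(1))  # internal DOD program review view
print(rollup(2))  # congressional appropriation view of the same dollars
```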
While DOD has identified some New Triad spending in its analyses and in relevant New Triad documents, our notional analysis of New Triad-related program elements indicates that overall projected spending for the New Triad through fiscal year 2009 could be much greater when other program elements that provide New Triad capabilities are considered. Additionally, the current FYDP data structure does not expressly identify and aggregate New Triad program elements in a way that would allow identification of New Triad spending, and the program elements included in the FYDP's existing major force program category for strategic forces do not fully capture the broader range of strategic capabilities that were envisioned in the Nuclear Posture Review. DOD does not plan to develop a complete and approved inventory of New Triad-related program elements in its FYDP because DOD officials believe that it is difficult to reach agreement on the program elements to be included in such an inventory. However, an inventory of New Triad-related program elements that provides a more complete and clear identification of the projected spending currently planned for the New Triad could help DOD and Congress make decisions on the affordability of, and spending needed for, programs to develop and acquire New Triad capabilities. While DOD has identified some program elements related to the New Triad in documents and internal reviews, it still has not fully identified projected spending associated with the New Triad. DOD documents related to the New Triad, including the Nuclear Posture Review, the Nuclear Posture Review Implementation Plan, and the Secretary of Defense's fiscal year 2002 Annual Defense Report to the President and the Congress, broadly describe the capabilities of the New Triad and indicate the range and types of activities and weapon systems that provide these capabilities. DOD has also identified and directed resources for some New Triad programs. For example, as the Nuclear Posture Review was being completed in late 2001, DOD issued guidance for preparing its fiscal year 2003 budget that identified 12 initiatives considered key to developing the New Triad, such as programs to provide capabilities to defeat hard and deeply buried targets. In anticipation of a potential requirement to identify New Triad program elements in the FYDP, DOD's Office of Program Analysis and Evaluation conducted an analysis in 2003 that identified a list of 188 FYDP program elements, which accounted for about $186.7 billion in then-year dollars of projected spending for fiscal years 2004 through 2009. The office identified another $17.4 billion for programs and activities that are not readily identifiable in the FYDP, bringing the total to about $204.1 billion. However, Office of Program Analysis and Evaluation officials told us that the analysis included only those program elements that supported the initiatives identified in DOD's programming guidance or that otherwise clearly provide New Triad capabilities. The officials said that the list of programs identified in this analysis was never agreed upon and approved within DOD and that there are no current plans to update the analysis. Office of the Secretary of Defense officials told us that the team conducting the first strategic capability assessment for the New Triad performed a subsequent survey of current program elements in the FYDP to determine the capabilities these program elements would provide for the New Triad by 2012.
An Office of Program Analysis and Evaluation official said that the survey included all of the program elements on their list. However, the official did not know whether the survey identified any additional program elements. In addition to DOD's projected spending in the FYDP for the New Triad, the Department of Energy's National Nuclear Security Administration identified $41.7 billion for nuclear weapons activities for fiscal years 2004 through 2009 in its Future Years Nuclear Security Program prepared for the fiscal year 2005 President's budget submission. This agency is responsible for maintaining the infrastructure to support nuclear weapons capabilities, including the refurbishment and service-life extension of currently deployed nuclear warheads. DOD's analyses of FYDP program elements did not include many of the program elements that make up several capabilities identified for the New Triad in the Nuclear Posture Review, such as special operations and intelligence, or those that provide capabilities that are needed to perform New Triad missions but also have wider military applications. If these additional program elements are considered, the overall projected spending for the New Triad could be much greater than DOD has currently identified in New Triad-related documents and in either of the analyses conducted by its Office of Program Analysis and Evaluation or its strategic capability assessment team. We conducted a notional analysis to identify any additional spending for New Triad-related program elements included in the FYDP. Our notional analysis considered a broader range of FYDP program elements than either of the analyses conducted by DOD's Office of Program Analysis and Evaluation or its strategic capability assessment team and included many elements that provide capabilities for conducting New Triad missions but also have wider military applications, such as communications, intelligence, and special operations program elements. Using available DOD definitions of New Triad capabilities, we reviewed each of the FYDP's 4,725 program elements to determine to what extent the elements provided capabilities needed for New Triad missions. We further distinguished the program elements we identified as being either fully dedicated to the missions of the New Triad or not fully dedicated to the New Triad, because the capabilities provided by the latter program elements could be used in a wider range of military applications than just the New Triad. Compared to the 188 program elements and $204.1 billion in then-year spending for fiscal years 2004 through 2009 identified by the Office of Program Analysis and Evaluation, our notional analysis identified a total of 737 program elements in the FYDP that are aligned with New Triad capabilities, with total associated spending of $360.1 billion over the same period, or about $156.0 billion more than the DOD analysis. Of the 737 program elements that we identified, 385 provide capabilities that would be fully dedicated to New Triad missions, such as program elements for weapons of mass destruction defense technologies and for the Joint Theater Air and Missile Defense Organization. The other 352 program elements we identified provide capabilities, such as special operations, that would be used in conducting New Triad missions but could also be used for other military missions.
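The comparison between DOD's analysis and our notional analysis reduces to simple arithmetic over the figures above. As a check, the short Python fragment below reproduces that comparison using only numbers stated in this report.

    # Reproduce the comparison between the Office of Program Analysis and
    # Evaluation analysis and GAO's notional analysis, using only figures
    # stated in this report (then-year dollars, fiscal years 2004-2009).
    dod_elements, dod_billions = 188, 204.1
    gao_elements, gao_billions = 737, 360.1

    print(f"Additional program elements identified: {gao_elements - dod_elements}")
    print(f"Additional projected spending: ${gao_billions - dod_billions:.1f} billion")

    # The 737 elements split into fully and not fully dedicated groups.
    fully_dedicated, not_fully_dedicated = 385, 352
    assert fully_dedicated + not_fully_dedicated == gao_elements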
Figure 2 shows the number of New Triad program elements identified by the Office of Program Analysis and Evaluation and the number of additional program elements identified in GAO's analysis. Of the $360.1 billion we identified in projected spending for the New Triad, $231.8 billion was for programs that are fully dedicated to the New Triad and $128.3 billion was for programs that are not fully dedicated. As table 2 shows, we broke out the spending into the New Triad's four capability areas (offensive strike; active and passive defenses; responsive infrastructure; and command and control, intelligence, and planning) and created a fifth area for program elements that supported more than one capability area. Our notional analysis shows that projected spending for the offensive strike and the command and control, intelligence, and planning capability areas almost doubles when program elements that are not fully dedicated to the New Triad are included. The offensive strike capability area represents the largest amount of the projected spending, $156.0 billion in then-year dollars, and the command and control, intelligence, and planning capability area is next, with $108.0 billion in projected spending. Together, these two capability areas account for 73 percent, or about $264.0 billion, of the $360.1 billion in total projected spending identified in our analysis. Most of the $86.3 billion of projected spending for the active and passive defenses capability area is in the fully dedicated category. Appendix III provides additional information on the results of our analysis. Officials with Program Analysis and Evaluation; Policy; and Acquisition, Technology, and Logistics in the Office of the Secretary of Defense, as well as officials of the U.S. Strategic Command, stated that the methodology we used for our notional analysis was reasonable. Officials from the Office of Program Analysis and Evaluation and from U.S. Strategic Command told us that the program elements we identified were consistent with the capabilities defined for the New Triad. Officials from the Office of Program Analysis and Evaluation also said that our analysis used a more systematic approach in identifying New Triad-related program elements included in the FYDP than was followed in DOD's analyses. The officials added that when they were compiling their own analysis of New Triad-related program elements, many of the documents that GAO used to identify relevant programs had not yet been published. Therefore, while DOD did not include many program elements that are not fully dedicated to the New Triad in its analyses, the officials told us that it was not unreasonable to include those program elements in our analysis. As our notional analysis shows, including these program elements not only provides greater transparency of the projected spending for the New Triad in the FYDP but also identifies many additional program elements that provide capabilities necessary for carrying out New Triad missions. While the FYDP is a report that provides DOD and Congress with a tool for looking at future funding needs, the current FYDP structure does not readily identify and aggregate New Triad-related program elements to provide information on current and planned resource allocations, including spending changes, priorities, and trends, for the New Triad. In conducting our analysis of FYDP program elements, we observed that DOD has not created any data fields in the FYDP's structure that would expressly identify program elements as being relevant to the New Triad.
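In database terms, the identification that the FYDP lacks would amount to one additional field on each program element record that analysts could filter and sum. The Python sketch below illustrates that mechanism; the records, field names, and dollar values are invented for illustration and do not depict the actual FYDP schema.

    # Sketch of the kind of data field the FYDP currently lacks: a tag
    # marking a program element as New Triad-related (and how dedicated it
    # is), which would let spending be filtered and totaled directly.
    from collections import defaultdict

    program_elements = [
        {"pe": "0101234F", "title": "Example bomber support",
         "new_triad": "fully dedicated", "dollars_b": 3.1},
        {"pe": "0204567N", "title": "Example special operations aviation",
         "new_triad": "not fully dedicated", "dollars_b": 1.8},
        {"pe": "0901111D", "title": "Example headquarters staff",
         "new_triad": None, "dollars_b": 0.6},  # not New Triad-related
    ]

    totals = defaultdict(float)
    for element in program_elements:
        if element["new_triad"]:
            totals[element["new_triad"]] += element["dollars_b"]

    for category, dollars in sorted(totals.items()):
        print(f"{category}: ${dollars:.1f} billion")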
According to DOD Program Analysis and Evaluation officials, there is no plan to modify the data fields in the FYDP structure to allow the ready identification of New Triad program elements and associated spending because they have not received direction to do so. Additionally, these officials told us that if DOD were to modify the FYDP structure to allow such identification, it would need to develop an approved list of existing New Triad program elements to allow capture of these elements in the data fields. Moreover, as we have reported in the past, the FYDP's 11 major force program categories have remained virtually unchanged since the 1960s. Our notional FYDP analysis indicates that the FYDP's definition of the existing major force program for strategic forces, one of the key major force program categories associated with the New Triad, does not fully capture the projected New Triad spending for the broader range of strategic capabilities that are envisioned for the New Triad in the Nuclear Posture Review. We determined that only $55.6 billion, or about 15 percent of the $360.1 billion of projected spending that we identified in our notional analysis of FYDP program elements, is associated with the FYDP's strategic forces major force program category, which largely captures projected spending on offensive nuclear capabilities. The remaining $304.6 billion is dispersed among the other 10 major force programs. For example, program elements for the Joint Air-to-Surface Standoff Missile, an autonomous, stealthy, long-range, conventional, air-to-ground, precision cruise missile designed to destroy high-value, well-defended fixed or movable targets, and for the Patriot missile defense system, which contributes to the defense leg of the New Triad, are included in the FYDP's general-purpose forces major force program. Similarly, intelligence-related program elements for hard and deeply buried targets and to support U.S. Strategic Command are part of the FYDP's command, control, communications, and intelligence major force program. In the past, DOD created new aggregations of program elements and changed the FYDP's structure as decision makers needed information not already captured in the FYDP. For example, a recent aggregation allows data that relate to every dollar, person, and piece of equipment in the FYDP to be identified as being in either a force or infrastructure category. DOD has also made it possible to identify program elements in the FYDP that are related to particular activities, capturing the resources associated with specific areas of interest, such as space activities. In 2001, DOD established a "virtual major force program" for space to increase the visibility of resources allocated for space activities. This is a programming mechanism that aggregates most space-unique funding by military department and function, crosscutting DOD's 11 existing major force program categories. The Commander of the U.S. Strategic Command, who has key responsibilities for implementing the New Triad, told us that creating a virtual major force program for the New Triad could help align New Triad capabilities with the projected spending in the FYDP, identify responsible organizations, reduce ambiguity of the New Triad concept, and provide better visibility and focus for DOD efforts to develop and acquire New Triad capabilities.
The Commander suggested that it could be necessary to create more than one virtual major force program, possibly one for each of the New Triad legs, because of the diversity and scope of New Triad capabilities. Some Office of the Secretary of Defense officials also told us that creating a virtual major force program could provide Congress with more visibility into DOD's efforts underway to develop the capabilities needed for the New Triad. Until a tool such as a virtual major force program is available to capture and categorize the projected spending for the New Triad in the FYDP, we believe that DOD will be limited in its ability to guide and direct all its efforts to develop, acquire, and integrate New Triad capabilities, and Congress will not have full visibility of the resources being allocated. DOD has not established a requirement to develop a complete and approved list of the program elements included in the FYDP that are associated with New Triad spending. Office of the Secretary of Defense officials told us that DOD has not established such a requirement because the diversity and scope of the New Triad make it difficult for DOD officials to reach agreement on a complete list of programs. They also told us that because the New Triad is an ambiguous concept, the program elements included in such a list would change as the New Triad evolves and becomes better defined. However, without a complete and approved DOD list of New Triad program elements included in the FYDP, there is uncertainty about the total range of programs and projected spending being pursued to achieve New Triad capabilities. It also will be difficult for Congress to assess DOD's progress in achieving the goals identified in the Nuclear Posture Review without complete information on the resources being spent, or needed in the future, to meet those goals. Additionally, the broad scope of the New Triad concept and the large number of organizations with New Triad-related spending responsibilities make it even more important to have complete information available on the projected spending for each of the New Triad capability areas and for each of the many organizations developing and acquiring New Triad capabilities. For example, our notional analysis identified as many as 23 defense organizations, including the military services, offices within the Office of the Secretary of Defense, the Joint Staff, several combatant commands, and defense agencies, with FYDP spending related to the New Triad. Office of the Secretary of Defense officials told us that having an approved program list would promote a common understanding of the New Triad and benefit future department program reviews. Additionally, an Office of Program Analysis and Evaluation official told us that an approved program list would aid DOD in making resource decisions for the New Triad. For example, the official told us that in preparing DOD's fiscal year 2006 budget, an approved list of programs would have made it easier to evaluate the effects of programming changes proposed by the military services on capabilities being acquired for the New Triad. While several New Triad documents and DOD's recent strategic capability assessment identify investment needs through 2012, DOD's near-term investment direction is incomplete. Additionally, DOD has not yet developed an overarching and integrated long-term investment approach to identify and plan investments needed to acquire and sustain capabilities for the New Triad.
A long-term investment approach is an important tool in an organization's decision-making process to define direction, establish priorities, assist with current and future budgets, and plan the actions needed to achieve goals. Although DOD recognizes the need for a long-term investment approach, it does not plan to develop one until nonnuclear strike and missile defense concepts are mature. DOD has not identified a specific date for when this will occur. The new JCIDS process could complement any long-term investment approach developed for the New Triad by providing additional analysis and discussions to support New Triad investment and the development of a plan. In our past reporting on leading capital decision-making practices, we have determined that leading organizations have decision-making processes in place to help them assess where they should invest their resources for the greatest benefit over the long term. These processes help an organization determine whether its investments are the most cost effective, support its goals, and consider alternatives before making a final selection. Our analysis of several investment plans showed that a long-term investment approach includes information on future investment requirements, projected resources, investment priorities and trade-offs, milestones, and funding timelines, and that it is intended to be a dynamic document, updated to adapt to changing circumstances. In the past, DOD has developed and maintained long-term investment planning documents for major defense capabilities, such as the Unmanned Aerial Vehicles Roadmap 2002-2027 and the "Bomber Roadmap," to provide senior decision makers with options in the development of broad strategies that will define future DOD force structure and help with the resource allocation process. In 2003, DOD also published an Information Operations Roadmap, which supports collaboration across broad information operations efforts and endorses the need for the department to better track information operations investments. As noted earlier, Office of the Secretary of Defense officials told us that New Triad documents, including the Nuclear Posture Review, the Nuclear Posture Review Implementation Plan, and the first strategic capability assessment, identify some of the near-term investments needed to provide capabilities for the New Triad. However, this investment direction is incomplete and does not address the long-term affordability challenges that DOD may face in sustaining and developing the capabilities needed to implement the New Triad. Office of the Secretary of Defense officials told us that the strategic capability assessment provides a near-term investment approach by identifying priorities for focusing resources to keep investment efforts on track to reach New Triad implementation goals for 2012. According to the officials, the team conducting the strategic capability assessment developed a list of capabilities that were needed in key areas, such as strategic strike and missile defense, from the Nuclear Posture Review's vision of the New Triad. The team then reviewed current operational activities, acquisition programs of record, and a potential range of new technologies to determine any capability shortcomings.
Based on this review, the assessment team was able to determine whether initiatives to develop New Triad capabilities in the key areas were (1) met or on track to be satisfied by 2012; (2) on track, but would not be met by 2012; or (3) not on track to be met by 2012 unless additional funding was provided. Office of the Secretary of Defense officials told us that by determining the status of meeting capabilities in each of the key areas, DOD would be able to better prioritize future investment decisions for the New Triad. However, officials in the Office of the Under Secretary of Defense for Policy acknowledged that the first strategic capability assessment provides only a limited, near-term investment approach for the New Triad. These officials told us that the assessment did not review and assess some key capabilities of the New Triad, such as cruise missile defense, information operations, and passive defense, and may not have fully surveyed existing capabilities in the areas that were included in the assessment. Further, it does not address the potential for further investments to replace one or more existing nuclear platforms that will approach the end of their useful lives. These officials told us that they expect future strategic capability assessments to include New Triad key areas not reviewed in the first assessment. Additionally, Office of the Secretary of Defense officials told us that while the assessment's recommendations are not binding on DOD programming and budgeting decisions, the assessment was used during the department's last program review in developing the fiscal year 2006 defense budget. DOD, in its 2003 Nuclear Posture Review Implementation Plan, called for the creation of an overarching strategic planning document for the New Triad that would establish the strategies and plans for developing new strategic capabilities to meet the national security goals stated in the Nuclear Posture Review. The plan also was to provide broad guidance for integrating the elements of the New Triad as new capabilities came on line and for the development of future forces, supporting systems, planning, and the creation of a responsive infrastructure. However, officials in the Office of the Under Secretary of Defense for Policy told us that while a draft plan was prepared, they decided not to circulate the draft for comments because they believed the results of the first strategic capability assessment would require too many changes to the plan. Instead, the officials told us that the strategic capability assessment process would develop the strategy, plans, and guidance that were to be provided by the plan. In its Nuclear Posture Review Implementation Plan, DOD states a need for a long-term investment strategy for the New Triad and, according to the plan, intends to conduct a study to evaluate options for preparing an integrated, long-term investment strategy for strike capabilities, defensive capabilities, and infrastructure when nonnuclear strike and missile defense concepts are mature. Policy and Acquisition, Technology, and Logistics officials in the Office of the Secretary of Defense told us that there are several concepts related to New Triad capabilities being developed, including the Strategic Deterrence Joint Operating Concept and concept and operational plans for global strike and integrated ballistic missile defense.
The officials told us that once nonnuclear strike and missile defense concepts are developed, specific programs could be better identified to implement these concepts, including new programs to develop capabilities that do not currently exist. These officials told us that they recognize the importance of a long-term investment approach for the New Triad to provide a basis for decisions on resources for future capabilities initiatives. However, they do not believe the development of the nonnuclear strike and missile defense concepts is far enough along to begin the study leading to development of a long-term investment strategy. These officials did not provide us with an estimate for when these concepts would be considered sufficiently mature to begin the study. While we agree that some concepts are continuing to evolve and that new systems are still under development, we do not believe that these circumstances preclude DOD from beginning to plan for the future of the New Triad. For example, although DOD is still developing concepts for missile defense, it is planning to spend billions of dollars over the next several years to develop a range of missile defense capabilities. As new information becomes available, we would expect to see adjustments in DOD's plans; that is the nature of long-term planning. Further, without the context of a long-term investment approach for acquiring new capabilities and replacing some or all of its aging systems that provide New Triad capabilities, DOD will continue to invest billions of dollars in capabilities that will affect the long-term composition of the New Triad. DOD is likely to face significant affordability challenges in the long term as some existing nuclear weapons platforms begin reaching the end of their expected service lives within the next 15 years and as missile defense capabilities expand. Given the length of time needed to develop and acquire capabilities for the New Triad and the need to consider long-term affordability issues, DOD is also at risk of not considering the best approaches to developing and sustaining the capabilities needed to provide the broad range of military options for the President and Secretary of Defense that are envisioned for the New Triad. DOD is further at risk of not effectively integrating the wide range of diverse New Triad capabilities as they are developed and of not being able to effectively determine future investment costs and the priorities and trade-offs needed to sustain New Triad implementation. In our February 2005 report addressing the challenges that the nation faces from its growing fiscal imbalance in the 21st century, we stated that DOD's current approach to planning and budgeting often results in a mismatch between programs and budgets and that DOD does not always fully consider long-term resource implications and the opportunity cost of selecting one alternative over another. The new JCIDS process could play a role in any long-term investment approach that is eventually prepared for the New Triad by providing a forum for additional analyses and assessments to support New Triad investment decisions and ensure that those decisions are in concert with DOD's overall investment priorities. The JCIDS process is intended to provide a means to ensure that new capabilities are conceived and developed in a joint warfighting context.
The process is intended to (1) focus on achieving joint operational capabilities rather than on individual weapon systems and (2) provide a systematic means to identify capability gaps, propose solutions, and establish roadmaps for future investments to acquire needed capabilities. Capability assessments developed through the process are designed to have a long-term focus, consider a wide range of potential materiel and nonmateriel solutions across the military services, analyze trade-offs among different solutions, and identify areas where existing capabilities are redundant or excessive. The proposed solutions resulting from the process are intended to be integrated and prioritized and eventually incorporated into resource roadmaps that show the investment strategies to develop and acquire the needed capabilities. JCIDS is also intended to involve the combatant commanders early in the decision-making process to provide a strong warfighter perspective in identifying capabilities and resource priorities. The U.S. Strategic Command has created mission capabilities teams within its Capability and Resource Integration Directorate that closely align its missions with the JCIDS process to strengthen its ability to advocate more effectively for the capabilities needed to perform its missions. The Commander of the U.S. Strategic Command told us that his intent is for these teams to play an active role in identifying and developing New Triad capabilities. New Triad capabilities span most of the functional areas established in the JCIDS process, including command and control and force application. Officials in the Joint Staff's Office of Requirements Assessment told us that the JCIDS process does not currently identify and track joint warfighting capabilities as capabilities for the New Triad, and Office of the Secretary of Defense officials told us that there are no efforts at this time to crosswalk the JCIDS joint warfighting capabilities with the New Triad. However, Joint Staff officials said that organizations with New Triad responsibilities, such as the U.S. Strategic Command, do participate in the working groups and other activities throughout the JCIDS process to ensure that their equities are addressed. The JCIDS process could provide benefits to defense planning, but because the process is still very early in its development, it is unclear whether or how DOD plans to use JCIDS to address its New Triad investments. It is important for DOD and congressional decision makers to have the most complete accounting possible of the projected spending planned for the New Triad over the next several years as they deliberate the budget. Until DOD reaches agreement on the program elements that comprise New Triad spending in its FYDP and creates a way to aggregate that spending, neither defense officials nor Congress will have visibility over all of the projected spending planned in the near term for the New Triad. Importantly, the Commander of the U.S. Strategic Command, who has been assigned significant responsibilities for coordinating and integrating New Triad capabilities from a warfighter perspective, will not have the resource visibility needed to effectively carry out this new role. This information is needed to accurately assess the affordability of the various activities and weapon systems that make up the New Triad and to make timely and informed decisions on the funding required to develop, acquire, and integrate the wide range of diverse New Triad capabilities.
Moreover, without an overarching and integrated long-term investment approach for the New Triad, information on affordability challenges, future funding priorities, and requirements beyond the current FYDP is not fully known. While DOD believes it is still too early to develop a long-term investment approach, further delaying the start of this effort puts the department at risk of not developing and acquiring capabilities for the New Triad when needed. As a result, the President and Secretary of Defense cannot be assured that DOD has the broad range of military options envisioned in the New Triad. Although New Triad concepts are continuing to evolve and mature, laying the foundation now for a long-term investment approach would provide DOD with an additional planning tool for future development of the New Triad concept, a tool that could be continuously improved and updated as better information becomes available and as changing security and fiscal circumstances warrant. The need for such an approach becomes increasingly important as existing nuclear platforms begin approaching the end of their useful lives and decisions to replace one or more of the platforms are required. Additionally, without such an approach, decision makers lack information on projected costs, spending priorities and trade-offs, resource requirements, and funding timelines in making decisions on the spending commitments needed to sustain New Triad implementation. Further, without a long-term investment approach, the large number of New Triad stakeholders, such as the military services, defense agencies, and combatant commands, will lack the direction and focus they need to effectively prepare future funding plans to develop, acquire, and integrate the capabilities. Lastly, while the new JCIDS process is intended to provide a better approach to identifying solutions to capability shortcomings and strengthen the role of combatant commanders in making decisions on capability investments, it is not yet clear how the process will be used to specifically support investment decisions for the New Triad. To strengthen DOD's implementation of the New Triad and provide greater transparency of the resources being applied to developing, acquiring, and sustaining the needed capabilities, we recommend that the Secretary of Defense take the following four actions: Direct the Director, Office of Program Analysis and Evaluation, in consultation with the Under Secretary of Defense (Comptroller), to (1) develop and obtain approval of a comprehensive list of program elements in the FYDP that support activities for developing, acquiring, and sustaining New Triad capabilities; (2) modify the FYDP to establish a virtual major force program for the New Triad by creating new data fields that would clearly identify and allow aggregation of New Triad-related program elements to provide increased visibility of the resources allocated for New Triad activities; and (3) report each year the funding levels for New Triad activities and capabilities in the department's summary FYDP report to Congress. The Secretary of Defense should direct that these three actions be completed at or about the time when the President's budget for fiscal year 2007 is submitted to Congress.
Direct the Under Secretaries of Defense for Policy and for Acquisition, Technology, and Logistics to develop an overarching and integrated long-term investment approach for the New Triad that provides decision makers with information about future joint requirements, projected resources, spending priorities and trade-offs, milestones, and funding timelines. As part of developing and implementing this approach, DOD should leverage the analyses, assessments, and other information prepared under the Joint Capabilities Integration and Development System process. The Secretary of Defense should direct that development of a long-term investment approach be completed in time for it to be considered in the department's preparation of its submission for the President's budget for fiscal years 2008 and 2009 and be updated, as needed, to adapt to changing circumstances. On April 28, 2005, we provided a draft of this report to DOD for review and comment. As of the time this report went to final printing, DOD had not provided comments as requested. However, DOD did provide technical changes, which have been incorporated in this report as appropriate. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Commander, U.S. Strategic Command; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (202) 512-4402. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report are listed in appendix IV. U.S. Strategic Command has a significant role in implementing the New Triad, advocating for the development of New Triad capabilities, and supporting its missions. It derives these responsibilities from missions assigned by the President and the Secretary of Defense. Table 3 describes U.S. Strategic Command's current missions. To determine the extent to which the Department of Defense (DOD) has fully identified projected spending for the New Triad in its Future Years Defense Program (FYDP), we reviewed key DOD documentation to identify and define the New Triad's capabilities and determine whether DOD had identified specific, related programs in the FYDP. Specifically, we obtained and reviewed relevant documents on the New Triad, including the 2001 Nuclear Posture Review, the Nuclear Posture Review Implementation Plan, the Secretary of Defense's fiscal year 2002 Annual Defense Report, the Defense Science Board's February 2004 report, Future Strategic Strike Forces, briefings by DOD officials, and relevant programming guidance. We also obtained the results of an analysis performed by the Office of Program Analysis and Evaluation that identified New Triad spending in the FYDP, and we discussed the purpose, scope, methodology, and limitations of the analysis with officials from this office. In addition, we interviewed officials from the Office of the Secretary of Defense, including officials from the Office of the Deputy Assistant Secretary of Defense for Forces Policy, the Office of Strategic and Space Programs in the Office of Program Analysis and Evaluation, the Office of the Deputy Assistant to the Secretary of Defense for Nuclear Matters, and the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics.
We also interviewed officials from the Joint Staff, U.S. Air Force headquarters, U.S. Marine Corps headquarters, and the Department of Energy's National Nuclear Security Administration to gain an understanding of their roles in implementing the New Triad. We met with officials of the U.S. Strategic Command in Omaha, Nebraska, to discuss the command's missions that are relevant to the New Triad. As part of our effort to determine the extent to which DOD has identified the projected spending for the New Triad in its FYDP, we performed our own notional analysis of the FYDP to identify resources associated with the New Triad. In doing so, we examined the FYDP's structure and related documentation to determine whether the FYDP was designed to capture information that would identify specific program elements as being related to the New Triad. We met with relevant DOD officials to discuss our approach, and we reviewed the analysis performed by the Office of Program Analysis and Evaluation. We also reviewed prior GAO work to gain a better understanding of whether the FYDP has been modified to allow for new program element aggregations. In performing our analysis, we assessed the reliability of the FYDP data by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, (3) interviewing a knowledgeable DOD official about the data, and (4) reviewing data reliability tests on these data previously performed by GAO. We determined that the data were sufficiently reliable for the purposes of this report. Additional details about how we performed our notional analysis are presented in appendix III. To determine the extent to which DOD has developed a long-term investment approach to identify and manage future investments needed to achieve the synergistic capabilities envisioned for the New Triad, we interviewed officials and reviewed key documentation to determine whether DOD has taken steps to develop and follow such an approach. Specifically, to identify best practices for a long-term investment approach, we reviewed relevant GAO reports and identified and reviewed the investment approaches of other organizations. We then compared DOD's approach for the New Triad with the elements we had identified in other organizations' approaches to determine the extent to which DOD had these elements in place. In addition, we obtained and reviewed relevant documents, including the 2001 Nuclear Posture Review, the Nuclear Posture Review Implementation Plan, the Secretary of Defense's fiscal year 2002 Annual Defense Report, the Defense Science Board's February 2004 report, Future Strategic Strike Forces, briefings provided by DOD officials, and relevant programming guidance to identify investments and investment priorities in building New Triad capabilities. We also met with officials from the Joint Staff's Directorate for Force Structure, Resources, and Assessments to discuss the development and implementation of the department's new Joint Capabilities Integration and Development System and to determine whether the New Triad's plans for achieving desired capabilities were aligned with this new system.
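The electronic testing of required data elements mentioned above (step 1 of our reliability assessment) can be pictured as simple rule checks run against every record. The Python sketch below shows the style of test involved; the records, field names, and rules are hypothetical and are not GAO's actual test scripts.

    # Illustrative data reliability checks: confirm that each record
    # carries the required fields and that values fall in plausible
    # ranges. Records, field names, and rules here are hypothetical.
    REQUIRED_FIELDS = {"program_element", "fiscal_year", "dollars_m"}

    records = [
        {"program_element": "0101234F", "fiscal_year": 2005, "dollars_m": 150.0},
        {"program_element": "0204567N", "fiscal_year": 2007, "dollars_m": -3.0},
        {"program_element": "0303456A", "fiscal_year": 2008},
    ]

    for i, record in enumerate(records):
        missing = REQUIRED_FIELDS - record.keys()
        if missing:
            print(f"Record {i}: missing required fields {sorted(missing)}")
            continue
        if not 2004 <= record["fiscal_year"] <= 2009:
            print(f"Record {i}: fiscal year outside the FYDP window")
        if record["dollars_m"] < 0:
            print(f"Record {i}: negative dollar value")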
Additionally, we interviewed officials from the Office of the Secretary of Defense, including officials from the Office of the Deputy Assistant Secretary of Defense for Forces Policy, the Office of Strategic and Space Programs in the Office of Program Analysis and Evaluation, the Office of the Deputy Assistant to the Secretary of Defense for Nuclear Matters, and the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. We also interviewed officials from the Joint Staff, U.S. Air Force headquarters, U.S. Marine Corps headquarters, and the Department of Energy's National Nuclear Security Administration to gain their perspectives. In addition, we visited the headquarters of the U.S. Strategic Command in Omaha, Nebraska, and met with command officials to discuss investments needed to acquire capabilities and implement the command's missions. Our review was conducted between December 2003 and April 2005 in accordance with generally accepted government auditing standards. To determine how much the Department of Defense (DOD) plans to spend on the New Triad, we performed a notional analysis of the Future Years Defense Program (FYDP) to identify programs and projected spending associated with New Triad capabilities. This analysis identifies 737 program elements that are either "fully dedicated" or "not fully dedicated" to the New Triad. "Fully dedicated" program elements provide capabilities that primarily execute or support New Triad missions, whereas "not fully dedicated" program elements provide capabilities that have wider military application than just the New Triad. Our notional analysis is based on certain assumptions, which we considered to be relevant and reasonable, about how to align New Triad capabilities to FYDP program elements. For example, we assume that (1) all program elements in the FYDP that are not defined as "historical" are currently active and valid for analysis, even though there may not be any spending currently associated with them over the fiscal years 2004 through 2009 time frame, and (2) certain FYDP field values, or combinations of values, can be used to identify groups of program elements as being related to the New Triad; for example, certain combinations of Force and Infrastructure Codes and Defense Mission Codes can be used to identify particular New Triad capabilities. To ensure that our assumptions were reasonable, we discussed our overall approach with budget experts at GAO and the Congressional Budget Office and with DOD officials. Generally, these officials agreed with our approach to identifying the projected spending associated with the New Triad included in the FYDP. However, DOD officials cautioned that identifying program elements that are not fully dedicated to the New Triad can be difficult because of the subjectivity required in deciding the extent to which a program element provides capabilities for the New Triad. Therefore, our notional analysis suggests a methodology that can be used to conduct a comprehensive accounting of the spending plans for the New Triad; it is not meant to provide a definitive accounting of projected New Triad spending. We recognize that the assumptions we made are subjective and that other analyses to identify projected spending on New Triad capabilities in the FYDP may use different assumptions and obtain somewhat different results.
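The second assumption, that combinations of FYDP field values can flag New Triad-related program elements, amounts in practice to rule-based filtering. The sketch below illustrates such a filter in Python; the code values and capability mappings are invented for illustration and do not reproduce the actual Force and Infrastructure Codes or Defense Mission Codes we used.

    # Illustrative rule-based filter: align a program element with a New
    # Triad capability when its combination of Force and Infrastructure
    # Code (FIC) and Defense Mission Code (DMC) matches a rule. All code
    # values and rules below are invented.
    CAPABILITY_RULES = {
        ("FORCE", "STRAT_OFFENSE"): "offensive strike",
        ("FORCE", "MISSILE_DEF"): "active and passive defenses",
        ("INFRA", "INDUSTRIAL"): "responsive infrastructure",
    }

    program_elements = [
        {"title": "Example bomber squadron", "fic": "FORCE", "dmc": "STRAT_OFFENSE"},
        {"title": "Example interceptor site", "fic": "FORCE", "dmc": "MISSILE_DEF"},
        {"title": "Example medical clinic", "fic": "INFRA", "dmc": "MEDICAL"},
    ]

    for element in program_elements:
        capability = CAPABILITY_RULES.get((element["fic"], element["dmc"]))
        if capability:
            print(f"{element['title']}: aligned with {capability}")
        else:
            print(f"{element['title']}: not aligned with a New Triad capability")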
To identify DOD's definitions of the four New Triad capabilities (offensive strike; active and passive defenses; responsive infrastructure; and command and control, intelligence, and planning), we used relevant DOD documentation, such as the 2001 Nuclear Posture Review, the Nuclear Posture Review Implementation Plan, the Secretary of Defense's fiscal year 2002 Annual Defense Report, and the Defense Science Board's February 2004 report, Future Strategic Strike Forces. We compared these capability definitions with information about each of the 4,725 FYDP program elements we reviewed. When we determined that a program element was related to one or more of the New Triad's capabilities, we categorized it according to the particular capability that it supported. We then determined whether the program elements that we identified were either fully dedicated or not fully dedicated to the New Triad. In making this determination, we assumed that all of the program elements identified in the Office of Program Analysis and Evaluation analysis were fully dedicated to the New Triad. Table 4 summarizes the criteria we used to identify and categorize program elements that are linked to the New Triad. We then used the FYDP data to identify the projected spending associated with these program elements for fiscal years 2004 through 2009 and expressed our results in then-year dollars. The data for the projected spending are current as of the President's budget submission to Congress for fiscal year 2005. The FYDP's strategic forces major force program, one of 11 major force programs in the FYDP, includes $55.6 billion in then-year dollars for the New Triad for fiscal years 2004 through 2009, or 15 percent of the $360.1 billion of total spending that we identified. The offensive forces and weapons systems in this program are primarily nuclear-focused. As indicated in table 5, the remaining $304.6 billion, or 85 percent of the projected spending that we identified, is dispersed among 7 of the remaining 10 major force programs in the FYDP. The command, control, communications, and intelligence program accounted for the largest share of New Triad-related spending: $133.5 billion, or 37 percent of the projected spending that we identified. We did not identify any projected spending on the New Triad in the major force programs for central supply and maintenance; training, medical, and other general personnel activities; and support of other nations. We analyzed the $360.1 billion of projected spending associated with the New Triad based on primary appropriation category, as illustrated in figure 3. We determined that the largest amount of projected spending is for research, development, test, and evaluation funding, which accounts for $141.8 billion, or 39 percent, of the $360.1 billion in projected spending that we identified. We identified $111.0 billion in projected spending for operation and maintenance appropriations, or 31 percent of the total spending that we identified. Defensewide programs, including programs managed by the Missile Defense Agency, the Office of the Secretary of Defense, and intelligence-related defense agencies such as the Defense Intelligence Agency, account for 50 percent of the $360.1 billion of projected spending that we identified as being associated with the New Triad. Spending for Missile Defense Agency-related program elements totals $53.1 billion during fiscal years 2004 through 2009 and is greater than the spending we identified for either the Department of the Army or the Department of the Navy.
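The percentage shares cited in this appendix follow directly from the dollar totals. The short Python calculation below reproduces them, using only figures stated above.

    # Reproduce the percentage shares cited in this appendix from the
    # then-year dollar totals (fiscal years 2004 through 2009).
    total_b = 360.1  # total New Triad-related spending identified, $ billions

    shares = {
        "strategic forces major force program": 55.6,
        "command, control, communications, and intelligence program": 133.5,
        "research, development, test, and evaluation appropriations": 141.8,
        "operation and maintenance appropriations": 111.0,
    }

    for label, dollars_b in shares.items():
        print(f"{label}: ${dollars_b:.1f} billion = {100 * dollars_b / total_b:.0f} percent")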
As shown in figure 4, among the military departments, the Air Force accounts for the largest share of New Triad spending: $112.9 billion, or 31 percent of the $360.1 billion that we identified for fiscal years 2004 through 2009. Spending by the Air Force, Army, and Navy includes service support for defense agencies and combatant commands, such as the U.S. Strategic Command. In addition to the individual named above, Gwendolyn R. Jaffe, Mark J. Wielgoszynski, David G. Hubbell, Kevin L. O'Neill, Julie M. Tremper, and Renee S. McElveen made key contributions to this report.
Over 150 million U.S. citizens are connected to the Internet. According to the FBI, the number of people with access to the Internet increased 182 percent between 2000 and 2005. In 2006, total nontravel-related spending on the Internet was estimated by a private sector entity to be $102 billion, a 24 percent increase over 2005. While the benefits of interconnectivity have been enormous, it has also provided new horizons and techniques for crime. Cybercrime refers to criminal activities that specifically target a computer or network for damage or infiltration. For example, it can be a crime to access ("hack into") a computer without authorization or to distribute viruses. Cybercrime also includes the use of computers as tools to conduct criminal activity such as fraud, identity theft, and copyright infringement. Computers significantly multiply the criminal's power and reach in committing such crimes. Figure 1 describes and compares cybercrime and traditional criminal techniques. Cybercrime techniques have characteristics that can vastly enhance the reach and impact of criminal activity, such as the following: Criminals do not need to be physically close to their victims to commit a crime. Technology allows criminal actions to easily cross multiple state and national borders. Cybercrime can be carried out automatically, at high speed, and by attacking a vast number of victims at the same time. Cybercriminals can more easily remain anonymous. To help facilitate cybercrimes, criminals use several techniques listed in table 1. Companies that process large volumes of Internet traffic, such as Postini, Symantec, and IBM, analyze their traffic for patterns and trends and have found that the cybercrime techniques in table 1 are prevalent. Table 2 shows reported volumes of cybercrime techniques. Efforts to address cybercrime follow the same basic process as efforts to address traditional crime. As figure 2 shows, this basic process is one of protection, detection, investigation, and prosecution. To protect networks and information against cybercrime, organizations and individuals implement cybersecurity techniques such as access controls (passwords) and firewalls. In addition, they use monitoring devices or intrusion detection systems to detect incidents that could potentially be criminal intrusions. As figure 2 shows, monitoring unusual activity allows organizations and individuals to make adjustments to improve protection. When a suspected cybercrime is detected, organizations and individuals must decide what action to pursue. Depending on the severity of the incident, the level of evidence, and their comfort with revealing the incident, they may or may not report it to law enforcement. Generally, investigations begin once an incident is reported to law enforcement. During the preliminary investigation, federal, state, or local law enforcement, along with their respective prosecutors, determine if a crime occurred and if a further investigation is warranted. Also, in some cases, private sector and academic analysts may provide expertise. Among the factors weighed by law enforcement authorities in determining whether to conduct an investigation are whether their agency has jurisdiction over the crime, the number and location of the victims, the expected location of the criminal, the amount of loss, and the agency's investigative priorities and available resources.
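The detection step described above often begins with automated checks of system logs for unusual patterns. As a minimal illustration, the Python sketch below flags repeated failed logins from a single address; the log format and the threshold of five failures are invented for illustration.

    # Minimal intrusion detection heuristic: flag source addresses that
    # generate repeated failed logins. The log format and the threshold
    # are invented for illustration.
    from collections import Counter

    log_lines = [
        "2007-06-01 10:01:02 FAILED_LOGIN 192.0.2.10 user=admin",
        "2007-06-01 10:01:05 FAILED_LOGIN 192.0.2.10 user=admin",
        "2007-06-01 10:01:09 FAILED_LOGIN 192.0.2.10 user=root",
        "2007-06-01 10:01:12 FAILED_LOGIN 192.0.2.10 user=guest",
        "2007-06-01 10:01:15 FAILED_LOGIN 192.0.2.10 user=admin",
        "2007-06-01 10:02:00 LOGIN_OK 198.51.100.7 user=alice",
    ]

    failures = Counter(
        line.split()[3] for line in log_lines if "FAILED_LOGIN" in line
    )

    THRESHOLD = 5
    for address, count in failures.items():
        if count >= THRESHOLD:
            print(f"Possible intrusion attempt: {count} failed logins from {address}")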
If it is determined that an investigation will not be pursued, law enforcement may provide advice to victims that may be used to improve their protective measures. When a criminal investigation is pursued, law enforcement investigators have the initial responsibility for leading the evidence-gathering effort and working with cyberforensic investigators and examiners who have the technical expertise to analyze the evidence. In cases where evidence is not voluntarily provided, law enforcement can use various subpoena authorities to obtain information needed to perform the investigation. A key component of cybercrime investigations is the gathering and examination of electronic evidence that can be useful for prosecution. Using cyberforensic tools and techniques, cybercrime investigators and examiners gather and analyze electronic evidence. If available, cyberforensic laboratories may be used to extract the electronic evidence and present it in a court-admissible format. The evidence could entail analysis of terabytes of information on multiple electronic devices, the electronic path taken by a fraudulent e-mail, pornographic images stored on a hard drive, or data stored on a mutilated but later reconstructed CD-ROM. The ability to gather electronic evidence, and the assurance that cyberforensic procedures do not compromise the evidence gathered, can be key to building a case and prosecuting cybercriminals. Cybercrime investigations and evidence gathering can also be conducted while a crime is ongoing. In such cases, investigators may use sophisticated techniques, including court-ordered wiretaps, to investigate the criminal activity. In determining whether and how to gather evidence of information transmitted electronically, law enforcement may make an application to a court for a wiretap pursuant to the Wiretap Act. To obtain such orders, the application to the court must describe, among other things, the criminal activity and the identity of those involved, if known. If sufficient evidence is gathered, it can lead to a prosecution. Federal and state prosecutors determine if a prosecution will be pursued based on a number of factors, including jurisdiction over the crime, the type and seriousness of the offense, the sufficiency of the evidence, their prosecutorial priorities, and the location and number of the victims. Prosecuting attorneys will also consider the dollar loss and the number of incidents. Some federal prosecuting attorneys may not pursue cybercrime cases because the cases do not meet the minimum thresholds established for their districts. Thresholds are established by prosecuting attorneys to focus their limited resources appropriately on the most serious crimes that match their district's priorities. For example, if fraud has been committed through the use of a computer, the amount of the dollar loss may need to reach a specific threshold amount for the U.S. Attorney to accept the case. When the U.S. Attorney does not accept a case for prosecution because it does not meet such a threshold, state authorities may decide to accept the case for prosecution. In addition to criminal remedies, civil remedies are available to address cybercrime activity. The burden of proof in a civil case is not as high as in a criminal case. At the federal level, the FTC investigates activities that could be classified as cybercrime as part of its consumer protection mission and seeks civil injunctions and monetary remedies.
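One routine safeguard behind the evidence-integrity assurance described above is cryptographic hashing: an examiner computes a digest of the evidence image when it is acquired and recomputes it before analysis to show the copy has not been altered. The Python sketch below illustrates the check; the file name is hypothetical.

    # Illustrative evidence-integrity check: hash an evidence image at
    # acquisition, then re-hash it before analysis; matching digests
    # indicate the copy has not been altered. The file name is hypothetical.
    import hashlib

    def sha256_of_file(path, chunk_size=1024 * 1024):
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                digest.update(chunk)
        return digest.hexdigest()

    acquisition_hash = sha256_of_file("evidence_image.dd")   # recorded at seizure
    verification_hash = sha256_of_file("evidence_image.dd")  # recomputed before analysis

    if acquisition_hash == verification_hash:
        print("Digests match: evidence copy is intact.")
    else:
        print("Digests differ: evidence copy may have been altered.")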
In addition, many states have civil statutes that may be applied to cybercrime situations. In the State of Washington, for example, the Attorney General can apply the state’s consumer protection statute to cases of cyber-facilitated fraud. Pursuing the case in civil court, the state’s Attorney General can seek civil remedies such as the repayment of losses or penalties for wrongdoing or fraud, which could potentially deter future criminal attempts. Federal and state governments and other nations have enacted laws that apply to cybercrime and define the legal recourse or remedies available. In addition, there are international agreements to improve the laws across nations and international cooperation on addressing cybercrime. Some federal statutes address specific types of cybercrime, while other federal statutes address both traditional crime and cybercrime. Table 3 describes key federal laws used to investigate and prosecute cybercrime activity. Members of Congress have proposed new federal legislation to augment current cybercrime statutes. For example, in February 2007, the Internet Stopping Adults Facilitating the Exploitation of Today’s Youth (SAFETY) Act was introduced in the House Judiciary Committee as an anticybercrime bill. Among its various provisions addressing the exploitation of children, the SAFETY Act provides for the promulgation of regulations that would require Internet service providers to retain data such as a subscriber’s name and address, user identification, or telephone number to facilitate law enforcement investigations. Also in February 2007, the Securing Adolescents From Exploitation-Online (SAFE) Act of 2007 was introduced in the Senate Committee on the Judiciary. The SAFE Act would include explicit requirements for Internet service providers to report suspected child pornography violations. The House of Representatives passed the Securely Protect Yourself Against Cyber Trespass Act in June 2007. This bill, if signed into law, would prohibit the use of spyware that could take control of a computer or collect user information without permission. The bill would authorize stiff civil penalties against violators. State and local governments have been enacting laws to serve law enforcement efforts in their individual jurisdictions and to enhance cybercrime prevention, investigation, and prosecution efforts. States have also enacted laws against particular types of cybercrime, including laws addressing spamming and spyware. For example, Virginia’s Anti-Spam Act outlaws the use of fraudulent means, such as using a false originating address, to send spam. Further, aggravating factors (such as sending 10,000 spam messages in a 24-hour period or generating more than $1,000 in revenue from a specific spam message) make the crime punishable as a felony under Virginia law. Also, California’s Consumer Protection Against Computer Spyware Act makes it illegal for anyone to install software on someone else’s computer and use it to deceptively modify settings, including a user’s home page, default search page, or bookmarks. It also outlaws the collection, through intentionally deceptive means, of personally identifiable information through keystroke-logging, tracking Web site visits, or extraction of such information from a user’s hard drive. California has also enacted legislation requiring security measures and warnings for wireless network devices. In addition, Westchester County, New York, has taken action to improve the security of wireless networks.
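Virginia’s volume-based aggravating factor lends itself to a simple illustration. The sketch below is an assumption-laden toy rather than any actual mail filter: it counts each sender’s messages in a sliding 24-hour window and flags senders who reach the statute’s 10,000-message threshold.

```python
# Sliding-window count of messages per sender over a 24-hour period.
from collections import defaultdict, deque

WINDOW_SECONDS = 24 * 60 * 60
FELONY_THRESHOLD = 10_000  # Virginia's aggravating-factor volume

def senders_over_threshold(events):
    """events: iterable of (unix_timestamp, sender), sorted by time."""
    windows = defaultdict(deque)
    flagged = set()
    for ts, sender in events:
        window = windows[sender]
        window.append(ts)
        # Drop timestamps that fell out of the 24-hour window.
        while window and ts - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= FELONY_THRESHOLD:
            flagged.add(sender)
    return flagged

# Hypothetical burst: 10,000 messages sent within a few hours.
events = [(t, "bulk@example.net") for t in range(10_000)]
print(senders_over_threshold(events))  # {'bulk@example.net'}
```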
Its wireless security law requires that commercial businesses secure their wireless networks or face fines. The law also requires businesses providing wireless Internet access to put up signs advising users of the security risks. Westchester County’s enforcement efforts have brought fines against businesses exposing sensitive data over wireless networks. Cybercrime laws vary across the international community. Australia enacted its Cybercrime Act of 2001 to address this type of crime in a manner similar to the U.S. Computer Fraud and Abuse Act, discussed above. In addition, Japan enacted the Unauthorized Computer Access Law of 1999 to cover certain basic areas similar to those addressed by U.S. federal cybercrime legislation. Countries such as Nigeria with minimal or less sophisticated cybercrime laws have been noted sources of Internet fraud and other cybercrime. In response, they have looked to the examples set by industrialized nations to create or enhance their cybercrime legal frameworks. A proposed cybercrime bill, the Computer Security and Critical Information Infrastructure Protection Bill, is currently before Nigeria’s National Assembly for consideration. The bill, if adopted, would mirror similar cybercrime legislation in industrialized nations like the United States, the United Kingdom, Australia, South Africa, and Canada. Because political or natural boundaries are not an obstacle to conducting cybercrime, international agreements are essential to fighting cybercrime. For example, on November 23, 2001, the United States and 29 other countries signed the Council of Europe’s Convention on Cybercrime as a multilateral instrument to address the problems posed by criminal activity on computer networks. Nations supporting this convention agree to have criminal laws within their own nation to address cybercrime, such as hacking, spreading viruses or worms, and similar unauthorized access to, interference with, or damage to computer systems. It also enables international cooperation in combating crimes such as child sexual exploitation, organized crime, and terrorism through provisions to obtain and share electronic evidence. The U.S. Senate ratified this convention in August 2006. As the 16th of 43 countries to support the agreement, the United States agrees to cooperate in international cybercrime investigations. The governments of European countries such as Denmark, France, and Romania have ratified the convention. Other countries, including Germany, Italy, and the United Kingdom, have signed the convention, although it has not been ratified by their governments. Non-European countries, including Canada, Japan, and South Africa, have also signed but not yet ratified the convention. Cybercrime is a threat to U.S. national economic and security interests. Based on various studies and expert opinion, the direct economic impact from cybercrime is estimated to be in the billions of dollars. The overall loss projection due to computer crime was estimated to be $67.2 billion annually for U.S. organizations, according to a 2005 FBI survey. The estimated losses associated with particular crimes range from $49.3 billion in 2006 for identity theft to about $1 billion annually due to phishing. In addition, there is concern about threats that nation-states and terrorists pose to our national security through attacks on our computer-reliant critical infrastructures and theft of our sensitive information.
For example, according to the U.S.-China Economic and Security Review Commission report, Chinese strategists are writing about exploiting the vulnerabilities created by the U.S. military’s reliance on technologies and attacking key civilian targets. Also, according to FBI testimony, terrorist organizations have used cybercrime to raise money to fund their activities. However, despite the reported loss of money and information and known threats from our nation’s adversaries, there remains a lack of understanding about the true magnitude of cybercrime and its impact because it is not always detected or reported. Based on various studies and expert opinion, the direct economic impact from cybercrime is billions of dollars annually. The overall loss projection due to computer crime was estimated to be $67.2 billion annually for U.S. organizations, according to a 2005 FBI survey. The estimated losses associated with particular crimes include $49.3 billion in 2006 for identity theft and $1 billion annually due to phishing. The studies and experts derive their projected losses from direct and indirect costs that may include the estimated cost of stolen intellectual property, the recovery cost of repairing or replacing damaged networks and equipment, and intangible losses from diminished customer confidence in conducting online commerce. Table 4 shows the economic impact of cybercrime as reported by various studies and reports over the last several years. Many of the surveys and studies, such as those from IC3 and the Computer Security Institute/FBI, are performed at least annually. In addition, DOJ’s Bureau of Justice Statistics has conducted a cybercrime survey of private sector entities to gain a more definitive understanding of cybercrime’s economic impact on the United States. As of May 2007, the response rate and results had not been reported. Individual legal cases also illustrate the financial losses that victims incur due to cybercrime. Examples include the following: In February 2007, a defendant was convicted of aggravated identity theft, access device fraud, and conspiracy to commit bank fraud in the Eastern District of Virginia. The defendant, who went by the Internet nickname “John Dillinger,” was involved in extensive illegal online “carding” activities. He received e-mails or instant messages containing hundreds of stolen credit card numbers, usually obtained through phishing schemes or network intrusions, from “vendors” who were located in Russia and Romania. In his role as a “cashier” of these stolen credit card numbers, the defendant would electronically encode these numbers onto plastic bank cards, make ATM withdrawals, and return a portion of the proceeds to the vendors. Computers seized from the defendant revealed over 4,300 compromised account numbers and full identity information (i.e., name, address, date of birth, Social Security number, and mother’s maiden name) for over 1,600 individual victims. In September 2005, a Massachusetts juvenile was convicted in connection with approximately $1 million in victim damages. Over a 15-month period, the juvenile hacked into Internet and telephone service providers, stole an individual’s personal information and posted it on the Internet, and made bomb threats to high schools in Florida and Massachusetts.
In October 2004, the Secret Service investigated and shut down an online organization that facilitated losses in excess of $4 million and trafficked in around 1.7 million stolen credit cards and stolen identity information and documents. This high-profile case, known as “Operation Firewall,” focused on a criminal organization of some 4,000 members whose Web site functioned as a hub for identity theft activity. In July 2003, a man was convicted of hacking into computers in the United States and causing an aggregate loss of approximately $25 million. The defendant pleaded guilty in these proceedings and admitted to numerous charges of conspiracy, computer intrusion, computer fraud, credit card fraud, wire fraud, and extortion. Those charges stemmed from the activities of the defendant and others who operated from Russia and hacked into dozens of computers throughout the United States, stealing usernames, passwords, credit card information, and other financial data, and then extorting money from those victims with the threat of deleting their data and destroying their computer systems. In May 2002, a New Jersey man was convicted of causing more than $80 million in damage by unleashing the “Melissa” computer virus in 1999 and disrupting personal computers and computer networks in business and government. There is continued concern about the threat that our adversaries pose to our national security through attacks on our computer-reliant critical infrastructures and theft of our sensitive information. Over the last several years, such risks have been described in a variety of reports and testimonies. Table 5 describes the concerns raised. The risks posed by this increasing and evolving threat are demonstrated by actual and potential attacks and disruptions, such as those cited below. DOD officials stated that the department’s information network, representing approximately 20 percent of the entire Internet, receives approximately 6 million probes/scans a day. Further, DOD representatives stated that between January 2005 and July 2006, the agency initiated 92 cybercrime cases, the majority of which involved intrusions or malicious activities directed against its information network. In November 2006, the U.S.-China Economic and Security Review Commission reported that China is actively improving its nontraditional military capabilities. According to the study, Chinese military strategists write openly about exploiting the vulnerabilities created by the U.S. military’s reliance on advanced technologies and the extensive infrastructure used to conduct operations. Chinese military writings also refer to attacking key civilian targets such as financial systems. In addition, the report stated that Chinese intelligence services are capable of compromising the security of computer systems. The commission also provided instances of computer network penetrations coming from China. For example, in August and September 2006, attacks on computer systems of the Department of Commerce’s Bureau of Industry and Security forced the bureau to replace hundreds of computers and lock down Internet access for 1 month. In August 2006, a California man was convicted of conspiracy to intentionally cause damage to a protected computer and commit computer fraud. Between 2004 and 2005, he created and operated a botnet that was configured to constantly scan for and infect new computers.
For example, in a 2-week period in February 2005, the defendant’s bots reported more than 2 million infections of more than 629,000 unique addresses (some infected repeatedly). The botnet damaged hundreds of DOD computers worldwide. DOD reported a total of $172,000 in damage due to a string of computer intrusions at numerous military installations in the United States (including Colorado, Florida, Hawaii, Maryland, South Carolina, and Texas) and around the world (including Germany and Italy). In addition, the botnet compromised computer systems at a Seattle hospital, including patient systems, and damaged more than 1,000 computers in a California school district over the course of several months in 2005. Officials from the California school district reported damages between $50,000 and $75,000 to repair the district’s computers after the botnet struck in February 2005. The Central Intelligence Agency has identified two known terrorist organizations with the capability and greatest likelihood to use cyber attacks against our infrastructures. In March 2005, security consultants within the electric industry reported that hackers were targeting the U.S. electric power grid and had gained access to U.S. utilities’ electronic control systems. Computer security specialists reported that, in a few cases, these intrusions had “caused an impact.” While officials stated that hackers had not caused serious damage to the systems that feed the nation’s power grid, the constant threat of intrusion has heightened concerns that electric companies may not have adequately fortified their defenses against a potential catastrophic strike. Terrorist organizations have used cyberspace and cybercrime to raise money in a number of ways, such as facilitating protection schemes, credit card fraud, and drug smuggling. For example, in a July 2002 testimony, FBI officials stated that Al Qaeda terrorist cells in Spain used stolen credit card information to make numerous purchases. In addition, Indonesian police officials believe the 2002 terrorist bombings in Bali were partially financed through online credit card fraud, according to press reports. As larger amounts of money are transferred through computer systems, as more sensitive economic and commercial information is exchanged electronically, and as the nation’s defense and intelligence communities increasingly rely on commercially available information technology, the likelihood increases that information attacks will threaten vital national interests. Despite the large reported impact of cybercrime, the true impact of cybercrime in the United States is unknown because cybercrimes are not always detected or reported. Organizations and individuals do not always detect cybercrimes. The systems put in place to audit and monitor networks (including intrusion detection systems, intrusion prevention systems, security event correlation tools, and computer forensics tools) have limitations that affect their ability to detect a crime in progress. For example, the effectiveness of intrusion detection systems is limited by their ability to capture accurate baselines of normal network or system activity. Also, these systems are prone to false positives and false negatives and are not as effective in protecting against unknown attacks. In addition, the effectiveness of security event correlation tools is limited by their ability to interface with numerous security products and by the quality of the logs they rely upon.
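The baseline limitation described above can be illustrated with a toy detector. In the sketch below (invented sample data, not any vendor’s algorithm), traffic is flagged when it deviates from a learned mean by more than k standard deviations; setting k too low produces false positives, while setting it too high produces false negatives.

```python
# Toy statistical detector illustrating the baseline problem.
import statistics

def build_baseline(samples):
    """Learn a baseline (mean, standard deviation) from normal traffic."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, k=3.0):
    """Flag values more than k standard deviations from the baseline mean."""
    mean, stdev = baseline
    return abs(value - mean) > k * stdev

normal_traffic = [980, 1010, 995, 1005, 990, 1002, 998]  # requests/minute (invented)
baseline = build_baseline(normal_traffic)
print(is_anomalous(1004, baseline))   # False: within normal variation
print(is_anomalous(25000, baseline))  # True: possible scanning or botnet activity
```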
When a cybercrime is detected, companies and individuals can choose not to report the crime. Companies and individuals weigh the cost and impact of the incident against the time and effort needed to support an investigation and prosecution. Cybercrime reporting is discussed further in our challenges section. Federal agencies, state and local law enforcement, private industry, and academia have responsibilities, based on their primary missions or business interests, to protect against, detect, investigate, and prosecute cybercrime. Public and private sector entities are engaged in these efforts individually and through collaborative efforts. DOJ, DHS, DOD, and the FTC have key roles in addressing cybercrime within the federal government, along with the federal inspectors general. State and local law enforcement organizations also have key responsibilities in addressing cybercrime. Efforts range from investigating and prosecuting cybercrime to improving the protection of systems by raising awareness and building relationships. The key agencies within DOJ that focus on enforcing cybercrime violations include the Criminal Division, U.S. Attorneys, and the FBI. Table 6 shows key DOJ organizations, suborganizations, and activities. Three key agencies within DHS have a role in addressing cybercrime issues: the Secret Service, the Cyber Security and Communications Office’s National Cyber Security Division, and Immigration and Customs Enforcement. Table 7 shows key DHS organizations, suborganizations, and activities. Within DOD, the Defense Criminal and Counterintelligence Investigation Organizations conduct all law enforcement investigations, and the Defense Cyber Crime Center (DC3) can provide forensics support. Table 8 shows key organizations, suborganizations, and activities. The FTC was created to prevent unfair methods of competition. Its mission expanded over time with additional legislation authorizing it to serve as a protective force for U.S. consumers. The agency has the authority to file civil enforcement actions either in federal district court or administratively. Remedies in these civil actions range from orders to stop the illegal conduct to requiring disgorgement of illegal proceeds or payment of restitution. FTC’s Bureau of Consumer Protection investigates and enforces matters related to activities that may be classified as cybercrime. It has several divisions that focus primarily on different aspects of the FTC’s consumer protection mission. According to FTC staff, the Bureau of Consumer Protection is composed of six divisions, which target different substantive areas for enforcement and outreach purposes. The divisions routinely coordinate initiatives and share resources to further the consumer protection mission as efficiently and effectively as possible. The bureau’s resources include headquarters staff and staff located at eight regional offices that investigate and bring a variety of consumer protection and competition cases and engage in outreach efforts. In addition, the Criminal Liaison Unit coordinates with criminal law enforcement agencies across the United States on behalf of all of the Bureau of Consumer Protection’s divisions to encourage the prosecution of criminal fraud. Federal Inspectors General have a role in preventing, detecting, and investigating cybercrime within their respective agencies. Specifically, 14 of the 19 Inspectors General that provided information to us stated that they handle cybercrime investigations affecting their respective agencies within their own capabilities.
For example, certain Inspectors General, including those for the Departments of Education, Energy, and Transportation and the Environmental Protection Agency, reported having significant efforts in addressing cybercrime. Additionally, 11 of the 19 Inspectors General stated that they perform an education and awareness role within their respective agencies by conducting training, providing presentations, and performing activities mandated by the Federal Information Security Management Act. State and local organizations address cybercrime through efforts to share information, improve expertise, and facilitate cybercrime prosecutions both nationally and locally. For example, on a national basis, SEARCH, an organization dedicated to improving state-level law enforcement, has three cybercrime-focused programs that provide high-tech crime training, technical assistance, and research on emerging technology nationwide. In addition, the National Association of Attorneys General has a cybercrime initiative benefiting state prosecutors. It also hosts a cybercrime conference that provides training in cybercrime investigative areas, legislation, case law, and public education tools. The association’s executive working group meets quarterly and shares information on criminal issues, including cybercrime. State-level law enforcement entities have implemented initiatives to facilitate the investigation and prosecution of cybercrime in the states. For example, the Commonwealth of Virginia’s Office of the Attorney General has a Computer Crime Unit dedicated to investigating criminal violations of the Virginia Computer Crimes Act. In addition, Virginia’s Attorney General formed the Virginia Cyber Crime Strike Force, which collaborates with the U.S. Attorneys’ Offices, the Virginia State Police, the FBI, and Virginia’s Bedford County Sheriff’s Office to investigate and prosecute cybercrime. Other examples of state efforts are the (1) Washington Attorney General’s High Tech Crime Unit, which litigates cases of cyberfraud and pursues civil remedies under the state’s broad consumer protection law, and (2) Washington State Patrol Computer Crime Unit, which serves as a first responder to computer crimes affecting state-funded institutions such as state and local governments and public schools and universities. The private sector’s focus is on the development and implementation of technology systems to protect against computer intrusions, Internet fraud, and spam and, if a crime does occur, to detect it and gather admissible evidence for an investigation. The private entities that focus on these technological efforts include Internet service providers, security vendors, software developers, and computer forensics vendors: Internet service providers offer businesses and home users various levels of access to the Internet and other Internet-related services such as customer support and spam and virus protection. Providers also assist law enforcement by monitoring and providing information on selected Internet activities and by providing technical expertise to assist with investigations. In addition, providers can pursue civil action against users to punish inappropriate behavior. Security vendors such as e-mail security firms can screen electronic messages for harmful data and take action to prevent such data from reaching the intended target. Vendors also assist law enforcement by reporting instances of computer crime, providing technical assistance, and pursuing civil action against inappropriate behavior.
Software developers are improving the quality and security of operating system programs to detect and block malicious code. Computer forensics vendors provide private companies with computer forensics investigative services to detect the theft of trade secrets and intellectual property, detect employee fraud, locate and recover previously inaccessible documents and files, provide reports on all user activity, and access password-protected files. In addition, computer forensics vendors develop tools used by law enforcement to investigate cybercrime. These tools allow for the analysis of digital media and the gathering of evidence that is admissible in court. Numerous partnerships have been established between public sector entities, between public and private sector entities, and internationally to collaborate and implement effective cybercrime strategies. Each partnership’s strategy includes information-sharing activities and consumer awareness efforts. Table 9 gives brief descriptions of key partnerships, their purposes, and primary stakeholders. Numerous challenges impede the efforts of public and private entities to mitigate cybercrime (see table 10), including (1) reporting cybercrime, (2) ensuring adequate law enforcement analytical and technical capabilities, (3) working in a borderless environment with laws of multiple jurisdictions, and (4) implementing information security practices and raising awareness. Although surveys and studies show that the nation potentially loses billions of dollars annually, as well as sensitive information, as a result of cybercrime, definitive data on the amount of cybercrime are not available. Understanding the impact of cybercrime in the United States is a challenge because reporting of cybercrime is limited. When a cybercrime is detected, entities and individuals can choose whether to report it to law enforcement. They weigh the cost and impact of the incident against the time and effort needed to support an investigation and prosecution. In addition, our work and findings of the Congressional Research Service related to information sharing have shown that businesses do not always want to report problems because of a perception that their information will be disclosed publicly, which could, in turn, cause harm to their business. Reasons for not reporting a crime to law enforcement include the following: Financial market impacts. The stock and credit markets and bond rating firms react negatively to security breach announcements, which could raise the cost of capital to reporting firms. Even firms that are privately held and are not active in public securities markets can be adversely affected if banks and other lenders judge them to be more risky than previously thought. Reputation or confidence effects. Negative publicity damages a reporting firm’s reputation or brand and could cause customers to lose confidence, giving commercial rivals a competitive advantage. Litigation concerns. If an organization reports a security breach, investors, customers, or other stakeholders can use the courts to seek recovery of damages. If the organization has been open in the past about previous incidents, plaintiffs may allege a pattern of negligence. Signal to attackers. A public announcement alerts hackers that an organization’s cyber defenses are weak and can inspire further attacks. Inability to share information.
Some private-sector entities want to share information about an incident with law enforcement and other entities; however, once the information becomes part of an ongoing investigation, their ability to share it may be limited. Job security. IT personnel fear for their jobs after an incident and seek to conceal the breach from senior management. Lack of law enforcement action. According to private sector officials, law enforcement entities have failed to investigate cases reported to them, which is a disincentive to report crimes in the future. To improve the reporting of cybercrime, the numerous public/private partnerships (e.g., the National Cyber Forensics and Training Alliance, InfraGard, and the Electronic Crimes Task Forces), as well as the awareness and outreach efforts of law enforcement discussed earlier, are methods for building better relationships and understanding between the public and private sectors. These efforts may increase trust between the public and private sectors and encourage better reporting of cybercrimes when they occur. Efforts by law enforcement to investigate and prosecute cybercrime require individuals with specialized skills and tools. According to federal, state, and local law enforcement and private sector officials, it is a challenge to recruit such individuals from a limited pool of available talent, retain them in the face of competing offers, and train them to stay up to date with changing technology and increasingly sophisticated criminal techniques. Federal and state law enforcement organizations face challenges in having the appropriate number of skilled investigators, forensic examiners, and prosecutors. According to federal and state law enforcement officials, the pool of qualified candidates is limited because individuals involved in investigating or examining cybercrime are highly trained specialists requiring both law enforcement and technical skills, including knowledge of various IT hardware and software and forensic tools. According to Defense Cyber Crime Center officials, once an investigator or examiner specializes in cybercrime, it can take up to 12 months for that individual to become proficient enough to fully manage his or her own investigations. Further, according to state officials, state and local law enforcement agencies do not have the resources needed to hire investigators with the technical knowledge required to address cybercrime. Law enforcement organizations also find it difficult to retain highly skilled cyberforensic investigators and examiners. According to federal and state officials, the private sector demands individuals with the same skills and successfully attracts them away from their government positions with much higher salaries and better benefits. For example, according to an Assistant U.S. Attorney, several cybercrime experts, including attorneys, federal and state law enforcement agents, and cyberforensic examiners, have left their government positions for the higher salaries and benefits offered by the private sector. The available pool of experienced federal cybercrime investigators is also affected by FBI and Secret Service rotation policies. For example, according to FBI officials, new FBI agents not initially assigned to one of the 15 largest field offices are required to rotate to one of these large offices after 3 years in order to have diversified experiences.
According to FBI headquarters and field agents, when cybercrime investigators rotate out under this policy, they are not necessarily reassigned to cybercrime investigations in their new field office, and so their extensive cyber background is underutilized. In addition, the agents who rotate in to replace experienced cybercrime investigators may have little or no cybercrime experience or background. Further, according to FBI officials, the pool of experienced senior managers is affected by the FBI’s current policy that limits senior field supervisory agents to 5-year terms in their positions, after which most move on to seek further career advancement. This can include the movement of experienced cybercrime investigators out of senior cybercrime positions. Similarly, according to Secret Service officials, most Secret Service agents, including those with technical, cybercrime investigation expertise, rotate to a protective assignment, which focuses on the protection of the President, Vice President, and others rather than on the investigation of cybercrime. In addition, officials stated that there is an investigative career track that allows agents to continue doing investigations, including those related to cybercrime; however, protective assignments are perceived as higher profile and could lead to greater career advancement. FBI and Secret Service officials acknowledged that the rotation policies have at times resulted in these agencies underutilizing staff with cyber expertise. The rapid evolution of technology and cybercrime techniques means that law enforcement agencies must continuously upgrade technical equipment and software tools. Such equipment and tools are expensive, and agencies’ need for them does not always fall into the typical federal replacement cycle. For example, in order for investigators to perform cyberforensic examinations and gather the evidence required to support a prosecution, examiners and investigators must, in some cases, store and analyze huge amounts of digital data. According to federal law enforcement officials, the amount of data being collected is growing exponentially. However, according to law enforcement officials, state and local law enforcement agencies do not always have the resources to obtain the equipment necessary to analyze large amounts of data. Law enforcement organizations also find that maintaining a current understanding of new criminal techniques and technologies can be difficult. For example, law enforcement agents are required to extract forensic data from IT devices that have been on the market for only months. They also must keep up with innovative criminal techniques and approaches. For example, techniques for assembling and controlling botnets are becoming increasingly sophisticated and difficult to trace, making it difficult to identify certain spamming and phishing schemes. In addition, criminals are increasing their use of encryption techniques, which requires law enforcement to continue to research and develop appropriate countermeasures. Training can help to keep investigators’ skills current, but relevant courses are limited, costly, and time-consuming, and they take agents away from the cases that they are investigating. Federal and state law enforcement organizations are taking steps to improve their analytical and technical capabilities.
For example, the Secret Service has developed training programs for federal, state, and local law enforcement, and DOD’s Defense Cyber Crime Center has a cyberforensic training program for DOD investigators and other law enforcement officials. Further, the FBI’s Cyber Action Teams rapidly provide technical expertise to cybercrime investigations worldwide, when needed. To overcome shortfalls in equipment and electronic storage, the FBI is sponsoring regional computer forensics laboratories to serve the needs of an entire region’s law enforcement. In addition, public/private partnerships, like the FBI’s InfraGard and National Cyber Forensics and Training Alliance and the Secret Service’s Electronic Crimes Task Forces, provide ways to share expertise among law enforcement, the private sector, and academia. Although it will continue to be a challenge to keep current with the rapid evolution of technology and cybercrime techniques, these DOD, FBI, and Secret Service efforts are positive steps toward keeping up with the techniques and technology needed for investigations. Law enforcement organizations face the challenge of investigating and prosecuting cybercrime that crosses national and state borders and of working with laws, legal procedures, and law enforcement entities from multiple jurisdictions. Working in this environment complicates most cyber investigations. Private sector, individual, and law enforcement efforts are complicated by the borderless nature of cybercrime. As discussed earlier, cybercriminals are not hampered by physical proximity or regional, national, or international borders. Cybercriminals can be physically located in one nation or state, direct their crime through computers in multiple nations or states, and store evidence of the crime on computers in yet another nation or state. This makes it difficult to trace cybercriminals to their physical locations. In addition, cybercriminals can take steps to remain anonymous, making it difficult, if not impossible, to attribute a crime to them. Similar to efforts addressing traditional crime, efforts to investigate and prosecute cybercrime are complicated by the multiplicity of laws and procedures that apply in the various nations and states where victims may be found, and by the conflicting priorities and varying degrees of expertise of law enforcement authorities in those jurisdictions. Laws used to address cybercrime differ across states and nations. For example, not all U.S. states have antispam or antispyware laws. In addition, an act that is illegal in the United States may be legal in another nation or not directly addressed in the other nation’s laws. Developing countries, for example, may lack cybercrime laws and enforcement procedures. Further, jurisdictional boundaries can limit the actions that federal, state, and local law enforcement can take to investigate cybercrime that crosses local, regional, and national borders. For example, state and local officials may be unable to pursue investigations outside of their jurisdictions, so when a cybercrime goes beyond their jurisdiction, they may need to rely upon officials of other jurisdictions to further investigate the crime. Additionally, extradition between states can be complicated, depending on the laws of the state where the suspect is located and the knowledge of the states’ law enforcement and judiciary regarding cybercrime.
In addition, the United States does not have extradition arrangements with all nations, which makes it impossible to extradite a cybercriminal from certain nations. Extradition from nations having an extradition agreement with the United States can be complicated or impossible if the nation’s laws do not make the action illegal or its magistrates are not knowledgeable about cybercrime. Also, state and local officials are unable to extradite persons from other nations without federal law enforcement assistance. Conflicting priorities also complicate cybercrime investigations and prosecutions. Cybercrime can occur without physical proximity to the victim, and thus a cybercriminal can operate without victimizing a citizen in the jurisdiction or federal judicial district in which the crime originated. With no negative impact on the citizens in that district, there may be no incentive for the local citizens to press their law enforcement officers to investigate the crime. According to state officials, it is difficult to commit resources to crimes where the victims are outside their state or jurisdiction, although the suspected cybercriminal may be prosecuted in the jurisdiction where the victim is located. Federal and state law enforcement organizations are taking steps to help them work in the borderless environment within which cybercriminals operate. For example, federal, state, and local law enforcement organizations participate in cybercrime task forces that combine a region’s law enforcement capabilities to investigate and prosecute cybercrime in the most advantageous way. To address transnational jurisdiction, investigation, and prosecution issues, DOJ and the State Department have established agreements with more than 40 nations through the G-8 High Tech Crime Working Group to address cybercrime cooperatively. The Council of Europe’s Convention on Cybercrime is a similar international effort. These and other efforts are essential to addressing the transborder nature of cybercrime and enhancing the ability of law enforcement to capture, prosecute, and punish cybercriminals. A major challenge in mitigating cybercrime is improving information security practices on the part of organizations and individual Internet users. Raising awareness about criminal behavior and the need to protect information and systems is a key activity in addressing cybercrime. Criminals often take advantage of poor computer security practices, which makes maintaining a strong information security posture vital to efforts to stop cybercrime. However, individuals give criminals easy access to their personal computers and electronic devices by failing to enable the security features on those devices. Without adequate information security, critical systems and sensitive data are more susceptible to criminal access, theft, modification, and destruction. Further, our audits have shown that federal agencies do not adequately protect the information systems that the government relies upon to deliver services to its customers. In addition, over the last several years, we have identified the challenges associated with the federal government’s efforts to coordinate public and private sector efforts to protect the computer systems that support our nation’s critical infrastructures. As a result, federal information security has been on GAO’s list of high-risk areas since 1997 and cyber critical infrastructure protection since 2003.
In addition, we have made numerous recommendations to enhance the security of federal information systems and cyber critical infrastructure protection efforts. Implementation of these recommendations is essential to protecting federal information systems. A major challenge is educating the public in how to recognize cybercrime when it is occurring. Criminals prey on people’s ignorance and susceptibility to ruses. For example, attackers create e-mail and Web sites that appear legitimate, often copying images and layouts of actual Web sites. Some cybercrime techniques also take advantage of combinations of vulnerabilities. For example, phishing entices users to provide the sensitive information desired. However, phishers also use technical methods to exploit software and system vulnerabilities to reinforce users’ perceptions that they are on a legitimate Web site. Despite efforts by public and private entities to raise awareness about the importance of information security and the techniques used by criminals, many users still do not understand the need to protect their personal information or recognize unusual requests that could indicate criminal activity. The types of cybercrime that the media highlight, such as child pornography cases and major companies being hacked, do not tend to undermine people’s trust in the Internet. For example, there continue to be reports of people falling victim to well-known scams such as the Nigerian 4-1-9 fraud. In addition, even as awareness grows, practices are not easily changed. Further, the issue of adequate awareness also applies to law enforcement. State and local law enforcement may not be aware of the cybercrime problem that could be affecting their citizens. There are numerous steps being taken to improve the security of information systems and raise user awareness. For example, as discussed earlier, information security vendors provide software and services; software developers are attempting to improve the quality and security of their products; public and private entities are working together to identify and mitigate risks, including criminal activities; and federal organizations, such as the FBI, the Secret Service, FTC, and DHS, sponsor programs and organizations to raise user awareness about securing their information and not becoming a victim of cybercrime. These are positive steps to improve security and raise awareness. The actual and potential harms that result from cybercrime attacks in the United States are significant. Although the precise amount of economic loss due to cybercrime is unknown, its impact is likely in the billions of dollars. In addition, nation-state and terrorist adversaries are seeking ways to attack our nation’s critical infrastructures and steal our sensitive information. While numerous public and private entities (federal agencies, state and local law enforcement, industry, and academia) have responsibilities to address these threats, they face challenges in protecting against, detecting, investigating, and prosecuting cybercrimes. These challenges include reporting cybercrime, ensuring adequate law enforcement analytical and technical capabilities, working in a borderless environment with laws of multiple jurisdictions, and implementing information security practices and raising awareness.
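The kinds of cues that awareness efforts teach users to spot can also be encoded as simple heuristics. The sketch below is an illustrative toy, not a real anti-phishing product: the rules and the known-domain list are assumptions, and production filters combine many more signals, including blocklists.

```python
# Toy heuristics for spotting suspicious links before following them.
from urllib.parse import urlparse
import re

KNOWN_DOMAINS = {"example-bank.com"}  # hypothetical legitimate institution

def phishing_signals(url):
    """Return a list of human-readable warnings for a given URL."""
    signals = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        signals.append("raw IP address instead of a domain name")
    if "@" in parsed.netloc:
        signals.append("'@' in the address can disguise the real host")
    for legit in KNOWN_DOMAINS:
        # A familiar name embedded in a different registered domain.
        if legit in host and not host.endswith(legit):
            signals.append(f"lookalike of {legit}")
    return signals

print(phishing_signals("http://example-bank.com.login-update.example/verify"))
# ['lookalike of example-bank.com']
```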
Public and private entities are working to address these challenges by expanding public/private partnerships to increase the trust between entities, to improve the quality and quantity of shared information, and to leverage resources and technologies across public and private boundaries. In addition, law enforcement organizations have formed task forces and entered into international agreements to foster working in a borderless environment with laws from multiple jurisdictions. Continued expansion of these efforts is essential. Additionally, more can be done to ensure an adequate pool of individuals with the skills needed to effectively combat cybercrime. Although law enforcement agencies must be sensitive to a number of organizational issues and objectives in their human capital programs, current staff rotation policies at key law enforcement agencies may negatively affect the agencies’ analytical and technical capabilities to combat cybercrime. We recommend that the Attorney General direct the FBI Director and the Secretary of Homeland Security direct the Director of the Secret Service to assess the impact of the current rotation approach on their respective law enforcement analytical and technical capabilities to investigate and prosecute cybercrime and to modify their approaches, as appropriate. We received written comments on a draft of this report from the FBI (see app. II). In the response, the Deputy Assistant Director of the FBI’s Cyber Division stated that the FBI Director had approved rotational policies after careful consideration of the viable alternatives provided by analysis and study conducted by the Human Resources Division. Further, he stated that the FBI Director had endorsed the establishment of five distinct career paths for both new and veteran special agents, including a specific designation for cyber matters. According to the Deputy Assistant Director, this career path will ensure that the FBI recruits, trains, and deploys special agents with the critical cyber skill set required to keep the FBI on the cutting edge of computer technology and development and positioned to counter the constantly evolving cyber threat. Despite these efforts to assess and expand analytical and technical capabilities, the current rotational policies may adversely affect the FBI’s use of staff with cyber expertise; therefore, it is important to continually reassess the rotational policies that affect the FBI’s ability to address the cyber threat. In addition, we received written comments on a draft of this report from the Secret Service (see app. III). In the response, the Assistant Director, Office of Inspection, stated that agents who complete the Electronic Crimes Special Agent Program’s computer forensics training course are required to serve a minimum of 4 years in the program. In addition, he stated that the Secret Service is expanding its Electronic Crimes Special Agent Program and will have approximately 770 trained and active agents by the end of fiscal year 2007. He also stated that the rotation of Electronic Crimes Special Agent Program agents does not have a detrimental impact on the agency’s cyber investigative capabilities because Secret Service field offices send additional agents through the program prior to a trained agent’s departure and because the Electronic Crimes Task Forces allow the agency to draw on state and local law enforcement officials trained in cyber investigations and computer forensics.
While we agree that expanding the Electronic Crimes Special Agent Program and leveraging the relationships and capabilities of the Electronic Crimes Task Forces are important to adequately addressing cybercrime, the current rotational policy may adversely affect the Secret Service’s use of staff with cyber expertise; therefore, it is important for the Secret Service to continually reassess the rotational policies that affect its ability to address the cyber threat. DOD, DOJ, DHS, state and local government, and other officials also provided technical corrections that have been incorporated in this report as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Attorney General, the Secretaries of Defense and Homeland Security, the Chairman of the Federal Trade Commission, and other interested parties. We also will make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact David Powner at (202) 512-9286, or pownerd@gao.gov; or Keith Rhodes at (202) 512-6412, or rhodesk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. Our objectives were to (1) determine the impact of cybercrime on our nation’s economy and security; (2) describe key federal entities, as well as nonfederal and private-sector entities, responsible for addressing cybercrime; and (3) determine challenges being faced in addressing cybercrime. To determine the impact of cybercrime on the U.S. economy and security, we analyzed various government and private-sector reports, surveys, and statistics related to cybercrime and conducted interviews with experts from law enforcement, academia, and information technology and security companies to verify, clarify, and gain a greater understanding of cybercrime’s impact. Further, we interviewed officials and staff at key federal agencies, including the Departments of Defense, Justice, and Homeland Security and the Federal Trade Commission, and obtained, through structured interview questions, information from 19 federal Offices of Inspector General about the number and frequency of cybercrimes experienced at their respective agencies and the subsequent costs associated with addressing these incidents, among other things. To identify the key public and private-sector entities that work to mitigate and investigate computer crime and prosecute cybercriminals, we analyzed reports, surveys, and studies related to cybercrime. In addition, we held interviews with cybercrime experts from government and the private sector to identify entities and to verify that those identified were important. To verify information and determine relevant activities, we performed document analysis, held site visits, conducted structured interviews, and received written responses to structured interview questions.
The entities contacted during the course of our work include the following: Department of Justice: Computer Crime and Intellectual Property Section; Bureau of Justice Statistics; United States Attorneys, including the Pittsburgh and Seattle Computer Hacking and Intellectual Property units; FBI’s Cyber Division, including the Computer Intrusion Section and the Innocent Images National Initiative unit; FBI’s National Cyber Forensics and Training Alliance; FBI’s Cyber Initiative and Resource Fusion Unit; FBI’s Internet Crime Complaint Center; and FBI’s Pittsburgh and Seattle Field Office units. Department of Homeland Security: Special Agent in Charge of the Secret Service’s Criminal Investigative Division; the National Cyber Security Division’s Deputy Director of the Law Enforcement and Intelligence Section and Deputy Director of the United States Computer Emergency Readiness Team. Department of Defense: Defense Cyber Crime Center; Joint Task Force for Global Network Operations; Defense Criminal Investigative Service; Air Force Office of Special Investigations; Army Military Intelligence; and the Naval Criminal Investigative Service. Federal Trade Commission: officials from the Divisions of Advertising Practices, Enforcement, and Marketing Practices. In addition, members of the team attended sessions of a Federal Trade Commission-sponsored conference that focused attention on cybercrime. Offices of Inspector General: the Department of Education’s Computer Crime Division/Office of Inspector General, as well as written responses to structured interview questions from officials of the Offices of Inspector General of the Small Business Administration, Department of Defense, Nuclear Regulatory Commission, Department of Health and Human Services, National Science Foundation, Department of Veterans Affairs, General Services Administration, Department of Labor, Department of Transportation, Agency for International Development, Office of Personnel Management, Department of the Treasury, Department of Justice, Department of Housing and Urban Development, Social Security Administration, Department of Energy, and Department of the Interior. Private Sector: Counterpane Internet Security; Cyber Security Industry Alliance; CipherTrust; Guidance Software; InfraGard; Information Technology-Information Sharing and Analysis Center; Microsoft; Postini; SEARCH; Symantec; and other cybercrime experts. State and Local Entities: Office of the Attorney General of Washington; Washington State Patrol’s Computer Crime Unit; Office of the Attorney General of Virginia, Computer Crime Unit; and the National Association of Attorneys General. We also met with representatives from the State Department to discuss the department’s role in addressing cybercrime. However, after meeting with representatives from the department’s Bureau of Resource Management, Political-Military Affairs, International Narcotics and Law Enforcement, and others, we determined that the department’s cybercrime responsibilities were outside the scope of our engagement. In addition, State Department representatives stated that they work closely with the Department of Justice’s Computer Crime and Intellectual Property Section on cybercrime issues and that Justice officials would be a better source to determine the impact of cybercrime on the United States and international efforts to address cybercrime.
To determine the challenges being faced in addressing cybercrime, we gathered and analyzed relevant documents, interviewed key government and private-sector officials regarding challenges to fighting cybercrime, and conducted Internet and media research. Based on the information received and our knowledge of the issues, we determined the major challenges impeding efforts to address cybercrime. To observe the operations of cybercrime-related entities and interview relevant federal, state, and local government and private-sector officials, we performed our work between June 2006 and May 2007 in the Washington, D.C., metropolitan area; Pittsburgh, Pennsylvania; Seattle, Washington; and Fairmont, West Virginia, in accordance with generally accepted government auditing standards. In addition to the individuals named above, Barbara Collier, Neil Doherty, Michael Gilmore, Steve Gosewehr, Barbarol James, Kenneth A. Johnson, Kush K. Malhotra, Amos Tevelow, and Eric Winter made key contributions to this report.

Computer interconnectivity has produced enormous benefits but has also enabled criminal activity that exploits this interconnectivity for financial gain and other malicious purposes, such as Internet fraud, child exploitation, identity theft, and terrorism. Efforts to address cybercrime include activities associated with protecting networks and information, detecting criminal activity, investigating crime, and prosecuting criminals. GAO’s objectives were to (1) determine the impact of cybercrime on our nation’s economy and security; (2) describe key federal entities, as well as nonfederal and private sector entities, responsible for addressing cybercrime; and (3) determine challenges being faced in addressing cybercrime. To accomplish these objectives, GAO analyzed multiple reports, studies, and surveys and held interviews with public and private officials. Cybercrime has significant economic impacts and threatens U.S. national security interests. Various studies and experts estimate the direct economic impact from cybercrime to be in the billions of dollars annually. The annual loss due to computer crime was estimated to be $67.2 billion for U.S. organizations, according to a 2005 Federal Bureau of Investigation (FBI) survey. In addition, there is continued concern about the threat that our adversaries, including nation-states and terrorists, pose to our national security. For example, intelligence officials have stated that nation-states and terrorists could conduct a coordinated cyber attack to seriously disrupt electric power distribution, air traffic control, and financial sectors. Also, according to FBI testimony, terrorist organizations have used cybercrime to raise money to fund their activities. Despite the estimated loss of money and information and known threats from adversaries, the precise impact of cybercrime is unknown because it is not always detected and reported. Numerous public and private entities have responsibilities to protect against, detect, investigate, and prosecute cybercrime. The Departments of Justice, Homeland Security, and Defense, and the Federal Trade Commission have prominent roles in addressing cybercrime within the federal government, and state and local law enforcement entities play similar roles at their levels. Private entities such as Internet service providers and software developers focus on the development and implementation of technology systems to detect and protect against cybercrime, as well as gather evidence for investigations.
In addition, numerous cybercrime partnerships have been established between public sector entities, between public and private sector entities, and internationally, including information-sharing efforts. Entities face a number of key challenges in addressing cybercrime, including reporting cybercrime and ensuring that there are adequate analytical capabilities to support law enforcement. While public and private entities, partnerships, and task forces have initiated efforts to address these challenges, federal agencies can take additional action to help ensure adequate law enforcement capabilities. |
Most of the funding in DOD’s fiscal year 1997 aircraft investment strategy is for the procurement of new aircraft such as the F/A-18E/F, F-22, and Joint Strike Fighter (JSF), while some is for the retrofit or remanufacture of existing aircraft, such as the AV-8B and the Longbow Apache. Table 1 describes the 17 aircraft programs and their estimated procurement funding requirements, and appendix I provides details on these programs. DOD is pursuing these aircraft programs at a time when the federal government is likely to be faced with significant budgetary pressure for the foreseeable future. This pressure comes from efforts to balance the budget, coupled with funding demands for such programs as Social Security, Medicare, and Medicaid. Consequently, there are likely to be limitations on all discretionary spending, including defense spending, for the long term. This report addresses the availability of funding to support DOD’s aircraft investment strategy as planned prior to the Quadrennial Defense Review, but does not address specific aircraft requirements. Our previous reports have questioned the need for and timing of a number of DOD’s aircraft procurements. (A listing of prior reports is provided at the end of this report.) DOD asserts that its aircraft modernization programs are affordable as planned. On June 27, 1996, DOD officials testified before House Subcommittees that the department’s overall aircraft investment plans were within historical norms and affordable within other service priorities. The officials further explained that the historical norms referred to were based on the aircraft funding experience of the early 1980s. Our review indicated that using the early to mid-1980s, the peak Cold War defense spending years, as a historical norm for future aircraft investments is not realistic in today’s budgetary and force structure environment. As shown in figure 1, DOD’s overall appropriations, expressed in fiscal year 1997 dollars, have decreased significantly from their high point in fiscal year 1985, and the amounts appropriated in recent years are at, or near, the lowest point over the past 24 years. Our review of aircraft procurement funding data from fiscal years 1973 through 1996 showed that funding for DOD’s aircraft purchases as a percentage of DOD’s overall budget fluctuated in relation to the changes in DOD’s overall budget. Funding for aircraft purchases increased significantly as DOD’s overall funding increased in the early 1980s and decreased sharply as the defense budget decreased in the late 1980s and early 1990s. In contrast, DOD’s planned aircraft investment strategy does not follow this pattern and calls for significantly increased funding for aircraft purchases during a period when DOD’s overall funding is expected to remain stable in real terms. Funding for DOD’s aircraft purchases was at its highest point, both in dollar terms and as a percentage of the overall DOD budget, during the early to mid-1980s. Figure 2 shows the 24-year funding history for DOD’s aircraft purchases from fiscal years 1973 through 1996. During that period, DOD spending on aircraft purchases fluctuated somewhat but averaged about 4.8 percent of the overall DOD budget. From fiscal years 1982 through 1986, DOD used from 6.0 percent to 7.7 percent of its overall annual funding on aircraft purchases. 
In contrast, since fiscal year 1973, the next highest level of annual aircraft funding was 5.5 percent in fiscal year 1989 and, in 12 other years, the funding was less than 4.5 percent of the overall DOD funding. Therefore, a long-term average would be more appropriate than historical norms based on the early 1980s as a benchmark for an analysis of funding patterns, and its use would even out the high aircraft procurement funding of the early 1980s and the lower funding of the post-Vietnam and post-Cold War eras. However, such a benchmark should not be used as a threshold for spending on aircraft purchases because it may not reflect the changed nature of the defense requirements and U.S. strategy that occurred with the end of the Cold War. If DOD’s aircraft investment strategy is implemented as planned and the defense budget stabilizes at DOD’s currently projected fiscal year 2003 level (about $247 billion in constant fiscal year 1997 dollars), DOD’s projected funding for aircraft purchases will exceed the historical average percentage of the defense budget for aircraft purchases in all but 1 year between fiscal years 2000 and 2015. For several years, it will approach the highest historical percentages of the defense budget for aircraft purchases. Those high percentages were attained during the peak Cold War spending of the early to mid-1980s. In fiscal year 1996, DOD spent $6.8 billion, or 2.6 percent of its overall budget, on aircraft purchases. To implement its aircraft investment strategy, DOD expects to increase its annual spending on aircraft purchases significantly from current levels and to sustain those higher levels for the indefinite future. For example, as shown in figure 4, DOD’s annual spending on aircraft purchases is projected to increase about 94 percent from the fiscal year 1996 level to $13.2 billion by fiscal year 2002. Also, for 15 of the next 20 fiscal years beginning in fiscal year 1997, DOD’s projected spending for aircraft purchases is expected to equal or exceed $11.9 billion annually. For 3 years during this period, DOD’s projected annual spending on aircraft purchases will exceed $16 billion (6.5 percent of the budget), and for 1 of those years, it will exceed $18 billion (7.3 percent of the budget). DOD has not made clear the need for that level of additional funding in the current security and force structure environment. Furthermore, other than stating that overall procurement funding in general will be increased, DOD has not identified specific reductions elsewhere within the procurement account or within the other major accounts to offset the significant proposed increases in aircraft procurement funding. Because the overall level of defense funding is expected to be stable, at best, any proposed increase in spending for a particular account or for a project will have to be offset elsewhere within the budget. Historically, acquisition programs almost always cost more than originally projected. Figure 4 is a conservative projection of DOD’s aircraft funding requirements because no cost growth beyond current estimates is considered. Research has shown that unanticipated cost growth has averaged at least 20 percent over the life of aircraft programs. For at least one current program, it appears that the historical pattern will be repeated. In January 1997, DOD reported that the procurement cost of the F-22 was expected to increase by over 20 percent and devised significant initiatives to offset that growth. 
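As a rough check of the projected increase in annual aircraft spending described above, the percentage growth implied by the report's own dollar amounts (both in billions of constant fiscal year 1997 dollars) can be worked directly; this is a restatement of figures already given, not new data:

\[
\frac{13.2 - 6.8}{6.8} \approx 0.94, \quad \text{that is, an increase of about 94 percent.}
\]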
We reported on this potential F-22 cost growth in June 1997 and concluded that the initiatives to offset the cost growth were optimistic. In addition, the projected funding requirements shown in figures 3 and 4 may be understated because they do not include any projected funding for other aircraft programs that have not been approved for procurement. For example, potential requirements exist to replace the KC-135, C-5A, F-15E, F-117, EA-6B, S-3B, and other aircraft. Adding any of these requirements to DOD’s aircraft investment strategy would further complicate the funding problems. The amount of funding likely to be available for national defense in the near term has been projected by both the President and the Congress. Both have essentially agreed that the total national defense budget will not increase measurably in real terms through fiscal year 2002. While the Congress has not expressed its sentiments regarding the defense budget beyond fiscal year 2002, last year DOD’s long-term planning for its aircraft investment strategy assumed real annual growth of 1 percent. Accordingly, procurement funding to accomplish the aircraft modernization programs was partially dependent on some level of real growth in the defense budget. However, because of commitments to balance the federal budget by both the President and the Congress, it appears likely that the defense budget will stabilize at current levels or decrease further, rather than increase as DOD’s aircraft investment plans have assumed. According to DOD officials, the long-term planning now assumes no real growth in the defense budget. The impact of this change on DOD’s aircraft programs is not yet clear. DOD plans to increase overall funding for procurement programs over the next few years, and the aircraft programs are expected to be a prime beneficiary of that increased funding. DOD expects to increase procurement spending to a level of approximately $61.2 billion per year, from the current level of about $44.3 billion per year, while keeping overall defense spending at current levels, at least through fiscal year 2002. Of the $39.0 billion cumulative increase in procurement spending that is expected through fiscal year 2002, about $17.7 billion is projected to be used for DOD’s aircraft investment strategy. To increase procurement funding while keeping overall defense spending at current levels, DOD anticipates major savings will be generated from infrastructure reductions and acquisition reform initiatives, as well as increased purchasing power through significantly lower inflation projections. We found, however, that there are unlikely to be sufficient savings available to offset DOD’s projected procurement increases. DOD’s planned procurement funding increase was partially predicated on achieving base closure savings (a component of infrastructure) of $17.8 billion (then-year dollars) through fiscal year 2001 and shifting this money to pay for additional procurement. In 1996, however, we found no significant net infrastructure savings between fiscal years 1996 and 2001 because the proportion of infrastructure in the DOD budgets was projected to remain relatively constant through fiscal year 2001. Therefore, through fiscal year 2001, DOD will have less funding available than expected for procurement from its infrastructure reform initiatives. 
In addition, our ongoing evaluation of acquisition reform savings on major weapon systems suggests that the amount of such savings that will be available to increase procurement spending is uncertain. Our work shows that the savings from acquisition reform have been used by the very programs generating the savings to fund other needs. This raises concerns about whether the latest acquisition reform initiatives will provide savings to realize modernization objectives for other weapons systems within the time frames envisioned. Without the level of savings expected from infrastructure reductions and acquisition reform, DOD will face difficult choices in funding its modernization plans. Finally, based on changes in future inflation factors, DOD calculated in its 1997 future years defense plan (FYDP) that its purchases of goods and services from fiscal years 1997 through 2002 would cost about $34.7 billion (then-year dollars) less than it had planned in its 1996 FYDP. The “inflation dividend” allowed DOD to include about $19.5 billion in additional programs in fiscal years 1997-2001 and permitted the executive branch to reduce DOD’s projected funding by $15.2 billion over the same time period. However, using different inflation estimates, CBO calculated the cost reduction at only $10.3 billion, or $24.4 billion less than DOD’s estimate. Because DOD’s projected funding was reduced by $15.2 billion, CBO’s estimate indicates that DOD’s real purchasing power, rather than increasing, may be reduced by about $5 billion. If so, DOD may have to make adjustments in its programs. We recently raised an issue concerning the Air Force’s F-22 air superiority fighter that further complicates the situation. In estimating the cost to produce the F-22, the Air Force used an inflation rate of about 2.2 percent per year for all years after 1996. However, in agreeing to restructure the F-22 program to address the recently acknowledged $15 billion (then-year dollars) program cost increase, the Air Force and its contractors used an inflation rate of 3.2 percent per year. Increasing the assumed inflation rate by 1 percentage point added billions of dollars to the F-22 program’s estimated cost. We are concerned that the higher inflation rates could have a significant budgetary impact for other DOD acquisition programs. Similar increases on other major weapon programs would add billions of dollars to the amounts needed and further jeopardize DOD’s ability to fund its modernization plans. The basis for DOD’s projections of total annual procurement funding is the cumulative annual funding needs of multiple weapons programs, each of which has typically been based on optimistic assumptions about procurement quantities and rates. Accordingly, DOD’s projections of total annual procurement funding have been consistently optimistic. DOD’s traditional approach to managing affordability problems is to reduce procurement quantities and extend production schedules without eliminating programs. Such actions normally result in significantly increased system procurement costs and delayed deliveries to operational units. We recently reported that the costs for 17 of 22 full-rate production systems we reviewed increased by $10 billion (fiscal year 1996 dollars) beyond original estimates through fiscal year 1996 due to stretching out the completion of the weapons’ production. 
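The inflation-dividend arithmetic described earlier in this section can be restated compactly. Every figure below is taken from the report itself (then-year dollars, in billions); the equations simply make the offsets explicit:

\[
19.5 + 15.2 = 34.7 \quad \text{(DOD's calculated cost reduction)}
\]
\[
34.7 - 10.3 = 24.4 \quad \text{(amount by which CBO's estimate falls below DOD's)}
\]
\[
15.2 - 10.3 = 4.9 \approx 5 \quad \text{(potential reduction in DOD's real purchasing power)}
\]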
We found that DOD had inappropriately placed a high priority on buying large numbers of untested weapons during low-rate initial production to ensure commitment to new programs and, as a result, had to cut by more than half its planned full-rate production for many weapons that had already been tested. We also found that actual production rates were, on average, less than half of originally planned rates. Primarily because of funding limitations, DOD has reduced the annual full-rate production for 17 of the 22 proven weapons reviewed, stretching out the completion of the weapons’ production an average of 8 years (or 170 percent) longer than planned. Our work showed that DOD develops weapon system acquisition strategies that are based on optimistic projections of funding that are rarely achieved. As a result, a significant number of DOD’s weapon systems are not being procured at planned production rates, leading to program stretchouts and billions of dollars of increased costs. If DOD bought weapons at minimum rates during low-rate initial production, more funds would be available to buy proven weapons in full-rate production at more efficient rates and at lower costs. If the funding DOD assumes for its aircraft programs does not materialize, DOD may need to (1) reduce funding for some or all of the aircraft programs; (2) reduce funding for other procurement programs; (3) implement changes in infrastructure, operations, or other areas; or (4) increase overall defense funding. In other words, the likelihood of program stretchouts and significantly increased costs is very real. As the Nation proceeds into the 21st century faced with the prospect of a constrained budget, we believe DOD needs to take action now to address looming affordability problems with its aircraft investment strategy. Action needs to be taken now because, if major commitments are made to the initial procurement of all the planned aircraft programs (such as the F/A-18E/F, F-22, JSF, and the V-22) over the next several years, a significant imbalance is likely to result between funding requirements and available funding. Such imbalances have historically led to program stretchouts, higher unit costs, and delayed deliveries to operational units. Further, this imbalance may be long-term in nature, restricting DOD’s ability to respond to other funding requirements. DOD needs to reorient its aircraft investment strategy to recognize the reality of a constrained overall defense budget for the foreseeable future. Accordingly, instead of continuing to start aircraft procurement programs that are based on optimistic assumptions about available funds, DOD should determine how much procurement funding can realistically be expected and structure its aircraft investment strategy within those levels. DOD also needs to provide more concrete and lasting assurance that its aircraft procurement programs are not only militarily justified in the current security environment but clearly affordable as planned throughout their entire procurement. The key to ensuring the efficient production of systems is program stability. Understated cost estimates and overly optimistic funding assumptions result in too many programs chasing too few dollars. We believe that bringing realism to DOD’s acquisition plans will require very difficult decisions because programs will have to be terminated. 
While all involved may agree that there are too many programs chasing too few dollars, and could probably agree on the need to bring stability and executability to those programs that are pursued, it will be much more difficult to agree on which programs to cut. Nevertheless, the likelihood of continuing fiscal constraints and reduced national security threats should provide additional incentives for real progress in changing the structure and dominant culture of DOD’s weapon system acquisition process. Therefore, we recommend that the Secretary of Defense, in close consultation with the defense and budget committees of the Congress, define realistic, long-term projections of overall defense funding and, within those amounts, the portion of the annual procurement funding that can be expected to be made available to purchase new or significantly improved aircraft. In developing the projections, the Secretary should consider whether the historical average percentage of the total budget for aircraft purchases is appropriate in today’s security and budgetary environment. We also recommend that the Secretary reassess and report to the Congress on the overall affordability of DOD’s aircraft investment strategy in light of the funding that is expected to be available. The Secretary should clearly identify the amount of funding required by source, including (1) any projected savings from infrastructure and acquisition reform initiatives and (2) any reductions elsewhere within the procurement account or within the other major accounts. We further recommend that the Secretary fully consider the availability of long-term funding for any aircraft program before approving the procurement planned for that system. In commenting on a draft of this report, DOD partially concurred with our recommendations and stated that it is fully aware of the investment challenge highlighted in this report. DOD stated that its recent Quadrennial Defense Review addressed the affordability of the modernization programs that it believes are needed to meet the requirements of the defense strategy. The Quadrennial Defense Review recommended reductions in aircraft procurement plans. However, even to modernize the slightly smaller force that will result from the Quadrennial Defense Review, DOD believes that procurement funding must also rise to about $60 billion annually by fiscal year 2001, from about $44 billion in fiscal year 1997. Recognizing that overall defense budgets are not likely to increase substantially for the foreseeable future, DOD indicated that the additional procurement funds would be created by continuing efforts to reduce the costs of defense infrastructure and to fundamentally reengineer its business practices. Our recent reviews of DOD’s previous initiatives to reduce the costs of defense infrastructure and reengineer business practices indicate that the amount and availability of savings from such initiatives may be substantially less than DOD has estimated. If the projected savings do not materialize as planned, or if estimates of the procurement costs of weapon systems prove to be too optimistic, DOD will need to rebalance the procurement plans to match the available resources. This action would likely result in further program adjustments and extensions. 
Concerning aircraft procurement projections, we continue to believe that a clearer understanding of DOD’s long-term budgetary assumptions, including specific, realistic projections of funding availability and planned aircraft procurement spending, is necessary to determine the overall affordability of DOD’s aircraft investment strategy. Without this information, neither DOD nor the Congress will have reasonable assurances that the long-term affordability of near-term procurement decisions has been adequately considered. We gathered, assembled, and analyzed historical data on the overall defense budget, the services’ budget shares, the procurement budgets, and the aircraft procurement budgets. These data were derived largely from DOD’s historical FYDP databases. We did not independently establish the reliability of these data because the FYDP is the most comprehensive and continuous source of current and historical defense resource data. The FYDP is used extensively for analytical purposes and for making programming and budgeting decisions at all DOD management levels. In addition, we reviewed historical information and studies on program financing and affordability, including our own and those of CBO and others. We also gathered, assembled, and analyzed DOD-generated data on its aircraft programs and supplemented that, where necessary, with data from CBO. We reviewed DOD’s detailed positions on the affordability of its aircraft modernization programs, as presented to the Congress in a June 1996 hearing. We followed up with DOD and service officials on key aspects of that position. Our analysis included tactical aircraft, bombers, transports, helicopters, other aircraft purchases, and major aircraft modification programs. This approach removes any cyclical effects on the investment in aircraft by allowing us to view the overall amount invested, as well as the major subcomponents of that investment. We focused on procurement figures and excluded research and development costs because we could not forecast what development programs DOD would undertake over the course of the next 20 to 30 years. We used DOD’s projections for the costs of these aircraft programs (except for the JSF costs, which are CBO projections based on DOD unit cost goals) and did not project cost increases, even though cost increases have occurred in almost all previous aircraft procurement programs. All dollar figures are in constant 1997 dollars, unless otherwise noted. The National Defense Authorization Act for Fiscal Year 1997 required DOD to conduct a Quadrennial Defense Review. As part of the review, DOD assessed a wide range of issues, including the defense strategy of the United States and the force structure required. As a result, DOD may reduce the quantities procured under some weapons programs. The details of how DOD plans to implement the recommendations of the Quadrennial Defense Review will not be available until the fiscal year 1999 budget is submitted to the Congress. Our analysis, therefore, does not take into account the potential effect of implementing the recommendations of the Quadrennial Defense Review. We performed our work from March 1996 to July 1997 in accordance with generally accepted government auditing standards. As agreed with your offices, we plan no further distribution of this report until 30 days from its issue date unless you publicly announce its contents earlier. 
At that time, we will send copies to other congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III.

Marine Corps aircraft. A single-piloted, light-attack, vertical/short take-off and landing aircraft used primarily for responsive close air support. This is a remanufacture program that converts older versions to the most recent production version and provides night-fighting capability.

Air Force aircraft. A new production aircraft that modernizes the airlift fleet. It will augment the C-5, C-141, and C-130 aircraft; carry outsize cargo into austere airfields; and introduce a direct deployment capability.

Army helicopter. A new production, 24-hour, all-weather, survivable aerial reconnaissance helicopter to replace the AH-1, OH-6, and OH-58A/C helicopters and complement the AH-64 Apache. A little more than one-third of the total production aircraft will be equipped with Longbow capability.

Air Force aircraft. A new production, medium-range, tactical airlift aircraft designed primarily for transport of cargo and personnel within a theater of operations. This model uses the latest technology to reduce life-cycle costs and has more modern displays, digital avionics, computerized aircraft functions, fewer crew members, and improved cargo handling and delivery systems.

Navy aircraft. A new production, all-weather, carrier-based airborne Combat Information Center providing tactical early warning, surveillance, intercept, search and rescue, communications relay, and strike and air traffic control.

Air Force aircraft. A major modification to provide the Air Combat Command with new and improved capabilities for the AWACS radar. It involves both hardware and software changes to the AWACS.

Air Force aircraft. A new production, next-generation stealthy air superiority fighter with first-look, first-kill capability against multiple targets. It will replace the F-15C aircraft in the air superiority role.

Navy aircraft. A new production, major model upgrade to the F/A-18C/D multimission tactical aircraft for Navy fighter escort, interdiction, fleet air defense, and close-air support mission requirements. Planned enhancements over the F/A-18C/D include increased range, improved survivability, and improved carrier suitability. It will replace F/A-18C/D models, A-6, and F-14 aircraft.

Marine Corps helicopter. An upgrade to the Marine Corps AH-1W attack and UH-1N utility versions of this helicopter to convert both versions from 2-bladed to 4-bladed rotor systems and provide the attack version with fully integrated cockpits. The attack version provides close air support, anti-armor, armed escort, armed/visual reconnaissance, and fire support coordination under day/night and adverse weather conditions. The utility version provides day/night and adverse weather command and control, combat assault support, and aeromedical evacuation.

Air Force and Army aircraft. (Joint Surveillance Target Attack Radar System) A new production joint surveillance, battle management, and targeting radar system on a modified E-8 aircraft that performs real-time detection and tracking of enemy ground targets.
Air Force and Navy aircraft. A new production, next-generation, multimission strike fighter. It will replace the Air Force’s F-16 and A-10, the Marine Corps’ AV-8B and F/A-18A/C/Ds, and be a “first-day survivable complement” to the Navy’s F/A-18C/D and E/F aircraft.

Air Force and Navy aircraft. (Joint Primary Aircraft Training System) A new production joint training aircraft and ground-based training system, including simulators, that replaces the Air Force T-37B trainer aircraft, Navy T-34C trainer aircraft, and their associated ground systems.

Army helicopter. A modification program to develop and provide weapons enhancements to the AH-64 Apache attack helicopter. The Longbow program will provide a fire-and-forget Hellfire missile capability to the AH-64 Apache helicopter that can operate in night, all-weather, and countermeasures environments.

Navy helicopter. A Block II weapon systems upgrade of the Navy version of the Army Black Hawk to enhance mission area performance. It is a twin-engine, medium-lift utility or assault helicopter performing anti-submarine warfare, search and rescue, anti-ship warfare, cargo lift, and special operations.

Navy aircraft. A strike pilot training system to replace the T-2C and TA-4J for strike and E-2 and C-2 pilots. It includes the T-45A aircraft, simulators, and training equipment and materials.

Army helicopter. A new production, twin-engine air assault, air cavalry, and aeromedical evacuation helicopter that transports up to 14 troops and equipment into battle. It continues to replace the UH-1H Iroquois helicopter.

Navy, Marine Corps, and Air Force aircraft. A new production, tilt-rotor, vertical take-off and landing aircraft designed to provide amphibious and vertical assault capability to the Marine Corps and replace or supplement troop carrier and cargo helicopters in the Marines, the Air Force, and the Navy.

The following are our comments on the Department of Defense’s (DOD) letter dated June 8, 1997.

1. Although the Quadrennial Defense Review report recommended that adjustments be made to the number of aircraft to be procured and the rates at which they are to be procured, the report projected that additional procurement funding would be made available through base closures and other initiatives to reduce defense infrastructure and reengineer business practices. The details of these initiatives are not expected to be available until the fiscal year 1999 budget is submitted to the Congress. At this time, the availability of savings from planned initiatives is not clearly evident.

2. The Quadrennial Defense Review does not provide sufficiently detailed projections to judge the affordability of DOD’s new aircraft procurement plans by comparing the long-term funding expected to be available with the funding needed to fully implement those plans. We continue to believe that this type of long-term projection is needed by both DOD and the Congress to ensure that DOD’s aircraft procurement programs are clearly affordable as planned through the span of procurement.

3. We continue to believe that the $17 billion increased cost of procuring F/A-18E/F aircraft compared to F/A-18C/Ds is not warranted by the limited increases in performance that would be obtained. We recognize that, while the F/A-18E/F will provide some improvements over the F/A-18C/D, most notably in range, the F/A-18C/D’s current capabilities are adequate to accomplish its assigned missions. 
Our rebuttals to DOD’s specific comments are contained in our report, Naval Aviation: F/A-18E/F Will Provide Marginal Operational Improvement at High Cost (GAO/NSIAD-96-98, June 18, 1996).

4. Although procurement rates for F-22s during the planned low-rate initial production period were to be lowered in accordance with the Quadrennial Defense Review report, we continue to believe that the degree of overlap between development and production of the F-22 is high and that procurement of F-22s should be minimized until the aircraft demonstrates that it can successfully meet the established performance requirements during operational testing and evaluation. There has also been congressional concern about the cost and progress of the F-22 program. The Senate has initiated legislation to require us to review the F-22 development program annually.

5. We clarified the language in the report to more explicitly recommend that long-term projections of the availability of funds be used as a guide to assess the likely availability of funds to carry out a program at the time of the procurement approval decision. The Quadrennial Defense Review recognized that more procurement dollars were being planned to be spent than were likely to be available over the long term. Our intent in making this recommendation is to recognize the difficulty DOD and the Congress face and to suggest some solid analysis that would aid in evaluating the long-term commitments that are inherent in nearer term decisions to procure weapon systems. A better understanding of the long-term budgetary assumptions underlying near-term decisions would clearly aid both DOD and the Congress in ensuring that needed weapon systems are affordable in both the near and long term.

Combat Air Power: Joint Assessment of Air Superiority Can Be Improved (GAO/NSIAD-97-77, Feb. 26, 1997).
B-2 Bomber: Status of Efforts to Acquire 21 Operational Aircraft (GAO/NSIAD-97-11, Oct. 22, 1996).
Air Force Bombers: Options to Retire or Restructure the Force Would Reduce Planned Spending (GAO/NSIAD-96-192, Sept. 30, 1996).
U.S. Combat Air Power: Aging Refueling Aircraft Are Costly to Maintain and Operate (GAO/NSIAD-96-160, Aug. 8, 1996).
Combat Air Power: Assessment of Joint Close Support Requirements and Capabilities Is Needed (GAO/NSIAD-96-45, June 28, 1996).
U.S. Combat Air Power: Reassessing Plans to Modernize Interdiction Capabilities Could Save Billions (GAO/NSIAD-96-72, May 13, 1996).
Combat Air Power: Funding Priority for Suppression of Enemy Air Defenses May Be Too Low (GAO/NSIAD-96-128, Apr. 10, 1996).
Navy Aviation: AV-8B Harrier Remanufacture Strategy Is Not the Most Cost-Effective Option (GAO/NSIAD-96-49, Feb. 27, 1996).
Future Years Defense Program: 1996 Program Is Considerably Different From the 1995 Program (GAO/NSIAD-95-213, Sept. 15, 1995).
Aircraft Requirements: Air Force and Navy Need to Establish Realistic Criteria for Backup Aircraft (GAO/NSIAD-95-180, Sept. 29, 1995).
Longbow Apache Helicopter: System Procurement Issues Need to Be Resolved (GAO/NSIAD-95-159, Aug. 24, 1995).
Comanche Helicopter: Testing Needs to Be Completed Prior to Production Decisions (GAO/NSIAD-95-112, May 18, 1995).
Cruise Missiles: Proven Capability Should Affect Aircraft and Force Structure Requirements (GAO/NSIAD-95-116, Apr. 20, 1995).
Army Aviation: Modernization Strategy Needs to Be Reassessed (GAO/NSIAD-95-9, Nov. 21, 1994).
Future Years Defense Program: Optimistic Estimates Lead to Billions in Overprogramming (GAO/NSIAD-94-210, July 29, 1994). 
Continental Air Defense: A Dedicated Force Is No Longer Needed (GAO/NSIAD-94-76, May 3, 1994). | GAO reviewed the Department of Defense's (DOD) aircraft acquisition investment strategy, focusing on: (1) DOD's and the Congressional Budget Office's estimates of the annual funding needed for aircraft programs, as a percentage of the overall DOD budget, and a comparison of that percentage to a long-term historical average percentage of the defense budget; (2) the potential long-term availability of funding for DOD's planned aircraft procurements; and (3) DOD's traditional approach to resolving funding shortfalls. GAO noted that: (1) to meet its future aircraft inventory and modernization needs, DOD's current aircraft investment strategy involves the purchase or significant modification of at least 8,499 aircraft in 17 aircraft programs, at a total procurement cost of $334.8 billion (fiscal year 1997 dollars) through their planned completions; (2) DOD has maintained that its investment plans for aircraft modernization are affordable within expected future defense budgets; (3) DOD had stated earlier that sufficient funds would be available for its aircraft programs based on its assumptions that: (a) overall defense funding would begin to increase in real terms after fiscal year (FY) 2002; and (b) large savings would be generated from initiatives to downsize defense infrastructure and reform the acquisition process; (4) DOD's aircraft investment strategy may be unrealistic in view of current and projected budget constraints; (5) recent statements by DOD officials, as well as congressional projections, suggest that overall defense funding will be stable, at best, for the foreseeable future; (6) DOD's planned funding for the 17 aircraft programs in all but one year between FY 2000 and 2015 exceeds the long-term historical average percentage of the budget devoted to aircraft purchases and, for several of those years, approaches the percentages of the defense budget reached during the peak Cold War spending era of the early-to-mid-1980s; (7) the amount and availability of savings from infrastructure reductions and acquisition reform, two main claimed sources for increasing procurement funding, are not clearly evident today; (8) GAO's recent reviews of these initiatives indicate there are unlikely to be sufficient savings available to offset projected procurement increases; (9) to deal with a potential imbalance between procurement funding requirements and the available resources, DOD may need to: (a) reduce planned aircraft funding and procurement rates; (b) reduce funding for other 
procurement programs; (c) implement changes in force structure, operations, or other areas; or (d) increase total defense funding; (10) DOD has historically made long-term commitments to acquire weapon systems based on optimistic procurement profiles and then significantly altered those profiles because of insufficient funding; and (11) to avoid or minimize affordability problems, DOD needs to bring its aircraft investment strategy into line with more realistic, long-term projections of overall defense funding, as well as the amount of procurement funding expected to be available for aircraft purchases. |
The September 2001 Quadrennial Defense Review outlined a strategy to sustain and transform the military force structure that had been in place since the mid-1990s. In that review, DOD committed to selectively recapitalize older equipment items, which the department acknowledged had been neglected for too long, to meet near-term challenges and to improve near-term readiness. DOD is currently conducting a new Quadrennial Defense Review, with the report scheduled to be issued in February 2006. The results of this Quadrennial Defense Review could identify changes to DOD’s future force structure and capabilities, thereby affecting the funding needed for both current and replacement systems. Based on DOD guidance, the services develop a Program Objective Memorandum that details the specific programs and funding needed to meet DOD requirements as determined by the Quadrennial Defense Review. As part of this process, the services analyze alternative force structure, weapons systems, and support systems together with their multiyear resource implications and evaluate various trade-off options. In essence, it is a process for balancing and integrating resources among the various programs according to service and DOD priorities. The annual FYDP contains DOD’s estimates of future funding needs for programs and priorities. Through the FYDP, DOD projects costs for each element of those programs over a period of either 5 or 6 years on the basis of proposals made by each of the military services. The Office of the Secretary of Defense considers the service proposals and the policy choices made by the current administration and, where needed, makes adjustments. For example, in preparing its 2006 budget, DOD made a number of significant changes in its long-term acquisition plans to meet budget targets established by the White House, as documented in Program Budget Decision 753. The 2005 FYDP extended from fiscal year 2005 to fiscal year 2009, and the 2006 FYDP extended from fiscal year 2006 to fiscal year 2011. While the condition of the 30 equipment items we reviewed varied, we found that average fleet-wide readiness rates for most of these items declined between fiscal years 1999 and 2004. The decline in readiness generally resulted from the high pace of recent operations and the advanced age or complexity of the equipment systems. Therefore, we rated the fleet-wide condition of 22 of the selected equipment items as red or yellow. However, 8 of the 30 items, including several tactical fighter aircraft and some newer equipment items such as the Marine Corps’ Medium Tactical Vehicle Replacement, were assessed as green (see fig. 1), indicating that we found no specific problems warranting additional attention by DOD, the services, or Congress, or that any problems were already being addressed. DOD is currently conducting a Quadrennial Defense Review that will examine defense programs and policies and may change some equipment requirements. Eighteen of the equipment items we reviewed for this report were also included in our December 2003 report, and 12 of these equipment items received the same condition assessment in both analyses. For example, the surface ships examined in this study, the Navy’s DDG-51 Arleigh Burke Class Destroyer, the FFG-7 Oliver Hazard Perry Class Frigate, and the LPD-4 Amphibious Transport Dock Ship, received yellow condition ratings in both studies, as did the Air Force’s B-2 Spirit Bomber, the C-5 Galaxy Transport Aircraft, and the KC-135 Stratotanker Aircraft. 
However, for 6 of the items, the assessment changed: 3 systems’ fleet-wide condition improved (2 going from red to yellow and 1 from yellow to green), and 3 systems’ condition degraded, going from green to yellow. The condition assessments for the Marine Corps’ CH-46E helicopter and the Navy’s F/A-18 aircraft and Standard Missile-2 improved due, in part, to additional maintenance efforts and improvements that appear to address the condition concerns noted in our previous report. The condition assessments for the Army’s Abrams tank and Heavy Expanded Mobility Tactical Truck and the Marine Corps’ Light Armored Vehicle went from green to yellow largely as a result of increased use in ongoing operations overseas. For many of the equipment items included in our assessment, average fleet-wide readiness rates have declined, generally due to the high pace of recent operations or the advanced age or complexity of the systems. We assessed the fleet-wide condition of 3 equipment items as red, indicating that immediate attention is warranted by DOD, the services, and/or Congress to address problems or issues. In addition, we assessed the fleet-wide condition of 19 items as yellow, indicating that attention is warranted to address existing problems that, if left unattended, may worsen. Table 1 below shows the primary reasons used to rate selected equipment items’ condition and our assessment of those items as either red or yellow. Although selected equipment items have been able to meet wartime requirements, the high pace of recent operations appears to be taking a toll on them, and fleet-wide mission capable rates have been below service targets, particularly in the Army and Marine Corps. Further, according to officials, the full extent of the equipment items’ degradation will not be known until a complete inspection of deployed equipment is performed. Elevated flying hours in Iraq and Afghanistan, coupled with the harsh desert environment, have negatively impacted helicopters. For example, our assessment of the Army’s CH-47D/F Chinook helicopter’s condition as red reflects this platform’s mission capable rates, which were consistently below service goals. Officials stated that the aircraft is currently being flown in Iraq and Afghanistan at three times the planned peacetime rate. This usage has increased the amount of maintenance and the number of parts needed to sustain the aircraft, which in turn has negatively impacted overall readiness. Ground equipment has also been affected by high wartime usage. For example, the Marine Corps’ M1A1 Abrams tank fleet, also rated as red for condition, is being negatively impacted by operations in Iraq and a shortage of equipment maintainers due to transfers of personnel to units that are deploying. This system failed to meet its service readiness goals, and recent trends indicate a steady movement away from these targets. Several heavily used equipment items included in our review did not have mission capable rates below their targets; however, these rates have recently declined, primarily due to high wartime usage. For example, while the Army’s Abrams Tanks and Bradley Fighting Vehicles met or exceeded the Army mission capable goals, they are both on a downward trend due to a shortage of spare parts and trained technicians. 
The shortage of spare parts is driven by the number of vehicles either deployed or being reset to a predeployment condition, and the shortage of trained technicians is primarily due to the number of deployed National Guard military technicians. Both of these tracked vehicles have experienced high use in operations overseas in the past and will likely do so in the future. Similarly, while the readiness rates of the Marine Corps’ Assault Amphibian Vehicle varied by vehicle type in recent years, the gap between mission capable rates and service goals increased, indicating a decline in the material condition of this equipment. While not all of the equipment included in our review has been heavily used in recent overseas operations, in some cases the advanced age or complexity of the equipment items has contributed to readiness declines. Many of the selected systems have either a fleet-wide average age of more than 20 years, such as the Navy’s LPD-4 Amphibious Transport Dock Ship, or entered the inventory prior to the 1980s, such as the Air Force’s KC-135 Stratotanker aircraft. These systems are likely to reach the end of their useful lives in this decade unless major modernizations, some of which are planned or underway, are made. Some of the problems degrading the fleet-wide condition of these aging systems include maintenance problems due to parts shortages or obsolescence, shortages of trained maintenance personnel, corrosion, deferred maintenance, and airframe fatigue. For example, the Navy’s P-3 Orion aircraft, while not as heavily tasked as Army and Marine Corps helicopters, have played an important role in overseas operations as a reconnaissance and surveillance asset despite consistently missing their mission capable goals by a significant percentage. The condition of the P-3 fleet, which has an average age of over 24 years, has been primarily degraded by the effects of structural fatigue on its airframe and the obsolescence of communication, navigation, and primary war-fighting systems in this aircraft. Some Air Force equipment also has age-related condition issues that warrant attention and therefore received yellow ratings. For example, mission capable rates for the C-5 Galaxy Transport Aircraft were consistently below Air Force goals between fiscal years 1999 and 2004. Officials stated that the size and age of the C-5 aircraft make it maintenance intensive and that component items on the aircraft are older, making it difficult to find manufacturing sources for some parts, particularly avionics and engine components. In addition, the KC-135 Stratotanker aircraft has not met its mission capable goals due to issues associated with age and corrosion, such as problems with the landing gear’s steel brakes. Similarly, Navy surface ships examined in this study had a number of issues related to condition, and these vessels also received yellow ratings. The Navy is challenged to maintain surface ships that are, in reality, a system of systems. The failure of any one of these complex systems affects the entire ship. For example, the DDG-51 Arleigh Burke Class Destroyer, the FFG-7 Oliver Hazard Perry Class Frigate, and the LPD-4 Amphibious Transport Dock Ship all had problematic subsystems, such as limited communications capability or bandwidth, which affects their day-to-day operations, including online training and personnel activities. 
Older ships, such as the FFG-7 class, which is, on average, almost 21 years old, and the LPD-4 class, with an average age of 37 years, may be more challenging because, as the ships age, more maintenance will be required. Other older Navy equipment also had condition issues in need of attention. For example, the EA-6B Prowler consistently missed the Navy’s mission capable goal due to problems with communications equipment and wings. Our analysis showed that the fleet-wide condition of over one quarter of the equipment items included in our review was generally favorable, and consequently, we assessed the condition of 8 of the 30 selected military equipment items as green, as shown in figure 1. Not all equipment has been heavily used for operations in Iraq and Afghanistan, and for some items, use has not increased significantly from that of planned peacetime operations. This was the case for several tactical fighter aircraft. In our assessment, all three selected aircraft that provide this capability, the Air Force’s F-15 Eagle/Strike Eagle and F-16 Fighting Falcon and the Navy’s F/A-18 Hornet/Super Hornet, were at or near service mission capable rate goals. Moreover, we found that new equipment that has been heavily tasked in recent operations appears to be performing well. For example, the Family of Medium Tactical Vehicles has exceeded the Army’s fully mission capable rate goals despite operating overseas at a rate that is nine times higher than in peacetime. In addition, the Marine Corps’ Medium Tactical Vehicle Replacement vehicles are being aggressively used in support of operations in Iraq but also met their mission capable goals for fiscal years 2003 and 2004. These trucks are both relatively new; the Family of Medium Tactical Vehicles is on average 6 years old, and the Medium Tactical Vehicle Replacement is on average 3 years old. In addition, we assessed the fleet-wide condition of some older equipment items favorably. For example, the average age of the Army’s OH-58D Kiowa is about 13 years with a life expectancy of 20 years; however, these reconnaissance helicopters have met or exceeded their mission capable goals from 1999 through 2004 while exceeding their planned flight hours in recent operations. With an average age of almost 16 years, the M113 Armored Personnel Carrier has not experienced a significant decline in mission readiness as a result of recent operations in Iraq and Afghanistan. The Army’s High Mobility Multi-Purpose Wheeled Vehicles (HMMWV) are experiencing usage (i.e., operational tempo) that is six times their normal peacetime rate. Despite concerns over the availability of their armored protection, these vehicles exceeded Army readiness goals for the past 6 years and received a green rating. The military services have identified near- and long-term program strategies and funding plans to ensure that most of the 30 selected equipment items can meet defense requirements, but some gaps remain. For the 30 selected equipment items, we found that 20 of the services’ near-term program strategies have gaps in that they do not address capability shortfalls, full funding is not included in DOD’s 2006 budget request, or there are supply and maintenance issues that may affect near-term readiness. 
Additionally, the long-term program strategies and funding plans are incomplete for 22 of the equipment items we reviewed in that future requirements are not fully identified, studies are not completed, funding for maintenance and technological upgrades may not be available, or replacement systems are delayed or not yet identified. DOD is required to develop sustainment plans under 10 U.S.C. § 2437, but this statute applies to only 9 of the selected equipment items. Although the services have identified near- and long-term program strategies and funding for most of the equipment items we reviewed, the gaps we identified may threaten DOD’s ability to meet some future capability requirements. The services have not fully identified near-term program strategies and funding plans for 20 of the 30 equipment items we reviewed, including 7 of the 9 selected items covered by 10 U.S.C. § 2437. One of the items that will not be covered by this statute, the Marine Corps’ CH-46E Sea Knight helicopter, was the only item we assessed as red for its near-term program strategy and funding plan because it may be unable to meet its near-term requirements. We assessed the near-term program strategies and funding plans of 19 of the 30 equipment items in our review as yellow because the services’ program strategies for sustaining equipment lack sufficient planning or full funding to meet near-term requirements. Alternatively, the services have planned program and funding strategies to correct equipment deficiencies or improve equipment capabilities and safety for 10 of the 30 equipment items in our review so that those items can meet near-term requirements; therefore, we assessed their near-term program strategies and funding plans as green, as shown in figure 1. The services’ near-term program strategies to sustain or modernize equipment and address current condition issues include restoring equipment to its predeployment condition, remanufacturing or recapitalizing equipment, procuring new equipment, improving equipment through safety or technological upgrades, or improving maintenance practices. Table 2 below shows the primary reasons used to rate selected equipment items’ near-term program strategies and funding plans and our assessment of those items as either red or yellow. Without developing complete near-term plans, identifying the associated funding needs to ensure that all key equipment items can be sustained and modernized, and assessing the risk involved if gaps in these strategies are not addressed, DOD may be unable to meet some future requirements for defense capabilities. Some of the services’ near-term program strategies do not address the issues that affect the condition of the equipment in the near term; thus, 5 of the 30 selected equipment items received a yellow rating, as shown in table 2. For example, the Marine Corps identified a shortfall in the capability of the Assault Amphibian Vehicle to conduct parts of its war-fighting doctrine; however, instead of upgrading the vehicle’s capabilities, the service plans to restore the vehicle to its original capability while awaiting its replacement. Although the Navy has a plan to correct serious LPD-4 Amphibious Transport Dock ship class deficiencies, those ships that are within 5 years of decommissioning can, by law, receive only safety modifications, resulting in a wide variance in the condition of ships in the class. 
Furthermore, while the Navy is making structural inspections and repairs to ensure that there will be sufficient P-3 Orion aircraft to meet day-to-day requirements next year, it has not funded some improvements to communications and defense systems, which will continue to degrade the ability of this aircraft to fulfill all of its missions. The full funding requirements for nine of the Marine Corps and Army near-term strategies we reviewed were not included in DOD’s fiscal year 2006 budget request; therefore, we rated these equipment items as yellow, as shown in table 2. According to service officials, the services submit their budgets to DOD, and the department has the authority to increase or decrease the service budgets based upon the perceived highest-priority needs. As shown in table 3 below, the Marine Corps identified $314.7 million in requirements for four of its selected equipment items that were not funded in DOD’s 2006 budget request. The four equipment items for which the Marine Corps did not request funding are a concern because a capability or need that the service identified as a priority may not receive funding unless Congress intervenes. For example, the Marine Corps identified but did not request $113 million in funding needed to complete the standardization of its older Light Armored Vehicles. The Marine Corps also identified funding shortages in its tank remanufacturing program for fiscal years 2006 and 2007, noting that only 33 percent of the plan has been funded. In addition, for five of the selected Army items, DOD has not included funding for part of the near-term program strategies in its regular 2006 budget request. Instead, the Army is relying on supplemental appropriations or congressional adjustments to its regular appropriations to fund these activities, and we rated these items yellow given the uncertainty of future supplemental appropriations or congressional adjustments. For example, the Army requested $1.4 billion in the fiscal year 2005 supplemental in order to accelerate recapitalization of the Bradley Fighting Vehicles by producing 93 vehicles to replace combat losses and 554 to meet its modernization needs, and it has begun planning another request for supplemental appropriations to fund other near-term procurement requirements associated with its transformational objectives. Further, in the past, the Army has consistently relied on supplemental appropriations and congressional adjustments for the M113 Armored Personnel Carrier, and it included $132 million in the fiscal year 2005 supplemental funding request to recapitalize vehicles deployed for Operation Iraqi Freedom. Anticipated parts shortages or maintenance issues may affect the services’ ability to maintain 6 of the 30 selected equipment items we reviewed in adequate condition; therefore, we assessed their near-term program strategies and funding plans as yellow or, in one case, red, as shown in table 2. Of the 30 equipment items we reviewed, the Marine Corps’ CH-46E Sea Knight helicopter received a red rating for its near-term program strategy and funding plan because the service may be unable to meet its near-term requirements due to potential aircraft and repair parts shortages caused by the age of the aircraft. Because of fielding delays of its replacement aircraft, the MV-22, the CH-46E will not be retired as originally scheduled, which may lead to additional repair parts shortages. 
The uncertainty about whether the near-term program strategy addresses existing parts shortages is also a concern for items such as the Navy's F/A-18 Hornet/Super Hornet aircraft and resulted in a yellow rating. Although the Navy is currently able to maintain readiness for the Super Hornet fleet, there is an anticipated shortage of critical spare parts, such as extra fuel tanks and bomb racks, and the current program strategy does not fund the efforts necessary to ensure adequate replacements. Additionally, we rated the Air Force's KC-135 Stratotanker aircraft as yellow because officials expect its age-related maintenance issues, such as fuel bladder leaks and parts obsolescence, to increase, resulting in additional maintenance requirements. Officials also stated that the severity of potential problems from newly discovered corrosion remains unknown, so the potential exists for additional maintenance requirements. We rated 10 of the 30 equipment items examined in this review as green, as shown in figure 1, because we did not identify any significant program or funding issues in the near term. The services had identified program and funding strategies to correct these equipment items' immediate deficiencies or to improve the platforms' capabilities. For the selected equipment items that are being heavily used for operations in Iraq and Afghanistan and received a green rating, such as the Army's AH-64A/D Apache helicopter and the Marine Corps' AV-8B Harrier jet, the services are using a combination of activities, including restoring the equipment to predeployment status, remanufacturing or recapitalizing the equipment, or procuring new equipment. For example, the Army is restoring the Apache helicopters being used in combat while concurrently remanufacturing the older AH-64A variants into newer AH-64D variants. In some cases, the services have funded plans that upgrade the equipment items to address structural or safety concerns and improve combat capabilities, such as for the Air Force's F-15 and F-16 fighter aircraft and the Navy's DDG-51 Arleigh Burke Class destroyers. For other items, the services modified their maintenance practices to increase efficiencies and address concerns. For example, the Air Force modified the stealth maintenance procedures on its B-2 Spirit bomber, reducing the steps and time required to perform that maintenance. The services have not developed or fully funded the long-term program strategies for 21 of the 30 selected equipment items. The statute requiring DOD to develop sustainment plans, 10 U.S.C. § 2437, applies to only 9 of the selected equipment items. We assessed 7 of the selected equipment items as red, only 2 of which will be covered by this statute, because the services' program strategies and funding plans to meet long-term requirements are not fully identified, studies to determine future system requirements are not complete, funding for maintenance or technological upgrades may not be available, or replacement systems are delayed or not identified; in some cases, the selected equipment items may be unable to meet their long-term requirements. We assessed the long-term program strategies and funding plans of 14 of the selected equipment items in our review as yellow because they are experiencing similar gaps in their long-term program strategies and funding plans, but the consequences would be less severe.
In contrast, we assessed the long-term program strategies and funding plans of 9 of the 30 equipment items in our review as green, as shown in figure 1, because the services have program strategies and funding planned to improve or upgrade equipment capabilities and safety, or to replace the equipment items, so that they can meet long-term requirements. Some of the services' long-term program strategies include improving or modernizing the equipment through upgrades, recapitalizing older models to newer ones, or replacing the equipment with newer, more modern equipment, including equipment associated with DOD's force structure changes. Table 4 below shows the primary reasons for rating selected equipment items' long-term program strategies and funding plans as either red or yellow. As with incomplete near-term strategies, without complete long-term plans to ensure that all key equipment items can be sustained and modernized—and without an assessment of the risk involved if gaps in these strategies are not addressed—DOD may be unable to meet some future defense requirements. At this time, DOD has not fully identified the future requirements or the long-term funding needs for seven of our selected equipment items, resulting in red or yellow assessments, as shown in table 4, depending on the urgency or severity of the gaps in program strategies or funding plans. The Army's lack of identified future requirements and funding plans led us to assess its Bradley Fighting Vehicle and M113 Armored Personnel Carrier as red. In some cases, follow-on system requirements have not been established, but the services have plans to sustain the items until the replacement system is available, so we assessed these items as yellow. For example, the Marine Corps plans to replace its M1A1 Abrams tank and Light Armored Vehicle with the Marine Air-Ground Task Force Expeditionary Force Fighting Vehicle, although at the time of our review, it had not completely identified the program requirements or funding needed for the replacement vehicle. However, the Marine Corps does have plans in place to ensure that both the M1A1 and the Light Armored Vehicle are available until the Marine Air-Ground Task Force Expeditionary Force Fighting Vehicle is fielded and has received supplemental funding for these plans. The Army recently finalized the strategy for its wheeled vehicles, such as the HMMWV, but some procurement and recapitalization plans have not been fully funded, and some specific actions or time frames were not included. Therefore, we assessed the three Army wheeled vehicles' long-term program strategies and funding plans as yellow. For example, we noted that the Army's Tactical Wheeled Vehicle and Trailer Modularity and Modernization Strategy showed anticipated procurements for the HMMWV that were not reflected in DOD's 2006 budget request. Further, while the strategy notes that future block upgrades for the Family of Medium Tactical Vehicles are planned and describes the sustainment programs the upgrades will include, it does not identify any specific actions or time frames for these upgrades. DOD has not yet completed the studies it needs to fully identify the program strategies and funding plans for 2 of the 30 selected equipment items assessed in this review. As shown in table 4, we assessed the long-term program strategy and funding plan for the Air Force's KC-135 Stratotanker aircraft as red because the congressionally mandated study to determine its replacement has experienced, and may continue to experience, delays.
Meanwhile, the KC-135 fleet, with an average age of about 44 years, continues to experience age-related problems, and delays in fielding a replacement further exacerbate the difficulty of maintaining the existing fleet over the long term. We assessed the long-term program strategy and funding plan for the Air Force's C-5 transport aircraft, with an average fleet age of about 26 years, as yellow because the Air Force remains uncertain about the size of the final C-5 fleet and whether to fund some additional C-5 aircraft upgrades while awaiting completion of DOD's Mobility Capabilities Study. This study was expected to be completed in the summer of 2005; however, at the time this report was issued, results were not available. The availability of funding for ongoing maintenance and technological upgrades in past and future years may affect the long-term program strategies and funding plans for seven of the selected equipment items. As shown in table 4, we assessed the long-term strategies and funding plans for two items, the Navy's P-3 Orion aircraft and the Standard Missile-2, as red because the limited funding for maintenance and technological upgrades may have serious consequences, such as negatively affecting their ability to meet war-fighting requirements. The Navy has identified a plan to address the obsolescence of the mission systems in the P-3 Orion aircraft over the long term but at this time has not officially approved or funded this plan. In addition, the Standard Missile-2, which has recently seen improved readiness ratings because DOD increased operation and maintenance funding, is not scheduled for the same level of funding in the long term, which may reduce the number of available missiles. DOD budget decisions to reduce funding for maintenance and upgrades have the potential to adversely affect five items, so we assessed the long-term program strategies and funding plans for these items as yellow. For example, decreases in the Navy's planned operation and maintenance funding across all surface ships in the fleet may result in deferred maintenance and may adversely affect the future material condition of the three classes of ships included in this review: the DDG-51 Arleigh Burke Class destroyers, the FFG-7 Oliver Hazard Perry Class frigates, and the LPD-4 Amphibious Transport Dock ships. Replacement systems have either been delayed or are not yet identified for 5 of the 30 selected equipment items examined in this review, and we rated these items as red or yellow, as shown in table 4. Two of these items were assessed as red because of the urgency and severity of the delays' impact on the services' capabilities and their ability to meet future requirements. For example, we assessed the long-term program strategy and funding plan for the Marine Corps' CH-53E Super Stallion helicopters as red because the Marine Corps has not yet identified a replacement for the CH-53E, although initial fielding of a replacement is planned for 2015. According to officials, the Marine Corps must maintain enough CH-53E helicopters to support Marine Corps operations until the initial fielding of the Heavy Lift Replacement aircraft. Officials estimate that, if the current high usage rate and expected attrition rates hold true, the number of CH-53E helicopters may fall below the number necessary to remain in service until the Heavy Lift Replacement becomes available.
The remaining three items’ long-term program strategies and funding plans were assessed as yellow because the effect of the uncertainties or delays do not appear to be as urgent or severe. In some instances, delays and uncertainties affecting the sustainment programs of selected equipment items are related to DOD difficulties in acquiring their replacements. For example, uncertainty over the potential for delays in the Joint Strike Fighter Program affects the long-term strategy and funding for the Marine Corps’ AV-8B Harrier jet and the Navy’s F/A-18 fighter aircraft and these systems were rated yellow. We determined that 9 of the 30 selected equipment items examined in this review have no significant program or funding issues in the long term and therefore received green ratings as shown in figure 1. For example, the Air Force’s F-15 aircraft upgrades are fully funded and designed to keep the aircraft viable and functioning through at least 2025. In addition, the Marine Corps’ plans provide sufficient numbers of Medium Tactical Vehicle Replacement vehicles to equip all of its units in the long term. Moreover, the Army has reprogrammed funds from the cancellation of the Comanche program to fund other aviation modernization strategies, including those that improve the capability and lifespan of the CH-47D/F Chinook and the AH-64A/D Apache helicopters. Since our last review of the condition of selected military equipment in 2003, overall readiness rates for most selected equipment items have continued to decline and some of the services’ near- and long-term program strategies lack complete sustainment and modernization plans or are not fully funded. Continued high use of these equipment items to support current operations and the advancing ages of the systems suggest that DOD will be challenged in meeting future equipment requirements without significant upgrades to its inventory. Furthermore, because activities to refurbish and replace vehicles, weapons, and equipment used for operations in Iraq and Afghanistan are being funded primarily through supplemental appropriations as opposed to being programmed in DOD’s Future Years Defense Program, future funding is uncertain. Moreover, DOD faces challenges to sustain and modernize its current equipment while continuing these operations and transforming to a new force structure. DOD is currently conducting its Quadrennial Defense Review, which could change the future requirements for some military equipment. In light of these challenges, it is increasingly important that DOD focus its resources on the equipment items that are key to meeting future defense requirements. Without a more focused investment strategy, DOD runs the risk of a continued decline in future equipment readiness. While DOD is required to provide sustainment plans, including time frames and projected budgetary requirements, for some military equipment in accordance with 10 U.S.C. § 2437, this statute does not apply to many key military equipment items we reviewed. For example, those equipment items that do not have a replacement system in development are not covered by this statute. In fact, most of the equipment items that we assessed as red because of long-term strategy and funding issues were not covered by this statute. 
Without developing complete sustainment and modernization plans and identifying funding needs for all priority equipment items through the end of their expected useful lives, including those items not already covered by law, DOD risks not being able to meet some future equipment requirements. Furthermore, without communicating these plans and funding needs to Congress, lawmakers will not have the clear picture of DOD's progress on equipment sustainment and modernization that they need to provide effective oversight of these processes. To ensure that DOD can sustain key equipment items to meet future equipment requirements and to provide greater visibility over key equipment items to Congress, we recommend that, after the department completes its Quadrennial Defense Review, the Secretary of Defense, in consultation with the Secretaries of the Military Services, take the following two actions:

Reassess the near- and long-term program strategies for sustaining and modernizing key equipment, particularly those items not covered by 10 U.S.C. § 2437, to ensure that the plans are complete and that the items are sustainable until they reach the end of their serviceable life or a replacement system is fielded. Specifically, this reassessment should (1) detail the strategies to sustain and modernize key equipment systems until they are retired or replaced; (2) report the costs associated with the sustainment and modernization of key equipment and identify these funds in the Future Years Defense Program; and (3) identify the risks involved in delaying or not fully funding the strategies, and the steps the department is taking to mitigate the associated risks, for those strategies that are delayed or are not fully funded.

Provide the information in the above recommendation to Congress at the same time the department submits its annual budget request, to ensure that Congress has the visibility it needs to provide effective oversight of DOD's program strategies.

Congress should require the Secretary of Defense to report on program strategies and funding plans to ensure that DOD's budget decisions address deficiencies related to key military equipment. We suggest that this report be provided in conjunction with DOD's annual budget submissions and reflect the results of the department's Quadrennial Defense Review. Specifically, as stated in our recommendations, the report should (1) detail the strategies to sustain and modernize key equipment systems until they are retired or replaced; (2) report the costs associated with the sustainment and modernization of key equipment and identify these funds in the Future Years Defense Program; and (3) describe the risks involved in delaying or not fully funding the strategies, and the steps the department is taking to mitigate the associated risks, for those strategies that are delayed or are not fully funded. In written comments on a draft of this report, DOD partially concurred with our recommendation that it should reassess the near- and long-term program strategies for sustaining and modernizing key equipment after the department's Quadrennial Defense Review but did not concur with our recommendation that the department report these plans to Congress. The department's written comments are reprinted in their entirety in appendix III.
In partially concurring with our first recommendation that it reassess the near- and long-term program strategies for sustaining and modernizing key equipment, the department stated that, through its current budget processes, it already executes an annual procedure to assess program strategies to ensure equipment sustainment and modernization that can support the most recent defense strategy. According to the department, these budget reviews consider the strategies and costs to sustain and modernize equipment and the risks incurred by not fully funding these strategies; therefore, the resulting budget reflects the department's best assessment of a balanced, fully funded budget that most efficiently accomplishes the national security mission within its limited resources. While we acknowledge that these budget processes may provide a department-level review of what is needed to accomplish the national security mission, the department's budget processes and the Future Years Defense Program do not provide detailed strategies that identify both the costs associated with sustaining and maintaining key equipment and the risks involved in delaying or not fully funding the strategies. Without detailed plans, the department does not have sufficient information to ensure that adequate funding is provided or that it is taking the necessary steps to mitigate risks associated with strategies that are delayed or not fully funded. We continue to believe that, upon completion of the Quadrennial Defense Review, the department, in conjunction with the military services, needs to develop a more comprehensive and transparent approach for assessing the condition of key equipment items, developing program strategies to address critical equipment condition deficiencies, prioritizing the required funding, and mitigating risks associated with delaying or not fully funding these strategies. Therefore, we continue to believe our recommendation has merit. The department did not concur with our second recommendation that the Secretary of Defense provide detailed strategies and costs of sustaining key equipment items, and the associated risks of delaying or not fully funding these strategies, in an annual report to Congress to ensure that Congress has the visibility it needs to provide effective oversight of DOD's program strategies. DOD believes that submitting an additional report concurrent with the annual budget would be a duplication of effort. We believe that the information included in the President's Budget does not provide Congress with sufficient information on the strategies, funding, and risks associated with maintaining key equipment items until their replacement systems are fielded. In our report, we identify a number of examples of inconsistencies between program strategies and the funding needed to sustain and maintain key equipment items that are not reported in the department's budget documents. The department is not currently required to report sustainment plans for some of these critical items to Congress. We believe that Congress needs to be assured that DOD's budget decisions address deficiencies related to key military equipment that must be maintained and sustained until the end of their serviceable lives, including items currently not covered by 10 U.S.C. § 2437. Therefore, we have added a Matter for Congressional Consideration. Lastly, DOD provided technical comments concerning our assessments of specific equipment items in appendix II.
We reviewed and incorporated these technical comments, as appropriate. In some instances, the data the department provided in its technical comments resulted from program and funding decisions made subsequent to our review. In one case, we changed our original color-coded assessment of a key equipment item based on these decisions. The Army approved the replacement for the OH-58D Kiowa helicopter, the Armed Reconnaissance Helicopter; therefore, we changed our original assessment of the Kiowa's long-term program strategy and funding plans from a yellow rating to a green rating. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions, please contact me at (202) 512-8365 or by e-mail at solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are included in appendix IV. To update congressional committees on key equipment items that warrant immediate attention by the Department of Defense (DOD) and/or Congress, we conducted an analysis of 30 selected military equipment items. We performed an independent evaluation of (1) the condition of key equipment items and (2) the services' near- and long-term program strategies and funding for the sustainment, modernization, or replacement of these equipment items. This report follows our December 2003 report, which assessed the condition, program strategy, funding, and wartime capability of 25 selected military equipment items. The current report increases the number of equipment items to 30 and evaluates the condition, near-term program strategy and funding plans, and long-term program strategy and funding plans of each system. These changes reflect the current operational environment and the critical linkage between a successful program strategy and funding. We examined the near and long terms separately to delineate the impact of current operations on the near term and their possible effect on long-term transformational efforts. To select the 30 equipment items we reviewed, we included 18 of the equipment items reviewed in our December 2003 report and, based on input from the military services, your offices, and our prior work, judgmentally selected an additional 12 items. We did not include 7 of the 25 items from our previous review so that we could focus on other selected systems that we believed were more in need of examination. Our final selections included those items that the military services believed were most critical to their missions and that have been in use for a number of years. The 30 equipment items include 9 from the Army, 6 from the Air Force, 7 from the Navy, and 8 from the Marine Corps. Our observations and assessments were made on the active duty inventory as well as equipment in the National Guard and reserve forces; including reserve equipment represents another difference between this review and our December 2003 report. Our assessments apply only to the 30 equipment items we reviewed, and the results of our assessments cannot be projected to the entire inventory of DOD equipment. Because Section 805 of the Ronald W.
Reagan National Defense Authorization Act for Fiscal Year 2005—which amends Title 10 of the U.S. Code (Pub. L. No. 108-375, § 805)—does not apply to existing systems for which a replacement system will reach initial operational capability before October 1, 2008, we did not assess compliance with this section of the act. Each equipment item was assessed individually on its condition and its near- and long-term program strategy and funding. To determine which equipment items require additional attention by the department, the military services, and/or Congress, we developed an assessment framework based on three criteria: (1) the extent of the existence of a problem or issue, (2) the severity of the problem or issue, and (3) how soon the problem or issue needs to be addressed. To indicate the existence, severity, or urgency of problems identified for the 30 selected equipment items, we used a traffic light approach (red, yellow, or green) as follows:

Red indicates a problem or issue that is prevalent and severe enough to warrant immediate attention by DOD, the military services, and/or Congress.

Yellow indicates the existence of a problem or issue that warrants attention by DOD, the military services, and/or Congress and that, if left unattended, may worsen.

Green indicates that we did not identify any specific problems or issues at the time of our review, or that any existing problems or issues we identified are either not severe enough in nature to warrant immediate action or are already being addressed by DOD, the military services, and/or Congress.

Individual assessments were based on systematic decisions with clear and, wherever possible, measurable criteria. Input from relevant officials—program managers, unit staffs, operators, maintainers, and engineers—was incorporated in every step of the process. We interviewed officials from components (active, guard, and reserve forces) of all four of the military services, two selected combatant commands, and several major service commands. We visited selected units and maintenance facilities to observe the equipment items during operation or under maintenance. We also discussed deployed and nondeployed equipment condition, program strategy, and funding with program managers and equipment operators and maintainers, and we included these indicators where appropriate. The specific military activities we visited or obtained information from include the following:

Office of the Assistant Secretary of Defense, Reserve Affairs, Arlington, Va.;
U.S. Air Force, Air Combat Command, Langley Air Force Base, Va.;
U.S. Air Force, Air Mobility Command, Scott Air Force Base, Ill.;
U.S. Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio;
U.S. Air Force, Ogden Air Logistics Center, Hill Air Force Base, Utah;
U.S. Air Force, Oklahoma City Air Logistics Center, Tinker Air Force Base, Okla.;
U.S. Air Force, Pacific Command, Hickam Air Force Base, Hawaii;
U.S. Air Force, Plans and Programs, Air Force Headquarters, Arlington, Va.;
U.S. Air Force Reserve Command, Robins Air Force Base, Ga.;
U.S. Air Force, Warner Robins Air Logistics Center, Robins Air Force Base, Ga.;
U.S. Air National Guard, Headquarters, Andrews Air Force Base, Md.;
U.S. Air National Guard, Hickam Air Force Base, Hawaii;
U.S. Army, Headquarters, Arlington, Va.;
U.S. Army National Guard, Headquarters, Arlington, Va.;
U.S. Army National Guard, 81st Brigade, Washington National Guard, Camp Murray, Wash.;
U.S. Army National Guard, Hawaii National Guard, Ft. Ruger, Hawaii;
U.S. Army, Anniston Army Depot, Anniston, Ala.;
U.S. Army Aviation and Missile Command, Redstone Arsenal, Ala.;
U.S. Army, Corpus Christi Army Depot, Corpus Christi, Tex.;
U.S. Army, Directorate of Logistics, Fort Lewis, Wash.;
U.S. Army, First Army, Ft. Gillem, Ga.;
U.S. Army, Fifth Army, Ft. Hood, Tex.;
U.S. Army Forces Command, Ft. McPherson, Ga.;
U.S. Army, 4th Infantry Division, Ft. Hood, Tex.;
U.S. Army Materiel Command, Ft. Belvoir, Va.;
U.S. Army, Pacific, Ft. Shafter, Hawaii;
U.S. Army Reserve Command, Ft. McPherson, Ga.;
U.S. Army Tank-Automotive and Armaments Command, Warren, Mich.;
U.S. Army, III Corps, Ft. Hood, Tex.;
U.S. Central Command, MacDill Air Force Base, Tampa, Fla.;
U.S. Marine Corps, Aviation Plans, Policies, Programs, Budgets, Joint and External Matters Branch, Arlington, Va.;
U.S. Marine Corps, Aviation Weapons Systems Requirements Branch, Pentagon, Arlington, Va.;
U.S. Marine Corps, Installations and Logistics, Navy Annex, Arlington, Va.;
U.S. Marine Corps, Logistics Plans, Policies and Strategic Mobility Division, Navy Annex, Arlington, Va.;
U.S. Marine Corps, 3rd Marine Air Wing, Miramar, Calif.;
U.S. Marine Corps, I Marine Expeditionary Force, Camp Pendleton, Calif.;
U.S. Marine Corps, Marine Forces Pacific, Camp Smith, Hawaii;
U.S. Marine Corps, Marine Forces Atlantic Command, Norfolk, Va.;
U.S. Marine Corps, Naval Air Systems Command, Naval Air Station, Patuxent River, Md.;
U.S. Marine Corps, Programs and Resources, Office of the Deputy Commandant, Pentagon, Arlington, Va.;
U.S. Marine Corps, Reserve Command, New Orleans, La.;
U.S. Marine Corps Systems Command, Quantico, Va.;
U.S. Marine Corps, Army Tank-Automotive and Armaments Command, Warren, Mich.;
U.S. Navy, Commander Fleet Forces Command, Norfolk, Va.;
U.S. Navy, Commander Electronic Attack Wing, Pacific, Whidbey Island, Wash.;
U.S. Navy, Commander Patrol and Reconnaissance Wing 10, Whidbey Island, Wash.;
U.S. Navy, Commander Strike Fighter Wing, Atlantic, Virginia Beach, Va.;
U.S. Navy, Commander Strike Fighter Wing, Pacific, Lemoore, Calif.;
U.S. Navy, Commander, U.S. Pacific Fleet, Pearl Harbor, Hawaii;
U.S. Navy, Headquarters, Washington, D.C.;
U.S. Navy, Naval Air Systems Depot, Jacksonville, Fla.;
U.S. Navy, Naval Air Systems Depot, North Island, Coronado, Calif.;
U.S. Navy, Naval Air Systems Command, Patuxent River, Md.;
U.S. Naval Reserve Command, New Orleans, La.;
U.S. Navy, Naval Sea Systems Command, Washington, D.C.;
U.S. Navy, Naval Surface Force/U.S. Naval Air Force—Atlantic, Norfolk, Va.;
U.S. Navy, Naval Surface Force/U.S. Naval Air Force—Pacific, San Diego, Calif.;
U.S. Navy, Naval Weapons Station, Seal Beach, Calif.; and
U.S. Pacific Command, Camp Smith, Hawaii.

Assessments of the condition of these 30 equipment items were based on a comparison of readiness metrics against service goals and on the existence, severity, or urgency of condition problems. We obtained data on equipment age, expected service life, mission capable rates, utilization rates, and various other metrics for fiscal years 1999 through 2004. Readiness metrics, such as material readiness rates and mission capable rates, were a primary component of our assessments. We considered not only whether an equipment item met its readiness goals but also, when it did not, the size of the gap between the readiness achieved and the service's readiness objective and the significance of that difference. Equipment items were further evaluated against metrics such as utilization rates, cannibalization rates, failure rates, and depot maintenance data.
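To illustrate the kind of readiness comparison described above, the following minimal sketch shows how the gap between an equipment item's mission capable rate and its service goal could be computed. This is an illustration only, not the tool we used; the record structure, function names, and data values are hypothetical.

    # Minimal sketch of the readiness-gap comparison described above.
    # All names and values are illustrative; the actual rates appear only
    # in the report's figures, which are not reproduced here.
    from dataclasses import dataclass

    @dataclass
    class ReadinessRecord:
        item: str
        fiscal_year: int
        mission_capable_rate: float  # percent of fleet mission capable
        service_goal: float          # service readiness objective, percent

    def readiness_gap(record: ReadinessRecord) -> float:
        """Positive values mean the item fell short of its goal."""
        return record.service_goal - record.mission_capable_rate

    # Hypothetical figures showing a decline across fiscal years.
    records = [
        ReadinessRecord("Example tank", 2003, 91.0, 90.0),
        ReadinessRecord("Example tank", 2004, 86.5, 90.0),
    ]
    for r in records:
        gap = readiness_gap(r)
        status = "met goal" if gap <= 0 else f"short by {gap:.1f} points"
        print(f"{r.item} FY{r.fiscal_year}: {status}")

As the second record illustrates, the size of a shortfall, not merely whether a goal was missed, is what we weighed in our assessments.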
We gauged the significance of the rates and data as they reflected on each item's condition. Further, this analysis evaluated the extent to which each of the equipment items is being used for current operations and its performance while deployed. Finally, we assessed specific problems with each item that may or may not have been captured in other metrics. Our evaluations of DOD's near- and long-term program strategies and funding plans were based on the existence of near- and long-term plans and the extent to which there were gaps in funding for these plans as projected in DOD's Future Years Defense Program (FYDP). Both near- and long-term plans include sustainment, modernization, or recapitalization of the equipment items in order to meet mission requirements. Near-term plans are those that address current condition problems, as well as those projected until 2007; long-term plans address issues anticipated from 2008 until the replacement system enters the inventory or until the system reaches the end of its expected service life. We first assessed whether near- and long-term plans were realistic and comprehensive. For our near-term assessment, we examined whether the plans meet near-term requirements and address issues related to current condition and the need for near-term technological upgrades. For our long-term evaluation, we considered whether modernization or sustainment plans were sufficient given the timing of the replacement and the expected service life of the equipment item. Next, we determined the extent to which there were gaps in funding for both near- and long-term programs as projected in the FYDP. We then considered whether the strategy and its funding, in the near and long term, addressed other concerns that might significantly affect the program. While we attempted to obtain consistent metrics for each of the three categories across all four of the military services, data availability varied significantly by service and type of equipment. Our assessments are based on the data available from multiple sources and represent the problems and issues we identified at the specific point in time that we conducted our work; these can change quickly given current events. Although our assessments for each of the three categories (condition, near-term program strategies and funding plans, and long-term program strategies and funding plans) are largely qualitative in nature and are derived from consensus judgments, our analyses are based on data and information provided by the military services and on discussions with military service officials and program managers for the individual equipment items. We assessed the reliability of the services' equipment readiness data by (1) comparing key data elements to our observations of equipment items at selected units, (2) reviewing relevant documents, and (3) interviewing knowledgeable officials. We determined that the data obtained from DOD, the military services, and the combatant commands were sufficiently reliable for our use. We performed our review from July 2004 through July 2005 in accordance with generally accepted government auditing standards.
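One way to see how the three criteria described earlier interact is the simplified sketch below, which expresses the red/yellow/green definitions as decision rules. Our actual ratings were consensus judgments informed by multiple data sources, so this encoding is illustrative only; the function and parameter names are hypothetical.

    # Simplified, illustrative encoding of the traffic light framework.
    # The report's ratings were consensus judgments, not the output of a
    # program; this sketch only mirrors the stated definitions.
    def rate(problem_exists: bool, severe: bool, urgent: bool,
             being_addressed: bool) -> str:
        if not problem_exists:
            return "green"            # no specific problems identified
        if being_addressed and not (severe or urgent):
            return "green"            # minor or already being addressed
        if severe and urgent:
            return "red"              # warrants immediate attention
        return "yellow"               # warrants attention; may worsen

    # Example: an unfunded requirement that is not yet severe or urgent
    # would rate yellow under these rules.
    print(rate(problem_exists=True, severe=False, urgent=False,
               being_addressed=False))  # prints "yellow"

A funding gap in the FYDP, for example, would enter such logic through the severity and urgency judgments.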
For the 30 equipment items, each assessment provides the status of the equipment item at the time of our review. The profile presents a general description of the equipment item. Each assessment area—condition, near-term program strategy and funding, and long-term program strategy and funding—includes a green, yellow, or red rating indicating the existence, severity, or urgency of problems identified based on our observations of each equipment item, discussions with service officials, and reviews of service-provided metrics. First delivered in the early 1980s, the Abrams is the Army's main battle tank and destroys enemy forces using enhanced mobility and firepower. Variants of the Abrams include the M1, M1A1, and M1A2, and there are a total of 5,848 tanks of all variants in the fleet. The M1A1 and M1A2 have a 120 mm main gun, a powerful turbine engine, and special armor. There are 5,109 M1A1 and M1A2 tanks in the inventory, and their estimated average age is 12 years. Officials state that in the future the Army plans to use only a two-variant fleet of the Abrams, consisting of the M1A1 Abrams Integrated Management and the upgraded M1A2 System Enhancement Program—the primary difference being the digital architecture of the System Enhancement Program variant. The M1 variant is expected to be phased out by 2015. The Abrams is expected to remain in the Army's inventory until at least 2045. In our previous report, we assessed the condition of the Abrams tank as green because it consistently met its mission capable goal of 90 percent from fiscal year 1998 through fiscal year 2002. In this review, however, we assessed the condition of the Abrams tank as yellow because, while it generally met or exceeded the Army's mission capable goal between fiscal years 1999 and 2003, as shown in figure 3 below, the rates declined between fiscal years 2003 and 2004. According to program officials, the recent downward trend is a result of parts and technician shortages. Officials stated that the shortage in parts is driven by the number of vehicles either deployed or being reset to a predeployment condition and that the shortage of technicians is primarily due to the number of deployed National Guard military technicians. Additionally, as of September 2004, a relatively small percentage of Abrams tanks, around 5 percent, was deployed in support of operations in Iraq. Because of the high use in theater, these operations may accelerate the aging of the tank fleet. We assessed the near-term program strategy and funding for the Abrams tank as yellow because, while the Army has plans for resetting tanks as they return from operations in Iraq and for recapitalizing the fleet to ensure that the tank's systems remain updated, officials continue to identify shortages of repair parts and technicians as major causes of decreased material readiness. If these issues are not adequately addressed, the condition of the Abrams fleet could decline significantly in the near term. Another potential near-term issue for the Abrams is a break, during fiscal years 2006 and 2007, in the production line being used to retrofit lesser variant Abrams tanks to the M1A2 System Enhancement Program. Army officials plan to mitigate this issue by providing $40 million to maintain critical skills at the production facilities until production resumes in fiscal year 2008. We assessed the long-term program strategy and funding for the Abrams tank as green because the Army has identified a plan to reduce the current inventory of 5,109 to about 3,000 tanks in keeping with current Army transformation plans and has programmed funding to recapitalize the remaining fleet.
The Army plans to move to a two-variant fleet of the Abrams, the M1A1 Abrams Integrated Management and the M1A2 System Enhancement Program, which it plans to use until at least 2045. Officials believe this plan should reduce maintenance costs because the service will have fewer maintenance and logistics requirements than under the current fleet arrangement. As noted in our previous report, the Army reduced the original number of recapitalized M1A2 System Enhancement Program tanks from 1,174 to 588. In the fiscal year 2006 President's Budget, the Army identified funding to increase the number of M1A2 System Enhancement Program tanks to 803. This increase realigns the recapitalization funding with the Army's upgrade schedule so that the Army is on target to meet its current transformation plans. Brought into service in 1981, the family of Bradley Fighting Vehicles provides armored protection and transportation to the Army's infantry units. The Bradley is able to close with and destroy enemy forces in support of mounted and dismounted infantry and cavalry combat operations. The Bradley Fighting Vehicle family currently consists of two vehicles: the M2 Infantry Fighting Vehicle and the M3 Cavalry Fighting Vehicle. There are four variations of each of these two vehicles: the A0, A2, A2 Operation Desert Storm, and A3, each having different capabilities and technology. For example, the A3 variants possess all of the capabilities of the A2 variants but use a digital architecture, which is compatible with the Army's net-centric warfare plans and the M1A2 System Enhancement Program Abrams tank. The Army currently maintains 6,583 M2 and M3 variants of the Bradley in its fleet and plans to use the Bradley Fighting Vehicle until at least 2045. We assessed the condition of the Bradley as yellow because, as shown in figure 5 below, the vehicles nearly met or exceeded the Army's readiness goal of 90 percent between fiscal years 1999 and 2002, but the mission capable rates showed a downward trend between fiscal years 2002 and 2004. According to officials, Operation Iraqi Freedom demands and efforts to reset the vehicles to their predeployment status have had a significant impact on repair parts availability. The National Guard has experienced further difficulty with the availability of trained maintainers because the high pace of operations has resulted in the need to transfer personnel among units to fill shortages. Additionally, program officials stated that the composition of the Bradley Fighting Vehicle fleet is insufficient to meet all of the Army's current requirements, especially those associated with training and predeployment exercises. However, the Bradley vehicles are able to meet all of their operational requirements. We assessed the near-term program strategy and funding of the Bradley program as yellow because the Army does not currently have a funding strategy through regular appropriations for developing the proper composition of the Bradley Fighting Vehicle fleet to meet the Army's near-term transformation requirements. The Army requested $1.4 billion in the fiscal year 2005 supplemental for the Bradley Fighting Vehicle to accelerate the recapitalization of vehicles by producing 93 vehicles to replace combat losses and 554 others for the Army's modularity needs. Without funding programmed for the A2 or A3 variants of the Bradley, Army officials have begun planning for the fiscal year 2006 supplemental in order to fulfill Army transformation plans.
Program officials stated that the Army is relying on supplemental funding and Office of the Secretary of Defense (OSD) reprogramming actions to meet equipment requirements for the Army's transformation plans. We assessed the long-term program strategy and funding of the Bradley Fighting Vehicles as red because the Army plans to significantly increase the number of vehicles and change the composition of the fleet but has not established a long-term funding strategy. Officials stated that the Army plans to convert to a fleet of Bradley vehicles that will be aligned with the Abrams tank fleet. The A3, matched with the M1A2 System Enhancement Program tank, and a lesser variant, operating with the M1A1 Abrams Integrated Management tank, will make up the Army's future Brigade Combat Teams. Neither the A3 variant nor the lesser variants, which officials believe will be the A2 and the Operation Desert Storm variant, have long-term program funding identified. The Army uses the M113 Armored Personnel Carrier primarily for personnel transportation on the battlefield, though the family of vehicles performs many other combat support missions, including command and control, cargo transportation, and battlefield obscuration. The Army originally introduced the M113 family of vehicles in 1960. The current fleet of M113A2 and A3 personnel carriers, totaling 7,579, has an average age of almost 16 years. Prior to operations in Iraq, the Army planned to discontinue use of the M113; however, the Army now plans to use the M113 Armored Personnel Carrier through 2045 in accordance with its latest modularity plans. The A3 variant of the M113 has a digital architecture and improved suspension and can carry add-on armor kits to provide additional protection for the troops. We assessed the condition of the M113 as green because, as shown in figure 7 below, the mission capable rates were near the Army's goal of 90 percent between fiscal years 1999 and 2004. As of September 2004, the Army had 666 A2 and A3 variants in Operation Iraqi Freedom, or roughly 10 percent of the combined fleet. The M113 family of vehicles has not experienced a significant decline in mission readiness as a result of recent operations in Iraq and Afghanistan. The National Guard has generally maintained its M113 vehicles at a higher readiness rate than the active units of the Army. We assessed the near-term program strategy and funding of the M113 as yellow because the Army has consistently relied on supplemental funding, congressional adjustments, and OSD reprogramming actions to complete modifications on the M113 family of vehicles. The Army requested $132 million in DOD's fiscal year 2005 supplemental funding request to Congress to recapitalize 368 M113s, which represents about 55 percent of the vehicles that were deployed to Operation Iraqi Freedom in September 2004. Because of its armor protection, the M113 has been used in place of less armored vehicles, such as High Mobility Multi-Purpose Wheeled Vehicles, in Operation Iraqi Freedom. We assessed the long-term program strategy and funding for the M113 as red because the Army has identified neither a long-term procurement strategy nor a long-term maintenance strategy for the M113. The Army's funding strategy for the M113 family of vehicles has been affected by the Army's plans to remove these vehicles from service.
However, according to Army officials, the M113 family of vehicles will continue to play a significant role as the Army transitions to the new modular force. The Heavy Expanded Mobility Tactical Truck (HEMTT) is used extensively to provide transport capabilities for resupply of the combat vehicles and weapons systems used by heavy combat forces and support units. The five basic HEMTT variants are used to transport ammunition; petroleum, oils, and lubricants; and missile systems, and they can also serve as recovery vehicles for other vehicle systems. There are approximately 12,700 HEMTTs in the Army's inventory. The average age of the HEMTT fleet is about 15 years, and although the expected useful life is 20 years, the HEMTTs are expected to remain in the Army's inventory through 2030. In our previous report, we assessed the condition of the HEMTT as green because the mission capable rates were close to the Army's goal for fiscal years 1998 to 2002. In this review, however, we assessed the condition as yellow. While the fully mission capable rates for the HEMTTs were near the Army's goal from fiscal years 1999 through 2003, the trend since fiscal year 2002 for both active and reserve components has been declining, as shown in figure 9 below. In one of the Army's fiscal year 2004 readiness reports to DOD, the high pace of operations and the aging fleet were cited as factors affecting HEMTT readiness for the active component. The decline in readiness rates for the U.S. Army Reserve was attributed to a lack of maintenance technicians. Program management officials said that the failure to meet readiness goals was also due to parts problems. According to Army officials, approximately 12 to 15 percent of the HEMTT fleet is in theater and is being used at rates 10 times higher than during peacetime. In a February 2005 statement to congressional committees, Army officials stated that all wheeled vehicles being used in Iraq and Afghanistan would be armored by March 2005. Despite concerns over armor protection, Army officials we visited stated that the HEMTTs have performed as intended in theater without any significant issues. We assessed the near-term program strategy for the HEMTT as yellow because the Army's near-term strategy for sustaining, modernizing, and procuring HEMTTs has not been fully funded. In addition, the program received significant funding from supplemental appropriations and a congressional adjustment in fiscal years 2004 and 2005; the remanufacture and upgrade program has continued at a slower pace than planned; and, at the time of the fiscal year 2006/2007 budget estimate submission, the impact of modularity changes on the final acquisition objective was still unknown. The Army's Tactical Wheeled Vehicle and Trailer Modularity and Modernization Strategy has been updated several times and has undergone significant changes. In addition, the strategy states that while investment is not sufficient to meet the Army's goals, it does address the most critical requirements. In fiscal year 2004, the family of heavy tactical vehicles, which includes the HEMTTs, was authorized an additional $47 million in supplemental funding and another $39 million in congressional adjustments, and DOD's fiscal year 2005 supplemental funding request included $74.3 million to replace combat losses and procure additional vehicles to equip, backfill, and modularize various Army units. The Army is planning to include another request for additional HEMTTs in DOD's fiscal year 2006 supplemental budget request.
The Extended Service Program, which the Army uses to remanufacture and upgrade existing HEMTTs, has continued at the slower pace noted in our December 2003 report, but the justification for an additional $90.3 million included in the fiscal year 2005 supplemental funding request cited the program's importance to the Army's modularization efforts. Finally, while the Army's acquisition objective for HEMTTs has continued to increase, it still may not meet the Army's modularity requirements. We assessed the long-term program strategy and funding for the HEMTT as yellow because the Tactical Wheeled Vehicle and Trailer Modularity and Modernization Strategy for both procurement and recapitalization has not been fully funded and the Army's plans and funding for procurement, recapitalization, and sustainment of its oldest models are continuing to evolve. The strategy has been updated several times and has undergone significant changes. The June 2005 version of the strategy concluded with a statement that while investment is adequate to address the Army's most critical requirements, it still falls short of the Army's goals. In addition, the goals have changed significantly. For example, in the fiscal year 2006/2007 budget estimate, dated February 2005, the Army acquisition objective for HEMTTs was 14,269, but in July 2005 the goal was 17,850. Funding plans also continue to change. In our December 2003 report, we noted that the Army had reduced funding for the recapitalization program used to upgrade HEMTTs. Currently, the June 2005 version of the strategy shows that the Army plans to recapitalize 4,726 HEMTTs between fiscal years 2012 and 2018. However, history shows that this may not occur. For example, while the fiscal year 2003 budget estimate shows that the Army originally planned to recapitalize 608 HEMTTs in fiscal year 2004, the fiscal year 2005 budget estimate shows that only 129 were recapitalized. The Army's plans to eliminate its oldest HEMTT models, which can reduce fleet operating and support costs, have also been scaled back. In a fiscal year 2004 draft, the strategy stated that 9,200 of the oldest HEMTT models would be eliminated by fiscal year 2018; in a 2005 version, however, that number was reduced to 7,728. In addition to receiving regular appropriations for procuring, recapitalizing, and sustaining HEMTTs, the Army is relying on funds received through OSD's reprogramming actions to support its long-term strategy for HEMTTs. The June 2005 version of the Army's strategy shows that between fiscal years 2006 and 2011, an additional 3,559 HEMTTs and other heavy vehicles will be procured with funds received through OSD's reprogramming actions. The High Mobility Multi-Purpose Wheeled Vehicle (HMMWV) is a light, highly mobile, diesel-powered, four-wheel-drive vehicle that has six configurations: troop carrier, armament carrier, shelter carrier, ambulance, missile carrier, and Scout vehicle. There are approximately 120,000 HMMWVs in the Army's inventory. HMMWVs entered the Army's inventory in 1985. Currently, the average age of the HMMWV fleet is about 13 years, and the expected service life of HMMWVs is 15 years. The HMMWV represents 50 percent of the Army's total tactical truck fleet. We assessed the condition of the HMMWV as green because, as shown in figure 11, it exceeded the Army's fully mission capable goal from fiscal years 1999 through 2004 for both the active and reserve components.
However, Army officials noted that the HMMWVs supporting Operation Iraqi Freedom are experiencing usage (i.e., operational tempo) at six times their normal peacetime rate. HMMWV production has now transitioned primarily to the Up-Armored platforms to enhance force protection and mobility for deployed units. While the Up-Armored variant is built to support the weight of the vehicle's armor, Army officials have expressed concern regarding the long-term impact of the stress placed on the frames, engines, and transmissions by the additional weight of add-on armor that the HMMWVs were not reinforced to handle. DOD has initiated a Stress Study to quantify the effects of high usage, additional weight, and harsh operating conditions on the future maintenance and replacement needs of vehicles such as the HMMWV. In February 2005, in a statement to congressional committees, Army officials stated that all wheeled vehicles being used in Iraq and Afghanistan would be armored by March 2005. We assessed the near-term program strategy and funding for the HMMWV as yellow because the Army has not fully funded its Tactical Wheeled Vehicle and Trailer Modularity and Modernization Strategy and the Army Acquisition Objective continues to change. In the near term, in addition to receiving regular appropriations, the Army has received additional funding for HMMWVs from supplemental appropriations and congressional adjustments. In fiscal year 2004, the Army received $239 million in supplemental funds and about $39 million in congressional adjustments. In fiscal year 2005, the Army requested almost $290 million in supplemental funds to procure HMMWVs to activate units and to supply existing units. An additional $31 million in supplemental funds was requested to replace combat losses, and another $123 million was requested to begin recapitalizing older HMMWV models and converting them to newer models capable of accepting the add-on armor. The June 2005 version of the Army's strategy shows that it is planning to request additional supplemental funding in fiscal year 2006. The Army's Acquisition Objective for HMMWVs has increased significantly since 2001. In the Army's fiscal year 2002 amended budget estimate, submitted in June 2001, the Army's Acquisition Objective for HMMWVs was about 121,000, whereas in an April 2004 version of the Army's plan, the Army was projecting a need for 145,000 HMMWVs. Currently, neither the fiscal year 2006/2007 budget estimate nor the June 2005 version of the Army's plan shows the Army's Acquisition Objective for HMMWVs. Army officials noted in an August 2004 version of the strategy that they had resourced all known Global War on Terrorism (GWOT) requirements and their projected battle losses. We assessed the long-term program strategy and funding for the HMMWV as yellow because, again, the Army has not fully funded its Tactical Wheeled Vehicle and Trailer Modularity and Modernization Strategy. In addition, the Army's initial plans for the HMMWV recapitalization programs have been significantly reduced. The June 2005 version of the Army's strategy shows anticipated procurement requirements of about 30,000 vehicles between fiscal years 2008 and 2011. However, the Army's fiscal year 2006/2007 budget request covers only about 18,000 vehicles during those years. The Army is planning to use funds resulting from OSD reprogramming actions to procure the additional 12,000 vehicles needed to meet requirements for those fiscal years.
In the long term, according to an August 2004 version of the strategy, the Army plans to eliminate about 45,000 of its oldest models by fiscal year 2018. About 19,000 of the oldest models would be converted, through a recapitalization program, to newer models capable of handling add-on armor kits in order to provide better soldier protection. However, those plans, and the associated costs, continue to change. For example, a February 2005 version of the strategy estimated a cost of about $1.8 billion to recapitalize 20,114 HMMWVs for fiscal years 2008 through 2011, but the Army's fiscal year 2006/2007 budget request, dated February 2005, contained about $1.9 billion to recapitalize 17,694 vehicles during those years. The June 2005 version of the Army's plan shows an estimated cost of about $2 billion to recapitalize 16,522 HMMWVs between fiscal years 2008 and 2011. The Family of Medium Tactical Vehicles (FMTV) is a series of vehicles based on a common chassis that vary by payload and mission requirements. It is currently the only medium fleet vehicle in production with state-of-the-art technology. The FMTV includes the Light Medium Tactical Vehicle, with a 2.5-ton capacity in both the cargo and van models, and the Medium Tactical Vehicle, with a 5-ton capacity in the cargo, tractor, wrecker, and dump truck models. The FMTV is the replacement for the obsolete and maintenance-intensive 2.5- and 5-ton trucks, some of which have been in the Army's inventory since the 1960s. The FMTV's missions include performing local and line hauling, unit resupply, and other missions in combat, combat support, and combat service support units. FMTVs are rapidly deployable and can operate in various terrains and in all climatic conditions. The commonality of parts across the various models is intended to reduce both the logistics burden and operating and support costs. The FMTVs entered the Army inventory in 1996, and currently there are approximately 19,400 vehicles with an average age of about 6 years. The average useful life is expected to be between 20 and 22 years. We assessed the condition of the FMTV as green because, as shown in figure 13, it exceeded the Army's fully mission capable goal from fiscal years 1999 through 2004 for both the active and reserve components, with the exception of the National Guard in fiscal year 2000. Officials stated that, despite operating during GWOT operations at a rate nine times higher than in peacetime, the FMTVs are not experiencing any problems. However, in response to concerns about armored protection, Army officials stated in a February 2005 statement to congressional committees that all wheeled vehicles being used in Iraq and Afghanistan would be armored by March 2005. We assessed the near-term program strategy and funding for the FMTV as yellow because the Army's Tactical Wheeled Vehicle and Trailer Modularity and Modernization Strategy is not fully funded and states that although planned investment is adequate to address critical requirements, it still falls short of the Army's goals. According to an April 2004 version of the strategy, the Army did not consider it either cost-effective or operationally effective to recapitalize older model vehicles. Instead, the Army's plan is to meet readiness and operational shortfalls through replacement with newer, technologically improved vehicles. However, this plan will not meet the Army's goals in the near term.
For example, the fiscal year 2005 budget estimate states that procurement of FMTVs through fiscal year 2005 will fill only approximately 32 percent of the Army's Acquisition Objective for FMTVs. Funding for FMTVs in the near term relies on supplemental appropriations, congressional adjustments, and OSD reprogramming actions, in addition to regular appropriations. For example, in fiscal year 2004, the FMTV program was authorized about $3.4 million from DOD's supplemental funding request and received another $34 million as a congressional adjustment. DOD's fiscal year 2005 supplemental funding request included $217 million for the Army to procure FMTV trucks to replace those lost in theater and to support modularity requirements. The Army also added an additional $122.5 million in the fiscal year 2005 supplemental funding request to meet modularity requirements and to replace combat losses of the 2.5-ton FMTV vehicles. The June 2005 version of the plan shows that for fiscal year 2006, the Army is not planning to request supplemental funding, but in fiscal year 2007, it plans to use additional funds received as a result of OSD reprogramming actions to procure additional FMTVs to fill unit shortfalls.

We assessed the long-term program strategy and funding for the FMTV as yellow because, again, the Army has not fully funded its Tactical Wheeled Vehicle and Trailer Modularity and Modernization Strategy, and although an April 2004 version of the strategy envisions an incremental upgrade modernization approach combining field modernizations and new procurement, it identifies no specific actions or time frames. The June 2005 version of the Army's plan shows that, between fiscal years 2008 and 2011, the Army plans to continue using OSD reprogrammed funds to procure FMTVs in order to fill unit shortfalls. In addition, the June 2005 plan shows that the Army plans to remove 25,000 of the old model 2.5-ton and 5-ton trucks from the inventory. FMTV production has proceeded in phases. As reported in an April 2004 version of the strategy, the first two phases together will deliver over 20,000 vehicles. The third phase is scheduled to begin in fiscal year 2005. The fourth phase, an outgrowth of vehicle component design and integration under ongoing program technology insertion efforts, does not have an identified start date but is planned to begin shortly after the completion of the current production contract.

The Apache is a multimission aircraft designed to perform rear, close, and deep operations and precision strikes, armed reconnaissance, and security during the day, at night, and in adverse weather conditions. There are two Apache variants: the AH-64A, which entered service in 1984, and the AH-64D Longbow, an improved version of the AH-64A, which entered service in 1998. The Army plans to convert most of the AH-64A helicopters into AH-64D models and to improve the safety features on the remaining AH-64A models. In total, there are about 703 Apache helicopters in the Army's inventory: 263 A models and 440 D models. The average fleet age of the A model is about 13 years, and the average age of the D model fleet is about 4 years. Our assessment of the Apache's condition as yellow is unchanged since our prior report.
As shown in figure 15, the average mission capable rates for the AH-64A models were below the Army's goal between fiscal years 1999 and 2004, and the average mission capable rates for the AH-64D fleet were above the goal for 3 of the 6 years. Our December 2003 report cited safety restrictions as the cause of missed mission capable goals, but since then all of the issues have been addressed throughout the fleet. However, according to officials, elevated flying hours in Iraq and Afghanistan, coupled with the harsh environment, continue to increase demands for limited spare parts and for maintenance for such items as engines and rotor blades. Officials further stated that the peacetime usage rate for the AH-64 is 15 hours a month, while actual flight hours are averaging 31 hours per month in Iraq and 55 per month in Afghanistan. Despite these challenges, officials stated that the AH-64 is capable of conducting its mission and, between February 2003 and December 2004, its mission capable rates in both Iraq and Afghanistan exceeded the Army's goal.

We assessed the near-term program strategy and funding for the Apache as green because the Army has recapitalization and sustainment strategies, both of which are funded. As stated in our December 2003 report, the Apache Recapitalization Program addresses cost, reliability, and safety problems, fleet groundings, aging aircraft, and obsolescence. The Army is continuing to remanufacture the AH-64A models into AH-64D models, with the remaining helicopters beginning conversion during fiscal year 2005. In addition, Apaches that have been deployed are being returned to predeployment condition through a combination of unit and contractor actions. The Army received an additional $321.1 million in the fiscal year 2005 supplemental appropriation to replace 13 Apaches that were lost in theater.

We assessed the long-term program strategy and funding for the Apache as green because the Army's modernization strategy to improve combat capability and aircraft safety appears likely to allow the Apache to remain in service until 2040 and, as we reported previously, is consistent with the Army's stated requirements. A total of 597 AH-64A models will be converted to AH-64D models by fiscal year 2010. All remaining AH-64A models are scheduled to receive additional reliability and safety modifications as part of the Army's response to concerns of the Office of the Secretary of Defense and Congress. There are plans for additional upgrades to AH-64D models, and funding to support the Army's current long-term strategy has been programmed through fiscal year 2020.

The CH-47 helicopter is a twin-engine, tandem rotor helicopter designed for transporting cargo, troops, and weapons, and it is the only Army helicopter that can operate at high altitudes. By 1994, all CH-47 models were upgraded to the CH-47D version, which comprises the Army's heavy lift fleet. The CH-47D will be replaced by the CH-47F, a remanufactured version of the CH-47D with a new digital cockpit and a modified airframe to reduce vibrations. The CH-47F was approved for full-scale production in November 2004, and officials state the Army plans to convert the entire fleet by fiscal year 2018. There are 395 CH-47D models and 3 CH-47F models in the Army inventory, but the CH-47F models are not assigned to units. The average age of the CH-47D model is about 17 years. Army aircraft generally have life cycles of 20 years. Our assessment of the CH-47D's condition as red is unchanged since our prior report.
Mission capable rates for the fleet, as shown in figure 17, were consistently below service goals from fiscal year 1999 through fiscal year 2004. Officials stated that the aircraft is currently being flown in Iraq and Afghanistan at three times planned peacetime rates, with the CH-47D flying 200 hours in 6 months when it was originally planned to fly 200 hours in 18 months. Deployment cycles for the CH-47D are often longer than those for other equipment. While most Army helicopters remain in theater for about a year, officials report that some CH-47D helicopters have been in theater for almost 2 ½ years. This usage, particularly in a desert environment, has increased the amount of maintenance and the number of parts needed to sustain the aircraft, which in turn has negatively affected overall readiness. According to officials, current shortages of CH-47D helicopters and the requirement to fill nearly simultaneous competing priorities with limited resources may require additional CH-47D helicopters to remain in theater as stay-behind equipment. Despite these challenges, officials state that the CH-47D has proven itself in theater. For example, between February 2003 and December 2004, the CH-47D's mission capable rates in Afghanistan exceeded the Army's goal.

We assessed the near-term program strategy for the Chinook as yellow because the Army has plans to address the issues affecting the current condition but has yet to implement all of the solutions. Officials stated that the components that affect readiness the most can be repaired at the depot; however, the emphasis (e.g., transportation priorities) is on pushing parts to units, not necessarily on returning the broken parts to the depot for repair. According to officials, the Army is working in coordination with depot personnel to become more efficient at identifying and returning broken parts for repair, as well as developing relationships with original equipment manufacturers to allow for faster replacement of spare parts. However, whether these efforts will be successful and the degree to which they will resolve parts shortages remain to be seen.

We assessed the long-term program strategy for the CH-47 as green because the Army has, and is funding, a modernization strategy to improve the CH-47's capability and lifespan. The Army plans to have a final CH-47F fleet size of 452 aircraft, comprising 397 CH-47F aircraft remanufactured from the CH-47D and 55 newly built CH-47F aircraft. Officials stated that conversion from the CH-47D to the CH-47F adds about 20 years to the service life of an aircraft, as well as improving performance and reducing overall operations and sustainment costs.

The Kiowa is a multimission armed reconnaissance helicopter designed to support combat and contingency operations. Deliveries of the OH-58D began in 1985, and the last new one was delivered to the Army in 1999. There are 354 Kiowa helicopters in the Army's inventory, and their average age is about 13 years. While the expected service life of an OH-58D is 20 years, the Army plans to retire the entire OH-58D fleet by fiscal year 2013 and to replace it with the Armed Reconnaissance Helicopter.

We assessed the condition of the Kiowa as green because the mission capable rates have been consistently above service goals for calendar years 1999 through 2004. As seen in figure 19, the OH-58D has remained at or above the 80 percent mission capable rate.
Further, 96 OH-58Ds have deployed to support operations in Iraq and Afghanistan and have exceeded their planned flight hours; specifically, peacetime average usage for the OH-58D is about 20 hours per month, but actual flight hours during deployments have averaged between 80 and 100 hours per month. The Army attributes the higher readiness rates of the OH-58D in part to its simple design and lighter airframe. For example, the OH-58D's mission capable rates in Iraq between February 2003 and December 2004 almost met the Army's goal. In addition, routine maintenance is performed on the Kiowa after every 40 hours of operation instead of the 300-400 hours for other aircraft, and the original equipment manufacturer, Bell Helicopter, conducts the depot-level repairs.

We assessed the near-term program strategy and funding for the OH-58D as green because the Army's plans and funding will allow the aircraft to meet its requirements in the near term. The OH-58D reset and safety enhancement programs, which are fully funded, include safety enhancements, weight reduction, and other maintenance actions. Funding for the Kiowa since fiscal year 1999 has been near Army requests. Additionally, because of the planned replacement of the Kiowa beginning in fiscal year 2008, public law limits funds that can be spent on the aircraft to basic sustainment, maintenance, and safety measures. For that reason, Kiowa battle losses are not being replaced; however, according to the Army, the existing fleet is sufficient to meet requirements over the next 1-3 years, even at higher usage rates.

We assessed the long-term program strategy and funding for the Kiowa as green because the Army has a funded strategy to field its replacement, the Armed Reconnaissance Helicopter. According to the Army, this aircraft is a relatively inexpensive armed aerial platform that will integrate a commercial off-the-shelf aircraft with nondevelopmental mission equipment packages to the extent possible. The Army's acquisition objective is to have a fleet of 368 armed reconnaissance aircraft. The plan is for the Kiowa to be phased out of the Army's inventory beginning in fiscal year 2008 as the Armed Reconnaissance Helicopter is fielded.

First delivered to the Marine Corps at the beginning of Operation Desert Storm, the Abrams is the Marine Corps' main battle tank; it destroys enemy forces using the enhanced mobility and firepower provided by a powerful turbine engine and a 120-millimeter main gun. The Marine Corps possesses only one variant of the Abrams, the M1A1. The M1A1 fleet consists of a depleted uranium turret version and a "Plain Jane" version, which lacks the enhanced armor. There are 403 M1A1 tanks in the inventory, and the estimated average age is 16 years. The Marine Corps plans to use the M1A1 as its main battle tank until it is replaced by the Marine Air-Ground Task Force Expeditionary Family of Fighting Vehicles, which is planned to be fielded in 2025.

We assessed the condition of the Abrams tank fleet as red because the Marine Corps failed to meet its stated readiness goal of 90 percent on several occasions during the fiscal year 1999 to 2004 period and, as shown in figure 21, recent readiness trends indicate a steady decline away from the readiness goal. Marine Corps officials attribute the decline in the condition of the tank fleet to the demand on equipment resulting from operations in support of Operation Iraqi Freedom and to Marine Corps manpower levels.
Since 2003, the Marine Corps has deployed tanks to Operation Iraqi Freedom, a theater where equipment has been used aggressively in rugged environments. Shortages of maintenance personnel result from transfers of personnel to deploying units and from unit staffing levels.

We assessed the near-term program strategy and funding of the M1A1 Abrams tank fleet as yellow because the Marine Corps has not fully funded its tank remanufacture program and has identified additional unfunded priorities for fiscal year 2006. The Marine Corps is conducting a remanufacture program on the M1A1 tank that is intended to improve the quality of the existing equipment by applying all equipment modifications and replacing worn components. The Marine Corps has fully funded the remanufacture of 79 tanks during fiscal year 2005; however, it has identified funding for only about 33 percent of the scheduled tank remanufactures during fiscal years 2006 and 2007. Marine Corps officials believe that they will be able to meet all remanufacturing requirements because the tank program has identified similar funding requirements to Marine Corps Logistics Command in the past and has received sufficient funding to meet the remanufacture needs. The Marine Corps identified $77 million in unfunded priorities for the Abrams program in fiscal year 2006. The majority of this amount, $40 million, is to support continued depot maintenance operations; the remainder is to procure Firepower Enhancement Program suites, which increase the detection, recognition, and identification of targets.

We assessed the long-term program strategy and funding of the M1A1 Abrams tank fleet as yellow because the Marine Corps has not completely identified the program requirements or funding for its replacement system, the Marine Air-Ground Task Force Expeditionary Family of Fighting Vehicles; however, it is taking steps to increase the service life of the M1A1. The Marine Air-Ground Task Force Expeditionary Family of Fighting Vehicles is scheduled to be the replacement for the Abrams tank and other Marine Corps ground fighting systems. In order to extend the life of the current fleet of Abrams tanks, the Marine Corps has identified funding for 80 percent of the scheduled tank remanufactures during fiscal years 2008 and 2009. The Marine Corps has tentatively established plans to conduct a Service Life Extension Program for the M1A1 fleet in future years. Because this program would start in years beyond the current Future Years Defense Program budget, no funding has yet been identified. The Service Life Extension Program may be essential to ensure that the current fleet of Abrams tanks remains serviceable until the replacement vehicle is fielded.

The LAV-C2 Command and Control and the LAV-25 are two variants from the family of Light Armored Vehicles (LAV) that we included in our review. Both variants are all-terrain, all-weather vehicles with night capabilities and can be fully amphibious within 3 minutes. The LAV-C2 variant is a mobile command station providing field commanders with the necessary resources to command and control Light Armored Reconnaissance units. The average age of the LAV-C2 is 18 years, and there are 50 in the inventory. The LAV-25 provides rapid maneuverability, armor protection, and firepower to the Light Armored Reconnaissance units. The average age of the LAV-25 is 19 years, and there are 407 in the inventory.
The family of Light Armored Vehicles is expected to be replaced by the Marine Air-Ground Task Force Expeditionary Family of Fighting Vehicles. In our previous report, we assessed the condition of the LAV-C2 as green because the Marine Corps had initiated a fleet-wide Service Life Extension Program to extend the service life of the vehicle. In this review, however, we evaluated both the LAV-C2 and the LAV-25 variants of the Light Armored Vehicles and assessed the condition of these vehicles as yellow. While the material readiness rates for the two variants were near the readiness goal of 85 percent between fiscal years 1999 and 2004 (see fig. 23), the overall material readiness rate trend declined during this period. Marine Corps officials stated that although the vehicles did not meet the Marine Corps' readiness goal, they were able to fulfill all mission requirements during this period. However, the vehicles' high usage in support of contingency operations has placed a strain on the supply system and has led to shortages of key Light Armored Vehicle components, such as struts and drive train components.

We assessed the near-term program strategy and funding of the Light Armored Vehicle as yellow because although a service life extension program and some upgrades are planned and funded, some essential program requirements remain unfunded. A service life extension program designed to improve the quality and extend the life of the vehicles has already been performed on a majority of the vehicles. In addition to the service life extension program, the LAV-C2 and LAV-25 are planned to receive upgrades to address capability deficiencies. The upgrade to the LAV-C2 will enhance communications capabilities, affording more commonality with other vehicles and helicopter systems. The upgrade to the LAV-25 will enhance target recognition, and the lethality upgrade will increase the firepower of the vehicle's 25-millimeter main gun. A program is also funded to address any obsolescence issues. Additionally, as a result of force structure changes, the Marine Corps is establishing five new light armored reconnaissance units and has received fiscal year 2005 supplemental appropriations to purchase new upgraded vehicles to equip these units and begin upgrades on the legacy fleet. However, the Marine Corps has identified $113 million as an unfunded requirement that is needed to complete the standardization of the older LAVs.

We assessed the long-term program strategy and funding of the Light Armored Vehicle as yellow because while the Marine Corps has not completely identified the requirements for its replacement system or established an associated program strategy or funding, the completion of the near-term plans may help the Marine Corps sustain its fleet of LAVs until the replacement is fielded. Both the LAV-C2 and the LAV-25 upgrades to address capability deficiencies have achieved or will achieve initial operational capability by fiscal year 2009.

Three variants make up the family of Assault Amphibian Vehicles (AAV). The AAVs are armored full-tracked landing vehicles. The Personnel variant carries troops from ship to shore and to inland objectives; there are 930 in the inventory. The C2 Command and Communications variant provides a mobile task force communication center in water operations, from ship to shore, and in inland areas; there are 76 in the inventory. The Recovery variant recovers similar or smaller sized vehicles.
It also carries basic maintenance equipment to provide field support maintenance to vehicles in the field. There are 51 Recovery variants in the inventory. The average age of the fleet is 28.6 years. All of the AAVs will be remanufactured under the Reliability, Availability and Maintainability/Rebuild to Standard upgrade program, which began in 1998 to lengthen the vehicle's expected service life. The fleet of AAVs is scheduled to be replaced by the Expeditionary Fighting Vehicle beginning in 2010.

Our assessment of the condition of the AAV fleet as yellow is unchanged since our prior report. Although the fleet material readiness rates varied by vehicle type and by year, as shown in figure 25, the overall readiness trend for the fleet during the fiscal year 1999 to 2004 period declined. Despite the declining material readiness, Marine Corps officials stated that the AAVs were able to meet all operational requirements. Wartime utilization rates for the vehicles in Operation Iraqi Freedom were as high as 11 times the normal peacetime rate.

We assessed the near-term program strategy and funding of the AAV fleet as yellow because while the Marine Corps is completing the Reliability, Availability, and Maintainability/Rebuild to Standard upgrade on all of the remaining 377 vehicles in its fleet, this upgrade only returns the vehicle to its original operating condition and does not add any upgraded capability. While Marine Corps officials stated that the vehicles have been able to perform all of their operational requirements, the AAVs lack some capabilities in areas such as target acquisition (day and night) and land/water mobility, which are needed to carry out their warfighting doctrine, Operational Maneuver from the Sea. The Reliability, Availability, and Maintainability/Rebuild to Standard upgrade program for 327 of the 377 vehicles has been funded through regular Marine Corps procurement appropriations, supplemental appropriations, and congressional adjustments over the past few years. Funding for the conversion of the remaining 50 vehicles, plus 8 for future replacements of combat losses, was included in the fiscal year 2005 supplemental request (see the illustrative tally following this discussion). The requested funding will also cover engineering changes to help sustain the AAV fleet and the purchase of add-on armor.

We assessed the long-term program strategy and funding of the AAV fleet as yellow because while the Marine Corps will have completed an extensive upgrade of the fleet with the Reliability, Availability, and Maintainability/Rebuild to Standard program, the timely fielding of the Expeditionary Fighting Vehicle remains in question. DOD reduced funding, thus delaying the initial fielding of the Expeditionary Fighting Vehicle to fiscal year 2010, 4 years past the original date. Operators and maintainers we spoke with are concerned about the delay because the Reliability, Availability, and Maintainability/Rebuild to Standard program is expected to help the AAV fleet serve until the Expeditionary Fighting Vehicle is fully fielded, but the current rate of usage in Operation Iraqi Freedom could significantly shorten the serviceable life of the current fleet. Officials expect that all Reliability, Availability, and Maintainability/Rebuild to Standard vehicles will need to go through an Inspect and Replace Only As Necessary maintenance program, since they will have to stay in the fleet longer than expected.
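The fleet and upgrade counts cited above can be cross-checked with simple arithmetic. The following minimal Python sketch is offered only as an illustration of the figures reported in this discussion, not as a Marine Corps planning tool:

# Illustrative arithmetic only, using the AAV counts cited above.
fleet = {"Personnel": 930, "C2 Command and Communications": 76, "Recovery": 51}
print(f"Total AAV inventory: {sum(fleet.values()):,}")  # 1,057 vehicles

remaining_upgrades = 377   # vehicles still to complete the RAM/RS upgrade
funded_previously = 327    # funded through procurement, supplementals, and adjustments
fy05_supplemental = remaining_upgrades - funded_previously
print(f"Conversions in the FY2005 supplemental request: {fy05_supplemental}")  # 50
# The FY2005 request also covered 8 additional vehicles against future combat losses.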
The Medium Tactical Vehicle Replacement (MTVR) is a family of 7-ton trucks that consists of six variants. The fleet includes a standard 7-ton cargo variant, an extended-bed 7-ton vehicle, a dump truck, and a wrecker. The MTVR began replacing two aging variants of 5-ton vehicles in fiscal year 2002. The MTVR is capable of moving personnel and cargo cross-country in support of maneuver units. The MTVR has increased capabilities compared with the 5-ton truck and is capable of being delivered by cargo aviation assets.

We assessed the condition as green because, as can be seen in figure 27, the Medium Tactical Vehicle Replacement fleet met the Marine Corps' stated material readiness goal of 85 percent for the 2 years that data were available, fiscal years 2003 and 2004. Though the MTVRs are being used aggressively in support of Operation Iraqi Freedom, officials stated that the vehicles are fairly easy to maintain and there are sufficient repair parts to meet current requirements. Officials also attribute much of the Marine Corps' success at keeping material readiness rates high to the maintenance personnel.

We assessed the near-term program strategy and funding of the MTVR as green because, despite experiencing some combat losses and lacking funding to replace those losses, the Marine Corps has made plans to meet its 7-ton vehicle demands in the short term. The Marine Corps possesses an indefinite delivery/indefinite quantity contract with Oshkosh Trucks, which allows it to increase the number of vehicles in the fleet without negotiating a new contract. The Marine Corps has utilized this contract to procure an additional 1,850 MTVR upgrade armor kits, which will be used to provide additional protection to deploying Marine Expeditionary Units and as a reserve for Marine Expeditionary Brigades. In the fiscal year 2006 Unfunded Programs List, the Marine Corps identified $1.4 million to procure seven new MTVRs that will help replace actual and projected combat losses.

We assessed the long-term program strategy and funding of the MTVR as green because the Marine Corps' plans provide sufficient numbers of MTVRs to equip all Marine Corps units in the long term. Marine Corps officials stated that they have plans to reconstitute equipment returning from deployment and may rotate equipment between deploying units and prepositioned forces. The officials believe that these actions could balance out the usage rates of the fleet and therefore maintain the fleet's life expectancy. As discussed under the near-term program strategy and funding, the Marine Corps possesses an indefinite delivery/indefinite quantity contract with the original equipment manufacturer, and program officials believe the Marine Corps can source equipment in the future to meet requirements.

The AV-8B Harrier jet's mission is to attack and destroy surface targets during day and night conditions and to escort assault support aircraft. It has a short takeoff and vertical landing capability that enables it to deploy and operate from amphibious assault ships and remote tactical landing sites. There are 154 in the inventory (131 combat capable aircraft, 17 noncombat capable training aircraft, and 6 aircraft in storage), with an average age of 9 years. The Joint Strike Fighter is expected to replace the AV-8B beginning in 2012.
We assessed the condition of the AV-8B as yellow because it consistently failed to meet the Marine Corps' mission capable rate goal of 76 percent between fiscal years 1999 and 2004 (see fig. 29 below). However, despite missing the mission capable goal, the mission capable trend showed some improvement through fiscal year 2003. Further, the AV-8B's nonmission capable rates for maintenance and supply also showed improvement during that time frame. Marine Corps officials commented favorably on the aircraft's performance in support of operations in Iraq and Afghanistan, and the aircraft's wartime utilization was about one and one-half times the normal peacetime rate. A defense panel has analyzed past problems with the aircraft and has recommended improvements to maintenance cycles and technician availability.

We assessed the near-term program strategy and funding of the AV-8B as green because the Marine Corps has several initiatives and programs established and funded to improve the capabilities, safety, and reliability of the aircraft. The Marine Corps has procured, either through upgrades or remanufacture, 93 aircraft with radar/night attack capability, which increases the aircraft's ability to complete assigned missions in a greater variety of weather and light conditions. Officials also report that they have fully equipped all AV-8B aircraft with LITENING pods, which increase image resolution for ground targeting. The Marine Corps has also developed new maintenance practices and policies that will increase readiness and decrease downtime spent in maintenance. According to Marine Corps officials, as a result of these policies, the Marine Corps has seen an approximately 66 percent increase in the serviceable life of the aircraft.

We assessed the long-term program strategy and funding of the AV-8B as yellow given the potential for delays in the Joint Strike Fighter program, which would require the AV-8B to fly longer than expected. Increasing program costs and unproven critical technologies could further delay the Joint Strike Fighter program's initial entry into service, which is currently planned for 2012, with complete fielding in 2024. The Marine Corps currently plans to fly the AV-8B until the 2011-2020 time frame. Marine Corps officials note that funding is sufficient to execute the long-term program strategy for the AV-8B through the Future Years Defense Program.

The AH-1W Super Cobra is a day/night, marginal-weather Marine Corps attack helicopter that provides en route escort and protection of troop assault helicopters, landing zone fire suppression during the assault phase, and fire support during ground escort operations. There are 179 aircraft in the inventory, with an average age of 15 years.

As in our prior report, we assessed the condition of the AH-1W Super Cobra as yellow because the aircraft consistently failed to meet its mission capable rate goal of 85 percent during the fiscal year 1999 to 2004 period, as shown in figure 31. Despite the aircraft's low mission capable rates, officials stated that the AH-1W was able to meet all of its mission requirements during this period. Further, the AH-1W upgrade program may decrease maintenance needs because of parts commonality with other Marine Corps utility helicopters. The AH-1W has served in both Operation Enduring Freedom and Operation Iraqi Freedom, and deployed mission capable rates were higher than those of aircraft that were not deployed. The aircraft's wartime utilization is about two times that of peacetime operations.
The Marine Corps rotates the Super Cobras out of these theaters and conducts depot-level maintenance on the aircraft upon their return.

We assessed the near-term program strategy and funding of the AH-1W as yellow because while the Marine Corps has established a program to remanufacture the current fleet of AH-1W Super Cobras into the more capable AH-1Z, it may experience a shortage of AH-1Ws during the remanufacturing process. Officials stated that the Marine Corps may be short as many as 40 AH-1Ws because of operational requirements and forecasts of future attrition. Marine Corps officials further stated that if there is a shortfall, it would largely occur in the reserves because of the current operational requirements for the active squadrons. This could seriously affect the reserve air wing's ability to train pilots and meet operational requirements in the future. Additionally, the Marine Corps identified $50 million in unfunded requirements for engineering efforts associated with the remanufacturing program.

We assessed the long-term program strategy and funding for the AH-1W Super Cobra as yellow because, as discussed under the near-term program strategy and funding, the Marine Corps potentially faces many years of AH-1W shortages as the remanufacturing effort and future operational requirements place demands on the fleet. According to Marine Corps officials, if the upgrade program remains on schedule, the entire fleet of AH-1Ws will be upgraded to the AH-1Z by fiscal year 2017.

The CH-46E Sea Knight helicopter provides all-weather, day/night, and night vision goggle assault transport of combat troops, supplies, and equipment during amphibious operations ashore. The total inventory of CH-46Es is 223, and the average age is 36 years. The Marine Corps plans to replace the fleet of CH-46Es with the MV-22 tilt rotor aircraft beginning in 2007.

In our previous report, the CH-46E received a red rating because the aircraft consistently failed to meet mission capable goals. In this review, however, we rated the condition of the CH-46E as yellow because although the mission capable rate trend is declining, as shown in figure 33, rates were near the goal of 80 percent between fiscal years 1999 and 2004. Further, deployed mission capable rates were higher than those of nondeployed aircraft. Marine Corps officials noted that the CH-46E was able to meet mission requirements for operations in Iraq and Afghanistan, and the aircraft's wartime utilization was three times the normal peacetime rate. To help improve the condition of the aircraft, the Marine Corps has completed an analysis of the airframe and all major aircraft subsystems and has established a calendar-based depot maintenance cycle. Additionally, as of the end of August 2005, 234 engine upgrades had been completed. The engine upgrade is expected to improve capability and reduce maintenance requirements. However, Marine Corps officials stated that sustainment of the aircraft remains a concern because of its age and because the aircraft may have to remain in service longer as a result of fielding delays and funding cuts for the MV-22.

We assessed the near-term program strategy and funding for the CH-46E as red because the Marine Corps may be unable to meet its near-term operational requirements due to aircraft shortages and potential repair part shortages caused by the age of the aircraft.
Despite funding upgrades and modifications to the CH-46E to improve its safety, reliability, and survivability, repair parts may not be available through normal procurement lines because some of the original production lines have been closed. The Marine Corps is planning to rely on retiring aircraft to provide replacement parts for operating aircraft. Because of fielding delays of the MV-22, the CH-46E will not be retired at the pace anticipated, and, according to Marine Corps officials, this could lead to some repair parts shortages. Given the continued demands to support operations in Iraq and Afghanistan, pilot training, and the current scheduled fielding of the MV-22, the Marine Corps may be short one CH-46E squadron for a period of 2 years starting in January 2006. Marine Corps officials stated that this squadron is necessary to support current contingency operations and operational plans developed at combatant command headquarters. They also stated that they are considering options to mitigate these issues by engineering repair parts to extend the serviceable life of aircraft components and by utilizing other types of aircraft to fill in for the decommissioned squadron.

We assessed the long-term program strategy and funding for the CH-46E as red because delays in the MV-22 fielding will force the CH-46E to continue flying much longer than planned, and this could affect the Marine Corps' ability to support future operations. DOD reduced procurement funding for the MV-22 aircraft in its 2006 budget request, which delays the full authorized fielding of all MV-22 squadrons until fiscal year 2016 rather than 2011. This delay will force some squadrons, especially in the reserves, to continue to fly the older CH-46E despite the fact that it may not be able to support Marine Corps operational doctrine. Operational Maneuver from the Sea, a Marine Corps warfighting doctrine, calls for forces to cross great distances to engage an enemy. These requirements currently exceed the capabilities of the CH-46E.

The CH-53E Super Stallion helicopter provides assault support by transporting heavy weapons, equipment (such as High Mobility Multipurpose Wheeled Vehicles and Light Armored Vehicles), supplies, and troops. The CH-53E is capable of in-flight refueling. The average age is 17 years, and there are 147 CH-53Es in the inventory. The expected replacement for the CH-53E is the Heavy Lift Replacement, but the requirements are still being determined. The Heavy Lift Replacement is expected to enter service in 2015.

We rated the condition of the CH-53E as yellow because the aircraft did not meet its mission capable goal of 70 percent in some years and, between fiscal years 2003 and 2004, the mission capable rates declined, as shown in figure 35. The aircraft's wartime utilization is about two times that of peacetime operations, and mission capable rates for deployed aircraft are higher than those for aircraft that are not deployed. According to Marine Corps officials, fatigue issues related to age, as well as structural cracks in the tail boom area of the aircraft, have been ongoing problems with the CH-53E fleet. The higher-than-expected usage rates in Operations Iraqi and Enduring Freedom have accelerated the need to repair these areas. The Marine Corps is addressing the structural cracks and engine upgrades through several programs. The engine upgrades are expected to improve capability and reduce maintenance requirements.
Further, despite the declining readiness rates, officials stated that the CH-53E was able to meet all operational requirements.

We assessed the near-term program strategy and funding of the CH-53E as yellow because, although the Marine Corps has several initiatives underway that will help sustain and improve the capabilities of the CH-53E, some of the upgrades and safety issues are on the service's fiscal year 2006 unfunded program list. The Marine Corps has received funding from congressional adjustments and supplemental appropriations to fully outfit its CH-53E squadrons with aircraft armor systems, but it lacks sufficient funds to upgrade all engines or completely field diagnostic systems. According to Marine Corps officials, the diagnostic systems assist maintainers by identifying maintenance issues ahead of scheduled maintenance programs and will reduce the maintenance man-hours required to support the aircraft. The Marine Corps identified $30.6 million for these diagnostic systems and engine upgrades as unfunded priorities in fiscal year 2006.

We rated the long-term program strategy and funding of the CH-53E as red because the requirements for the Heavy Lift Replacement, the replacement aircraft for the CH-53E, are still being established despite an initial fielding planned for 2015. According to officials, the Marine Corps must maintain at least 120 CH-53Es until the initial fielding of the Heavy Lift Replacement in order to support Marine Corps operations. Repair of the structural cracks found in the aircraft is critical to maintaining an adequate inventory of CH-53Es until the Heavy Lift Replacement becomes operational. Officials estimate that, if the current high usage rate and estimates of attrition hold true, the number of CH-53Es may fall below the number necessary to remain in service until the Heavy Lift Replacement becomes available, unless the required funding and maintenance are provided.

Arleigh Burke Class Destroyers (DDG-51 class) provide multimission offensive and defensive capabilities and can operate independently or as part of a carrier strike group, surface action group, or expeditionary strike group. The primary missions of the Arleigh Burke Class Destroyers are to destroy enemy cruise missiles, aircraft, surface ships, and submarines and to attack land targets in support of joint or combined operations. The first ship of this class was commissioned in 1991. The Navy plans to build 62 ships of this type, and 47 of these platforms have been commissioned to date. The average age of DDG-51s in the fleet is 6.45 years. The final DDG-51 class ship will be delivered in fiscal year 2011.

We assessed the condition of the DDG-51 class as yellow due to maintenance issues related to major ship systems and bandwidth limitations experienced by this ship class; the class received the same yellow rating in our previous report. Each year a number of ships in the DDG-51 class are evaluated by Navy inspectors, and most of the DDG-51 class ships inspected in recent years have done well in important inspection areas, such as the destroyer's electrical and combat systems. However, areas such as the environmental protection systems and damage control systems performed poorly in these evaluations. For example, watertight doors are problematic in this type of ship and generally are in poor condition in all surface ships. In addition, the DDG-51 also has issues with corrosion, insufficient bandwidth for Web-based communication, and cracks on the bow, or front, of the ship.
Sufficient bandwidth is critical to the ability of these ships to operate with the rest of the Navy, which relies heavily on the internet for day-to-day operations.

We assessed the near-term program strategy and funding of the DDG-51 class as green because the Navy has an effective strategy to address near-term condition concerns and sufficient funding is available for these plans. The Navy strategy identifies classwide deficiencies, prioritizes their importance, and then addresses the most significant issues. The Navy closely monitors corrosion and has taken preventive measures to reduce its impact. Finally, the Navy will install Super High Frequency capabilities in the DDG-51 class to address bandwidth limitations, and a review of the bow cracks is in progress.

We assessed the long-term program strategy and funding of the DDG-51 class as yellow because the Navy has a strategy to address long-term condition concerns, but it is not fully funded. The Navy plans a Midlife Modernization to ensure that the DDG-51 class of ships remains a relevant fleet asset for its full life expectancy. The fully funded modernizations are scheduled to begin in fiscal year 2010 and include upgrades to DDG-51 combat systems that may reduce personnel costs. However, the Navy has reduced planned future operation and maintenance funding across all surface ships in the fleet. These reductions have the potential to affect the material condition of the DDG-51 class and to cause higher costs in later years to make up for deferred maintenance. The DDG-51 class is expected to remain in the fleet until fiscal year 2046.

Oliver Hazard Perry Class Frigates (FFG-7 class) are surface combatants with antisubmarine warfare and limited antiair warfare capabilities. Frigates conduct escort for amphibious expeditionary forces, protection of shipping, maritime interdiction, and homeland defense missions. There are 30 FFGs in the fleet, with an average age of 20.8 years. The FFG-7 class is expected to remain in service until 2019. There is no planned replacement for this ship; however, the Littoral Combat Ship will perform many of the missions currently performed by these ships.

We assessed the condition of the FFG-7 class as yellow, the same rating it received in the last report, due to maintenance issues related to major ship systems and bandwidth limitations experienced by this ship class. These frigates operate using diesel engines, and these older engines need more maintenance than modern gas turbine engines. Additionally, the water and ventilation systems must be replaced to ensure that the ship can operate until it reaches the end of its required service life. Each year a number of ships in the FFG-7 class are evaluated by Navy inspectors, who have also found shortfalls in the damage control equipment and environmental protection systems of these ships. Moreover, these frigates have only a limited amount of bandwidth, and this affects their ability to operate with the rest of the Navy, which relies heavily on electronic communications for its day-to-day operations. Naval inspectors determined that other systems on board FFG-7 class ships were in good condition, including their propulsion and combat systems.

We assessed the near-term program strategy and funding for the FFG-7 class as yellow because the Navy's near-term plan to correct FFG-7 class condition problems does not address all issues.
The Navy has decided not to install a Super High Frequency communication system on these ships to improve their bandwidth and their access to Web-based communication. Instead, these ships will continue to operate with a limited amount of available bandwidth in the future, despite the Navy's increasing use of the internet to share operational, training, and personnel data.

We assessed the long-term strategy of the FFG-7 class as yellow because of plans to decrease future operation and maintenance funding. The Navy's plan to modify and modernize the FFG-7 fleet will be complete by fiscal year 2011; however, these modifications may not address all of the problems that may arise in the aging FFG-7 class. DOD intends to decrease the amount of operation and maintenance funding available for its surface ships in the future, which may limit the Navy's ability to address any emerging maintenance issues. While all surface ships will likely have maintenance funding cuts, the FFG-7 class and other older ships may be the most affected by these shortfalls.

Austin-class amphibious transport dock ships (LPD-4 class) are warships that embark, transport, and land elements of a Marine landing force and its equipment. Austin class ships also act as helicopter refueling stations and limited casualty receiving and treatment ships. There are currently 11 LPD-4 class ships in the inventory, with an average age of 37 years. The LPD-4 class ships are expected to remain in the fleet until 2014. The San Antonio-class LPD-17 is beginning to replace the LPD-4 class in fiscal year 2005.

We assessed the condition of the LPD-4 class as yellow due to maintenance issues related to major ship systems and bandwidth limitations experienced by this ship class; the LPD-4 received the same yellow rating in our previous report. Insufficient air conditioning is a habitability concern on many LPD-4 class ships, especially given current operations in high-temperature regions. The LPD-4 class also has issues with electrical systems, propulsion, and insufficient bandwidth for Web-based communication. Sufficient bandwidth is critical to the ability of these ships to operate with the rest of the Navy, which relies heavily on the internet for day-to-day operations. While these maintenance issues are significant, some LPD-4 class ships have completed an Extended Sustainability Program that addressed the most severe maintenance problems affecting this class.

We assessed the near-term program strategy and funding of the LPD-4 class as yellow because the Navy has a plan to address near-term condition concerns, but it excludes those ships scheduled to decommission within 5 years, and the decommissioning dates have historically slipped. The Navy is in the midst of an Extended Sustainability Program that corrects serious LPD-4 class deficiencies, such as the class's inadequate onboard electrical systems. The Navy has selected 5 of the 11 LPD-4 class ships for this program and will have completed 4 by the end of fiscal year 2005. The ships that will not undergo the Extended Sustainability Program are all within 5 years of decommissioning and therefore can undergo only normal repairs and maintenance. In the past, decommissioning dates have been moved back by several years, but ships have retained their decommissioning status, thus preventing any upgrades or modernizations. This will lead to a wide variance in condition between different ships in the LPD-4 class.
We assessed the long-term program strategy and funding of the LPD-4 class as yellow because of plans to decrease future operation and maintenance funding and because of uncertainty concerning the LPD-4's service life. The Navy has reduced planned future operation and maintenance funding across all surface ships in the fleet. These reductions have the potential to affect the material condition of the LPD-4 class. Additionally, procurement of the LPD-4 class replacement, the LPD-17 class, has been reduced from 12 ships to 9. However, the Navy requirement remains the same: the ability to transport two and a half Marine Expeditionary Brigades.

The F/A-18 is an all-weather fighter and attack aircraft with six models: the F/A-18 A, B, C, and D, also known as the Hornet, and the E and F, also known as the Super Hornet. The capabilities of the Hornet and Super Hornet include fighter escort, fleet air defense, force projection, and close and deep air support. The current inventory of F/A-18s is 914: A, 123; B, 28; C, 396; D, 139; E, 102; and F, 126. The average age in years is: A, 18.5; B, 20.2; C, 12.7; D, 12.1; E, 2.5; and F, 2.4 (these figures are combined in the illustrative sketch following this discussion). The Navy plans to gradually replace the Hornet with the Super Hornet and the Joint Strike Fighter.

We assessed the condition of the F/A-18 as green given that it generally is available in sufficient numbers to meet Navy requirements and, though mission capable goals vary by model, all models were close to or exceeded their mission capable goals between calendar years 1999 and 2004, as shown in figure 40. Additionally, all variants of the F/A-18 consistently meet their daily availability requirements. In our previous study, the condition of the F/A-18 was rated yellow because it missed its mission capable goals, but including the daily availability factor in our analysis improved its rating to green. However, both the Hornet and Super Hornet have deficiencies with fuel systems, and the Super Hornet also has deficiencies with cockpit canopies, all of which degrade mission capable rates. In addition, the Navy is beginning a long-term effort to replace the overstressed center section of the Hornet fuselage, the center barrel. This effort addresses the predictable rate of wear and deterioration on the aircraft due to factors such as carrier takeoffs and landings, and it increases the expected service life of the aircraft. The aircraft is not available for operations during the 1 year scheduled for this process. According to officials, the Navy takes advantage of this time out of service to conduct scheduled maintenance and other modifications on those aircraft.

We assessed the near-term program strategy and funding of the F/A-18 as yellow because some funding has not been identified for the program strategy, which includes important Hornet modernizations, and the Super Hornet lacks critical spare parts. The Navy is unable to fund improved detection and targeting systems for the Hornet, for example, the Advanced Targeting Forward Looking Infrared system and the Joint Helmet Mounted Cueing System. Similarly, the Super Hornet will, in the near term, experience shortfalls in the availability of Government Furnished Equipment: equipment directly acquired by the government and subsequently made available to a contractor. These equipment shortfalls for the Super Hornet include extra fuel tanks and bomb racks.
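The per-variant inventory and age figures cited above can be combined into a single fleet-level number. The following minimal Python sketch is offered only as an illustration of arithmetic on the figures reported in this discussion, not as a Navy metric:

# Illustrative arithmetic only, using the F/A-18 figures cited above.
inventory = {"A": 123, "B": 28, "C": 396, "D": 139, "E": 102, "F": 126}
avg_age = {"A": 18.5, "B": 20.2, "C": 12.7, "D": 12.1, "E": 2.5, "F": 2.4}

total = sum(inventory.values())
weighted_age = sum(inventory[v] * avg_age[v] for v in inventory) / total
print(f"Total F/A-18 inventory: {total}")                       # 914 aircraft
print(f"Fleet-weighted average age: {weighted_age:.1f} years")  # about 11.1 years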
We assessed the long-term program strategy and funding of the F/A-18 as yellow because of the uncertain status of its replacement, the Joint Strike Fighter, and the complexity of the center barrel replacement effort. The Navy plans to maintain enough operational F/A-18 aircraft to meet its tactical air requirements until the Joint Strike Fighter is available by replacing the center barrels of 40 Hornets per year, a challenging goal given the complex nature of this effort. The Joint Strike Fighter is already behind schedule, and a number of its critical technologies are immature, indicating that it may be delayed even further. Program officials confirmed that another delay in the arrival of the Joint Strike Fighter would require the F/A-18 program to seek other alternatives to meet requirement goals, such as replacing more center barrels on Hornets, managing the normal wear and tear on the aircraft, or procuring additional Super Hornets. Moreover, center barrel stress is not the only factor used in determining the expected service life of an aircraft; flying hours and takeoffs and landings also affect the F/A-18's life expectancy. If Hornets are required to operate longer than currently planned, these aging aircraft may not be available in sufficient numbers to meet Navy requirements for tactical aircraft.

The EA-6B Prowler provides Electronic Attack and Anti-Radiation Missile capabilities against enemy radar and communications systems. The Prowler's primary mission is to support strike aircraft and ground troops by jamming enemy radar, data links, and communications. The current inventory is 119, with an average age of 21.9 years. The Prowler fleet consists of carrier-based squadrons and land-based expeditionary squadrons. The expeditionary capability will be replaced by the Air Force's B-52 electronic jammer suites, while the EA-18G Super Hornet Airborne Electronic Attack aircraft will begin to replace the carrier-based capabilities in 2009.

We assessed the condition of the EA-6B as yellow, as we did in our previous report, because it consistently missed the Navy's mission capable goal of 73 percent between calendar years 1999 and 2004, as shown in figure 42, due to a number of maintenance problems. However, Navy officials believe the EA-6B will meet its daily availability requirements later this year. Much of this improvement is due to replacement of the center wing, which had shown signs of fatigue due to the stress of operations, on a number of these aircraft. The Navy will have completed this complex effort, which removed aircraft from the fleet for a number of months, on enough Prowlers to meet its requirements. Despite this improvement, the EA-6B's mission capable rates have been degraded by problems with communications equipment, canopies, and wings. Other problems with fuel cells and environmental control systems have also diminished mission capable rates. The EA-6B's utilization rates have also been high, given its role as a high-demand asset in current operations.

We assessed the near-term program strategy and funding of the EA-6B as yellow because the Navy has not funded all of its short-term requirements. The Navy has plans to address the major degraders of mission capable rates; however, not all of these plans are fully funded. Navy plans include replacing aging center wing sections in the EA-6B fleet. The condition of the Prowler fleet will continue to improve in the near term because of this funded initiative.
Additionally, EA-6B officials have been working with manufacturers to correct canopy deficiencies and have resolved this problem. Current Navy plans for purchasing canopies are also fully funded for all aircraft. However, no plans or funding have currently been identified to correct the communications equipment issues. Furthermore, the Navy has not fully funded equipment that would improve the EA-6B's ability to use its unique electronic warfare capabilities to counter an emerging threat.

We assessed the long-term program strategy and funding of the EA-6B as yellow because only a limited number of aircraft will receive an upgrade that is critical to transitioning the EA-6B fleet to the EA-18G aircraft. In the long term, the Navy has outlined an effective strategy to modernize and replace the Prowler. This strategy includes wing replacement and Improved Capability III upgrades on the EA-6B. The Prowler's capabilities will be replaced by the EA-18G and Air Force B-52 electronic jammer suites. However, this strategy has not been fully funded. Specifically, the Navy has a stated requirement to provide the Improved Capability III upgrade on 21 aircraft but has funded the upgrade for only 14 aircraft. This improved third-generation capability is a significant technology leap beyond the EA-6B's current jamming capabilities and, according to program officials, an important component in the Navy's transition to the EA-18G. Moreover, aircraft with this capability will be used by the Marine Corps until 2015, at which time it plans to replace its EA-6B aircraft with a version of the Joint Strike Fighter. The Joint Strike Fighter is already behind schedule, and a number of its critical technologies are immature, indicating that it may be delayed even further.

The P-3 Orion is a four-engine turboprop antisubmarine and maritime surveillance aircraft. It provides undersea warfare; antisurface warfare; and Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance capabilities to naval and joint commanders. There are 173 aircraft in the fleet, and their average age is 24.4 years. The Navy will replace P-3 capabilities with the Multi-mission Maritime Aircraft, beginning in 2013, and the Broad Area Maritime Surveillance Unmanned Aerial Vehicle.

We assessed the condition of the P-3 as red because it has consistently missed its mission capable goals by a significant percentage, as shown in figure 44, and the Orion is not available in sufficient numbers to meet day-to-day Navy requirements. Overall, the condition of the P-3 has been degraded primarily by the effect of structural fatigue on its airframe and the obsolescence of its communication, navigation, and primary warfighting systems. To address airframe issues, specifically cracks in the aircraft's wings, the Navy has instituted a special structural inspection and repair program. A number of aircraft are currently undergoing these special structural inspections and repairs and are not available for fleet operations. Moreover, the obsolescence of the communication, navigation, and warfighting systems resulted in only about 26 percent of these aircraft being rated as fully capable of performing all of their missions last year.

We assessed the near-term program strategy and funding of the P-3 as yellow because the Navy's near-term plans do not address all condition and obsolescence issues.
However, the Navy will have completed enough structural inspections and repairs to ensure that there are sufficient P-3 Orions available to meet day-to-day requirements next year. While this mitigates serious airframe issues, obsolescence of electronics and avionics systems will continue to degrade the ability of this aircraft to fulfill all of its missions. The Navy will address some of these obsolescence issues in the short term, such as installing an improved high-frequency radio. However, other needed improvements have not been funded, for example, efforts to improve the aircraft's over-the-horizon communications and upgrades to the aircraft's missile defense system.

We assessed the long-term program strategy and funding of the P-3 as red because, while the Navy has identified what can be done to address the obsolescence of the mission systems over the long term, this program has not been approved or funded. The Navy plan, known as the Anti-submarine Maritime Improvement Program, is intended to ensure the continued relevance of the P-3 mission systems until the Multi-mission Maritime Aircraft is operational. This improvement program has not been fully approved, nor has it been funded. The obsolescence of the P-3's mission systems may have a significant impact on its war-fighting capabilities. Moreover, it is still not certain that the fixes identified for the P-3 airframe will ensure that sufficient numbers of this aircraft will be available until it is fully replaced in 2019.

The Standard Missile-2 is a medium- to long-range, shipboard surface-to-air missile. The primary mission of the Standard Missile-2 is fleet area air defense and ship self-defense; its secondary mission is antisurface ship warfare. There are four different blocks of the Standard Missile-2 in service (III, IIIA, IIIB, and IV). The inventories of these blocks are classified, but over 88 percent of the inventory is greater than 11 years of age, and some blocks are older and less capable than others. The capabilities of the Standard Missile-2 will be replaced by the Standard Missile-6 Extended Range Active Missile beginning in fiscal year 2009.

We assessed the condition of the Standard Missile-2 as yellow because it has consistently failed to meet its asset readiness goal of 87 percent. The asset readiness goal is the missile equivalent of the mission capable goal. Timely certifications of missiles, that is, assessments of equipment condition and necessary repair and replacement of missile components, are critical to these readiness rates. These certifications must be done approximately every 4 years, and a missile is not ready for issue until it is certified. These assessments have not been done at a rate sufficient to meet asset readiness goals. In GAO's previous study, the condition of the Standard Missile-2 was rated red; however, the asset readiness rates of these missiles have improved since that time.

We assessed the near-term program strategy and funding of the Standard Missile-2 as green because the Navy has a plan to address near-term condition issues, and the missile's inventory meets Navy requirements in the near term. The Navy has been able to deliver a more maneuverable version of this missile, the Block IIIB-MU, ahead of schedule. In addition, the Navy has increased the amount of operation and maintenance funding over the next few years to maintain the asset readiness rate close to the goal. Both of these steps will allow the Navy to meet inventory requirements in the near term.
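The arithmetic of the 4-year certification cycle helps explain why readiness tracks certification throughput so closely: in steady state, roughly one quarter of the inventory must be recertified each year just to hold the ready-for-issue rate constant. The following sketch uses a hypothetical inventory of 1,000 missiles and illustrative throughput figures, not the classified actuals, to show how readiness settles near the certification capacity multiplied by the certification interval:

    # Illustrative model only; inventory and capacity figures are assumed,
    # not actual Navy data. Missiles lapse out of certification at a rate
    # set by the 4-year interval; recertification is capped by capacity.
    def ready_fraction(inventory, interval_years, annual_capacity, years):
        ready = float(inventory)  # assume the whole inventory starts certified
        for _ in range(years):
            lapsing = ready / interval_years
            backlog = (inventory - ready) + lapsing
            recertified = min(annual_capacity, backlog)
            ready = ready - lapsing + recertified
        return ready / inventory

    for capacity in (200, 250):
        print(capacity, round(ready_fraction(1000, 4, capacity, 10), 2))
    # With capacity below inventory/interval (250 per year here), readiness
    # drifts toward capacity * interval / inventory, or about 80 percent,
    # below the 87 percent goal; at 250 per year it holds at 100 percent.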
We assessed the long-term program strategy and funding of the Standard Missile-2 as red because future funding shortfalls will significantly affect missile availability. The Navy plans to meet requirements by procuring new missiles and modernizing existing ones. However, the new procurements and missile modernizations are not in sufficient numbers to allow the Navy to meet inventory requirements in the long term. Furthermore, the operation and maintenance funding planned for the long term is not sufficient for the Navy to meet asset readiness goals. Funding for certifications was limited until fiscal year 2004; subsequently, increased funding improved asset readiness rates. Funding will be limited again beginning in fiscal year 2007, affecting the long-term availability and ready-for-issue rate of the Standard Missile-2. While the Standard Missile-6 program is currently ahead of schedule, this weapon still will not be available soon enough or in sufficient numbers to allow the Navy to meet long-term inventory requirements.

For the six Air Force aircraft in this appendix, fiscal year 2004 data are through July 2004.

There are two types of F-15 aircraft: the F-15 Eagle (A-D variants) and the F-15E Strike Eagle. The F-15A and F-15C are single-seat, supersonic fighter aircraft used for air-to-air combat, and the B and D variants are their dual-seat training counterparts. The F-15E Strike Eagle is a dual-seat, supersonic fighter aircraft used for both air-to-air combat and air-to-ground combat. There are 513 F-15 Eagles and 221 F-15E Strike Eagles in the Air Force inventory, and the average age depends upon the variant, with F-15 Eagles ranging from about 21 to 26 years and F-15E Strike Eagles averaging about 12 years. The F/A-22 is the designated replacement for the F-15 Eagle. The Air Force plans to retire most of the F-15 Eagle fleet, retaining 179 F-15C/D variants beyond 2015 to augment the F/A-22 through 2025, while maintaining the entire fleet of F-15Es through at least 2025.

We assessed the condition of the F-15C/D and F-15E as green because mission capable rates have been near the Air Force's stated goal and have either improved or remained constant between fiscal years 1999 and 2004. The Air Force's stated goal depends on the variant and ranges from 79 percent to 82 percent. As shown in figure 47, mission capable rates for the F-15C/D and F-15E variants, which are expected to remain in the fleet after retirement of the older F-15 aircraft, have increased to about 79 percent. Officials stated that cracks and issues related to the age of the aircraft are the most common problems affecting the aircraft, but noted that the Air Force is addressing these issues through programmed depot maintenance. Officials also stated that the F-15 is a viable and capable system, noting that the F-15E models were used during Operation Iraqi Freedom.

We assessed the near-term program strategy for the F-15 and F-15E as green because the Air Force has developed and funded a strategy to address known problems, to include retirement of the older F-15C/D variants. For capability and reliability upgrades, the Air Force has funded and is currently implementing replacements and upgrades for a variety of systems on 179 F-15C/D variants, to include engines, radars, and various structural improvements. For the F-15E, the Air Force has funded modernization of different systems (computer processors, avionics, and software), collectively known as the Suite 5E upgrade.
This upgrade, which is fully funded for the F-15E fleet, is scheduled to occur from fiscal year 2006 through fiscal year 2011 and is expected to increase the survivability and weapons delivery capability of the aircraft. Officials also stated that technological obsolescence and diminishing manufacturing sources are a concern for the entire F-15 fleet; however, they characterized the issue as manageable and were confident that the Air Force had the correct procedures to address it.

We assessed the long-term program strategy for the F-15 fleet as green because the Air Force's near-term upgrades are fully funded and designed to keep the aircraft that will remain in the fleet viable and functioning through at least 2025. The F/A-22 Raptor is the F-15 Eagle's designated replacement, and officials stated that delays to its fielding schedule will not affect the retirement schedule of the F-15 fleet. Retirements of the F-15A/B aircraft are scheduled to begin in fiscal year 2005, with a total of 84 retirements expected to occur through fiscal year 2009, and retirement of 26 F-15C/D aircraft is scheduled to begin in fiscal year 2009. The F/A-22 is expected to achieve operational capability in December 2005, and the entire fleet of 179 aircraft is currently scheduled to be procured through fiscal year 2008. Officials stated that the effect of changes in the F/A-22 fleet composition on further F-15C/D aircraft retirements beyond fiscal year 2009 remains to be seen, as the total structure of the combat air fleet will be reviewed during the 2005 Quadrennial Defense Review.

The F-16 is a compact, multirole fighter with air-to-air combat and air-to-surface attack capabilities. There are four F-16 variants: the A and C models are designed for one pilot, while the B and D models are two-seat tandem-cockpit aircraft that are used for training and can also be flown operationally. Of the four variants, the F-16C and D models incorporate the latest technology and have the capability to suppress or destroy enemy air defenses. The Air Force currently has 1,353 F-16 aircraft in its inventory, and the average age is about 15 years. The Air Force plans to retire the A and B variants because they are not expected to be structurally viable past 2008, although the specific schedule has yet to be published. The Air Force also plans to replace the F-16 with the F-35 Joint Strike Fighter beginning in 2013.

Consistent with the findings of our December 2003 report, we assessed the condition of the F-16 as green because mission capable rates have been near the Air Force's stated goal and have remained relatively constant. For the A/B variants, mission capable rates were about 72 percent against an Air Force stated goal of 75 percent in fiscal year 2004, and, as shown in figure 49, mission capable rates for the C/D variants were about 76 percent compared to a goal of 81 percent. Officials stated that the most significant factor affecting the F-16 is cracking, which occurs mostly on older aircraft and results from the stress of repeatedly landing without having dropped the aircraft's two 2,000-pound bombs. Despite these concerns, the Air Force has plans to address cracking. Further, although the rates for all variants are below the goals, officials stated the F-16 was able to meet operational requirements.

We assessed the near-term program strategy for the F-16 as green because the Air Force has developed and funded a strategy to address known problems.
As we noted in our December 2003 report, structural issues related to age and use are affecting the F-16. To address these concerns, the Air Force began a structural augmentation program that strengthens the airframe in areas prone to cracking, namely the wings and fuselage. The structural augmentation program is expected to cover over 1,200 of the aircraft and be completed by 2013. Other near-term initiatives being implemented to improve combat capabilities include the common configuration implementation program, which incorporates improvements to targeting, communications, and computer systems, as well as improvements to radar, avionics, and targeting systems. Officials stated that these programs are currently funded and being implemented, although not for the entire F-16 fleet.

We assessed the long-term program strategy for the F-16 as green because current and projected funding for the aircraft modernizations identified in the near-term program strategy is designed to ensure longer-term viability for the next 15 years. Although the Air Force has yet to publish an F-16 retirement schedule, officials indicate that the older variants will be retired, as will be reflected in future budget documents. The F-35 Joint Strike Fighter is the designated replacement for the F-16, but, according to officials, the retirement of the older F-16 variants will not be affected by the F-35 fielding schedule, since operational capability of the Air Force F-35 aircraft is not expected until fiscal year 2013 and the exact quantity remains to be determined.

The B-1 Lancer bomber is a long-range, high-speed, large-payload global attack aircraft that was originally designed for nuclear missions but was transitioned to a conventional role. In 2002, the Air Force began consolidating the fleet, reducing the B-1 inventory from 93 to 67 aircraft and transferring all remaining B-1 bombers to the active component. The B-1 began operations in 1986, and its average age is about 17 years. The Air Force plans to keep the B-1 in use through at least 2040, so there are no immediate plans to replace the aircraft.

We assessed the condition of the B-1 as yellow because mission capable rates were below the Air Force's stated goal most of the time between fiscal years 1999 and 2004. As shown in figure 51, mission capable rates increased between 1999 and 2004. Parts shortages were identified as a reason rates remained below goals, and officials identified generators, automatic pilot controllers, and various pump and hydraulic systems as the items most often in short supply. After consolidation of the B-1 fleet in fiscal year 2002, the number of parts in the supply system increased as parts were taken from retired B-1 aircraft. To account for the smaller fleet size, the Air Force raised the mission capable goal from 67 percent in fiscal year 2002 to 76 percent in fiscal year 2003. The increase in the goal occurred as B-1 usage to support operations in Iraq and Afghanistan increased. Although the aircraft's mission capable rate was about 69 percent in fiscal year 2004, the rate for deployed aircraft was 80 percent. Additionally, officials noted that the B-1 is capable of accomplishing the Air Force's current needs.

We assessed the near-term program strategy for the B-1 as green because the Air Force has planned and funded programs to address the near-term sustainment issues affecting the aircraft.
According to officials, the forecasted number of flight hours serves as the basis for funding planned maintenance activity and the expected number of parts required for the aircraft. Fluctuations in the fleet size, coupled with the increase in usage for operations overseas, caused instability in the supply chain and increased the difficulty of efficiently planning maintenance cycles. For the near term, the Air Force has addressed this concern by increasing the number of forecasted flight hours for fiscal year 2005, which increases the funding for supply and maintenance activities and is expected to correct the disparity. In addition to addressing near-term sustainment issues, the Air Force has funded and will complete plans to increase the sustainability and wartime capabilities of the B-1, to include the planned fielding of increased munitions capabilities and upgrades to the central computer systems and radar.

We assessed the long-term program strategy for the B-1 as green because the Air Force has a proactive strategy to address anticipated shortages and deficiencies in the aircraft and has funded modernization efforts to meet requirements. Although the B-1 is a newer system in the Air Force's fleet, officials stated that technology has advanced significantly since the aircraft was fielded in the 1980s, resulting in a reduction in the number of manufacturers that make original B-1 component parts. For example, the original computer technology in the B-1 used processors that, while cutting edge at the time, are slower than today's home computer processors. To address these issues, the Air Force has funded efforts to modernize and upgrade B-1 components, to include cockpit flight instrument displays and navigation systems. In addition to resolving potential supply chain concerns, these upgrades are also expected to enhance the aircraft's combat capability.

The B-2 is a multirole heavy bomber with stealth characteristics, capable of employing nuclear and conventional weapons. The aircraft was produced in limited numbers to provide a low observable (i.e., stealth) capability to complement the B-1 and B-52 bombers. Its unique stealth capability enables the aircraft to penetrate air defenses. The Air Force has 21 B-2 aircraft in its inventory, and the average age is about 10 years. The B-2 was deployed to support both Operation Enduring Freedom in Afghanistan and Operation Iraqi Freedom. The B-2 is expected to remain in the Air Force's inventory until 2058, so there are no immediate plans to replace the aircraft.

For the reasons associated with maintenance of stealth characteristics that we identified in our December 2003 report, we continue to assess the condition of the B-2 as yellow. As shown in figure 53, the B-2 did not consistently meet the Air Force mission capable goal of 50 to 51 percent between fiscal years 1999 and 2004. Officials stated that the small B-2 fleet size increases the difficulty of achieving goals, noting that a change in the mission capable status of one aircraft results in about a 7 percent change in the overall mission capable rate; however, when viewing other metrics, the B-2's condition is comparable with that of other bombers. Maintenance of stealth characteristics continues to be the primary driver of lower mission capable rates, and the Air Force is continuing to implement solutions.
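The officials' 7 percent figure implies that the rate is computed over fewer aircraft than the 21 in inventory: a one-aircraft change moves the rate by roughly 100/n percentage points, where n is the number of aircraft counted. The quick check below is our illustrative reading, not a figure from the report, and is consistent with roughly 14 aircraft being in reportable status at any one time:

    # Per-aircraft swing in the mission capable rate for small fleets.
    # The fleet sizes are illustrative; only the 21 comes from the report.
    for fleet_size in (14, 21):
        swing = 100.0 / fleet_size
        print(f"{fleet_size} aircraft: one aircraft moves the rate ~{swing:.1f} points")
    # 14 aircraft -> ~7.1 points, matching the officials' figure;
    # 21 aircraft -> ~4.8 points.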
Despite difficulties associated with stealth maintenance, the B-2 is capable of accomplishing its wartime missions, having achieved mission capable rates of 64 percent for Operation Enduring Freedom and 73 percent during the initial months of Operation Iraqi Freedom.

We assessed the near-term program strategy for the B-2 as green because the Air Force has developed and funded a strategy for sustaining the B-2 inventory. The Air Force funded and continues to implement its Alternate High Frequency Material modification, which reduces the number of steps and the overall length of time required to conduct stealth maintenance. Thus far, three aircraft have received the modification, and the entire fleet is funded and scheduled to receive the upgrade by the end of the decade. Other areas of concern include cracking on the aft deck and cracking in the windshield. For the aft deck cracks, officials stated that the extremely high temperatures from the engines cause the cracking; the Air Force has fielded kits to stiffen the aft decks, making them less susceptible to the extreme heat. For windshield cracks, officials stated that redesigning the spacing of drill holes will address the problem; delivery of the new windshields is scheduled to begin in late 2005. Additionally, the Air Force has funded and is implementing improvements in B-2 connectivity and interoperability, to include integrating advanced weapons.

We assessed the long-term program strategy for the B-2 as green because the Air Force is addressing immediate issues for the aircraft while concurrently developing and funding longer-term solutions. In addition to continuing efforts to address stealth maintenance and aft deck cracks, the Air Force is also addressing the issue of diminishing manufacturing sources. Officials stated that technological advancements and the small size of the B-2 fleet are a disincentive for manufacturers to continue making B-2-unique parts. To compensate, the Air Force is modernizing components and systems within the B-2, developing internal processes to contract out management of B-2-unique parts, and closely monitoring all parts to ensure that the supply chain has ample time to adjust.

The C-5 Galaxy is the largest Air Force transport aircraft; it can carry large cargo items over intercontinental ranges at jet speeds and can take off and land in relatively short distances. The C-5 is one of only two aircraft that can carry very large military equipment. With aerial refueling, the aircraft's range is limited only by crew endurance. The first C-5 was delivered in 1970. There are 112 C-5 aircraft in the Air Force's inventory, and their average age is 26 years. Although the C-5 is expected to remain in service through 2040, the exact length of service and the composition of the C-5 fleet are dependent upon the Mobility Capabilities Study and the Quadrennial Defense Review, which were not completed at the time of our review.

For the reasons identified in our December 2003 report, we continue to assess the condition of the C-5 as yellow. As shown in figure 55, mission capable rates for the C-5 consistently remained below Air Force goals between fiscal years 1999 and 2004. Officials stated that the size and age of the aircraft make the C-5 maintenance intensive, citing the example of fatigued metal and adhesives, which take time to replace. They further stated that the age of the C-5 makes it difficult to find manufacturing sources for some parts, particularly avionics and engine components.
Additionally, the avionics systems and engines are noncompliant with upcoming global airspace and air traffic requirements, potentially limiting where and how the aircraft can be used. Despite these challenges, officials stated that the C-5 can currently perform its missions, including providing transport for tsunami relief efforts and moving supplies for operations in Iraq and Afghanistan.

We assessed the near-term program strategy for the C-5 as yellow because of delays and funding shortages in key modernization efforts. The two modernizations for the C-5 are intended to improve avionics and engines; avionics upgrades must occur before engine modernization can begin. When complete, the programs are expected to address many manufacturing source issues, ensure compliance with global air traffic standards, and increase the aircraft's capability. However, officials stated that the avionics upgrades are experiencing software integration problems, resulting in a delay of at least 3 months and cost increases of $20 million. Since engine modernization is predicated on the avionics upgrades, the costs for engine modernization have also increased, by $30 million. Additionally, officials stated that after a projectile attack damaged a C-5 during Operation Iraqi Freedom, defensive systems became a top priority, and the Air Force requested and received funding through fiscal year 2005 supplemental appropriations to upgrade defenses against infrared-guided surface-to-air missiles on 51 C-5 aircraft.

We assessed the Air Force's long-term strategy for the C-5 as yellow because requested funding is inconsistent with long-term requirements to sustain and modernize the inventory. Officials stated that, with upgrades to avionics and engines, the C-5 could last through 2040. The Air Force has requested funding for engine upgrades for the entire fleet of 112 C-5 aircraft; however, it has funded the procurement and installation of avionics upgrades for only 59 aircraft, leaving 53 aircraft without the avionics upgrades necessary to support the new engines. Officials stated that the Air Force remains uncertain about the size of the final C-5 fleet and whether to fund the remaining C-5 upgrades, but will have a better idea following the completion of the Mobility Capabilities Study.

The KC-135 is among the oldest aircraft in the Air Force's inventory and represents 90 percent of the aircraft in the tanker fleet. Its primary mission is aerial refueling of fixed-wing aircraft, and it supports Air Force, Navy, Marine Corps, and allied aircraft. There are three KC-135 variants currently in the fleet: the E, R, and T models. Each model is a reengined version of the original KC-135A. Of these three variants, the E model belongs to the Air Force Reserve and Air National Guard. The first KC-135 was delivered in June 1957. There are 531 KC-135 aircraft in the Air Force's inventory, and the average age is about 44 years. Currently, no replacement has been identified.

Consistent with our December 2003 report, we assessed the condition of the KC-135 aircraft as yellow because, as shown in figure 57, it has not met its mission capable goals, and issues associated with age and corrosion continue to be a concern. Officials stated that age is the primary driver of KC-135 maintenance issues and that maintainers discover new problems with the aircraft every time it undergoes scheduled depot maintenance. Age-related issues with the aircraft include fuel bladder leaks, parts obsolescence, and problems with the landing gear's steel brakes.
Corrosion has regularly been discovered in new areas on the aircraft, requiring increased amounts of depot maintenance time. The Air Force has yet to determine the extent of the problems caused by newly discovered corrosion. The older variants also have a higher incidence of problems; for example, the Air Force removed 29 KC-135E aircraft from flight status due to engine strut problems and corrosion.

We assessed the Air Force's near-term strategy for the KC-135 as yellow because age-related maintenance issues are expected to increase and the severity of potential age-related issues remains unknown. Although officials stated that maintenance problems with the aircraft are currently manageable during programmed depot maintenance, they expect the number of maintenance man-hours to increase by 2.5 percent each year, a rate that would compound to roughly a 28 percent increase over a decade. Officials also stated that the severity of potential problems from newly discovered corrosion remains unknown, so additional maintenance requirements are likely. Officials further stated that the effects of KC-135 operations to support Iraq and Afghanistan are still unknown, but the Air Force has instituted additional inspections and procedures to address potential effects associated with higher usage. The only major modification for the KC-135, the Global Air Traffic Management avionics system upgrade, remains on schedule and is fully funded.

We assessed the Air Force's long-term strategy for the KC-135 as red because the future of the KC-135 fleet and the Air Force's tanker strategy are unknown. Before acquiring new tankers, the Air Force must complete a Recapitalization Analysis of Alternatives, a study to narrow the field of possible future tanker options. Originally scheduled for completion in December 2004, the analysis has been delayed until at least August 2005. Regardless of the option chosen, officials stated that all recapitalization efforts will require use of the KC-135 in the near term, and that delays in fielding a replacement exacerbate problems in maintaining the existing fleet over the long term and delay modernization efforts that are predicated on the replacement timeline. In fiscal year 2005, $100 million was appropriated for a tanker replacement transfer fund, and $9.7 billion has been requested for the tanker replacement program in DOD's 2006 Future Years Defense Program.

In addition to the contact named above, the following individuals also made major contributions to the report: David Schmitt, Assistant Director; Patricia Lentini; Vipin Arora; Janine Cantin; Alissa Czyz; Barbara Hills; Barbara Gannon; Stanley Kostyla; Josh Margraf; Brian Mateja; Kimberly Mayo; Jim Melton; Kenneth Patton; Malvern Saavedra; and John Trubey.

With continued heavy military involvement in operations in Iraq and Afghanistan, the Department of Defense (DOD) is spending billions of dollars sustaining or replacing its inventory of key equipment items while also planning to spend billions of dollars to develop and procure new systems to transform the department's warfighting capabilities. GAO developed a red, yellow, green assessment framework to (1) assess the condition of 30 selected equipment items from across the four military services and (2) determine the extent to which DOD has identified near- and long-term program strategies and funding plans to ensure that these items can meet defense requirements.
GAO selected these items based on input from the military services, congressional committees, and its prior work. The 30 equipment items included 18 items that were first assessed in GAO's 2003 report.

While the fleet-wide condition of the 30 equipment items GAO selected for review varied, GAO's analysis showed that reported readiness rates declined between fiscal years 1999 and 2004 for most of these items. The decline in readiness, which occurred more markedly in fiscal years 2003 and 2004, generally resulted from (1) the continued high use of equipment to support current operations and (2) maintenance issues caused by the advancing ages and complexity of the systems. Key equipment items, such as Army and Marine Corps trucks, combat vehicles, and rotary wing aircraft, have been used well beyond normal peacetime levels during deployments in support of operations in Iraq and Afghanistan. DOD is currently performing its Quadrennial Defense Review, which will examine defense programs and policies for meeting future requirements. Until the department completes this review and ensures that condition issues for key equipment are addressed, DOD risks a continued decline in readiness trends, which could threaten its ability to continue meeting mission requirements.

The military services have not fully identified near- and long-term program strategies and funding plans to ensure that all of the 30 selected equipment items can meet defense requirements. GAO found that, in some cases, the services' near-term program strategies have gaps in that they do not address capability shortfalls, funding is not included in DOD's 2006 budget request, or there are supply and maintenance issues that may affect near-term readiness. Additionally, the long-term program strategies and funding plans are incomplete for some of the equipment items GAO reviewed in that future requirements are not identified, studies are not completed, funding for maintenance and upgrades was limited, or replacement systems were delayed or not yet identified. 10 U.S.C. 2437 requires the military services to develop sustainment plans for equipment items when their replacement programs begin development, unless the replacement systems will reach initial operating capability before October 2008. However, most of the systems that GAO assessed as red had issues severe enough to warrant immediate attention because of long-term strategy and funding problems, and these systems were not covered by this law. As a result, DOD is not required to report sustainment plans for these critical items. For the next several years, funding to sustain or modernize aging equipment will have to compete with other DOD priorities, such as current operations, force structure changes, and replacement system acquisitions. Without developing complete sustainment and modernization plans and identifying funding needs for all priority equipment items, DOD may be unable to meet future requirements for defense capabilities. Furthermore, until DOD develops these plans, Congress will be unable to ensure that DOD's budget decisions address deficiencies related to key military equipment.
Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission or business, and it is especially important for government agencies, where the public's trust is essential. The dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet are changing the way our government, the nation, and much of the world communicate and conduct business. Without proper safeguards, systems are unprotected from individuals and groups with malicious intent who can intrude and use their access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. These concerns are well founded for a number of reasons, including the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, the steady advance in the sophistication and effectiveness of attack technology, and dire warnings of new and more destructive attacks to come.

Computer-supported federal operations are likewise at risk. Our previous reports and reports by several agencies' inspectors general describe persistent information security weaknesses that place a variety of federal operations at risk of inappropriate disclosure, fraud, and disruption. We have designated information security as a governmentwide high-risk area since 1997. Recognizing the importance of securing the information systems of federal agencies, Congress enacted the Federal Information Security Management Act (FISMA) in December 2002. FISMA requires each agency to develop, document, and implement an agencywide information security program for the data and systems that support the operations and assets of the agency, using a risk-based approach to information security management. Information security program requirements to be implemented include assessing risk; developing and implementing policies, procedures, and security plans; providing security awareness and training; testing and evaluating the effectiveness of controls; planning, implementing, evaluating, and documenting remedial actions to address information security deficiencies; detecting, reporting, and responding to security incidents; and ensuring continuity of operations.

Following the stock market crash of 1929, Congress passed the Securities Exchange Act of 1934, establishing SEC to enforce securities laws, regulate the securities markets, and protect investors. To carry out its responsibilities and help ensure that fair, orderly, and efficient securities markets are maintained, the commission issues rules and regulations that promote adequate and effective disclosure of information to the investing public. The commission also oversees and requires the registration of other key participants in the securities industry, including stock exchanges, broker-dealers, clearing agencies, depositories, transfer agents, investment companies, and public utility holding companies. SEC is an independent, quasi-judicial agency that operates at the direction of five commissioners appointed by the President and confirmed by the Senate. In fiscal year 2006, SEC had a budget of about $888 million and a staff of 3,590. Each year the commission accepts, processes, and publicly disseminates more than 600,000 documents from companies and individuals, including annual reports from more than 12,000 reporting companies.
In fiscal year 2006, the commission collected $499 million in filing fees and $1.8 billion in penalties and disgorgements. To support its financial operations and store the sensitive information it collects, the commission relies extensively on computerized systems interconnected by local and wide area networks. To process and track financial transactions such as filing fees paid by corporations and penalties from enforcement activities, SEC relies on several applications: Momentum, the Electronic Data Gathering, Analysis, and Retrieval system (EDGAR), and the Case Activity Tracking System 2000 (CATS). Momentum, a commercial off-the-shelf accounting software product, is used to record the commission's accounting transactions, to maintain its general ledger, and to maintain the information SEC uses to produce financial reports. EDGAR is an Internet-based system used to collect, validate, index, and accept the submissions of forms filed by SEC-registered companies; EDGAR transfers this information to the general ledger nightly. The commission's Division of Enforcement uses CATS, a modified commercial off-the-shelf database application, to record enforcement data and create management reports. CATS tracks enforcement-related data, including SEC-imposed fines and penalties. In addition, the commission uses these systems to maintain sensitive information, including filing data for corporations and legal information on enforcement activities.

According to FISMA, the Chairman of the SEC has responsibility for providing information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of the agency's information systems and information. The Chairman of the SEC delegated authority to the chief information officer (CIO) to be responsible for establishing and maintaining a comprehensive information security program and governance framework. As part of this program, the CIO is to (1) ensure that policies, procedures, and control techniques addressing all applicable information security requirements are effectively implemented and maintained; (2) work closely with designated authorizing officials to ensure that the SEC-wide program is effectively implemented and managed; and (3) delegate authority to the agency chief information security officer (CISO) to carry out information security responsibilities and to ensure compliance with applicable federal laws, regulations, and standards. The CISO serves as the CIO's liaison with system owners and authorizing officials to ensure the agency security program is effectively implemented. The CISO also ensures that certifications and accreditations are accomplished in a timely and cost-effective manner and that there is centralized reporting of all information security-related activities.

The objectives of our review were to assess (1) the status of SEC's actions to correct or mitigate previously reported information security weaknesses and (2) the effectiveness of the commission's information system controls for ensuring the confidentiality, integrity, and availability of its information systems and information. As part of our assessment of the effectiveness of SEC's information system controls, we also evaluated the commission's progress toward meeting the requirements for an agencywide security program mandated by FISMA.
We conducted our review using our Federal Information System Controls Audit Manual (FISCAM), a methodology for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized data. Specifically, we evaluated information security controls in the following areas: security management, which provides a framework and continuing cycle of activity for managing risk, developing security policies, assigning responsibilities, and monitoring the adequacy of the agency's computer-related controls; access controls, which limit or detect access to computer resources (data, programs, equipment, and facilities), thereby protecting them against unauthorized modification, loss, and disclosure; configuration management, which prevents unauthorized changes to information system resources (for example, software programs and hardware configurations); segregation of duties, which includes policies, procedures, and an organizational structure to manage who can control key aspects of computer-related operations; and contingency planning, which ensures that when unexpected events occur, critical operations continue without disruption or are promptly resumed and critical and sensitive data are protected.

For our first objective, we examined supporting documentation and conducted tests and evaluations of corrective actions taken by the commission to correct weaknesses previously reported as unresolved at the conclusion of our 2005 audit. To evaluate the effectiveness of the commission's information security controls and program, we identified and examined its pertinent security policies, procedures, guidance, security plans, and relevant reports. Where federal requirements, laws, and other guidelines, including National Institute of Standards and Technology guidance, were applicable, we used these to assess the extent to which the commission had complied with specific requirements. We held discussions with key security representatives, system administrators, and management officials to determine whether information system controls were in place, adequately designed, and operating effectively. In addition, we conducted tests and observations of controls in operation using federal guidance, checklists, and vendor best practices.
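To give a sense of what a checklist-based control test can look like in practice, the sketch below compares a host's exported security settings against a baseline and reports deviations. The setting names, thresholds, and example data are hypothetical illustrations, not the actual FISCAM procedures or SEC configurations:

    # Hypothetical baseline check; setting names and values are assumed.
    def test_host(settings):
        findings = []
        if settings.get("min_password_length", 0) < 8:
            findings.append("minimum password length below baseline")
        lockout = settings.get("lockout_threshold", 0)
        if lockout == 0 or lockout > 5:  # 0 means lockout disabled
            findings.append("account lockout weaker than baseline")
        if not settings.get("audit_logon_events", False):
            findings.append("logon auditing disabled")
        return findings

    host = {"min_password_length": 6, "lockout_threshold": 0,
            "audit_logon_events": True}  # example settings export
    for finding in test_host(host):
        print("FINDING:", finding)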
SEC has corrected or mitigated 58 of the 71 security control weaknesses previously reported as unresolved at the conclusion of our 2005 audit. Specifically, the commission resolved all of the previously reported weaknesses in security-related activities and contingency planning, and it has made significant progress in resolving access control weaknesses. A key reason for SEC's progress was that its senior management was actively engaged in implementing information security-related activities and mitigating the previously reported weaknesses.

The commission has addressed 34 of the previously identified access control weaknesses. For example, SEC has implemented controls to enforce strong passwords and removed excessive rights granted to certain users on its Microsoft Windows servers and workstations; established audit trails on its critical financial systems; securely reconfigured its internal network infrastructure; implemented virus protection on all of its Microsoft Windows servers; developed and implemented procedures to review employee and contractor access to the data center based on SEC-established criteria; assessed the physical security of each of its 11 field office locations and developed a plan to review each of the offices biannually; and developed an incident response program that includes policies and procedures for handling and analyzing incidents.

SEC has also corrected or mitigated all 18 security-related activity weaknesses previously reported as unresolved at the conclusion of our 2005 audit. For example, the commission has implemented a risk assessment process; established a process to ensure that effective information system controls exist to safeguard its payroll/personnel system; had 99 percent of employees and contractors complete security awareness training; developed and documented a process to ensure background investigations were conducted for employees and contractors; and established a process to identify and remove computer access accounts granted to separated contractors or nonpaid users of SEC systems. In addition, SEC has developed and updated its disaster recovery plans covering major applications. Moreover, the commission has tested these plans throughout the year through a series of disaster recovery exercises covering major applications and various scenarios.

As noted above, a key reason for this progress was the active engagement of SEC's senior management. The Chairman has received regular briefings on agency progress in resolving the previously reported weaknesses, and the CIO has coordinated efforts with other offices involved in implementing information security policies and controls at the commission. An executive-level committee with oversight responsibility for the commission's internal controls was also established; it has responsibility for approving programs and policies for internal control assessment and testing, as well as for developing policies to resolve internal control weaknesses.

While SEC has made important progress in strengthening its information security controls and program, it has not completed actions to correct or mitigate 13 previously reported weaknesses. For example, the commission has not mitigated weaknesses in user account and password management, periodically reviewed software changes, or adequately controlled access to sensitive information. Failure to resolve these issues will leave the commission's sensitive data vulnerable to unauthorized disclosure, modification, or destruction.

SEC has not consistently implemented certain key controls to effectively safeguard the confidentiality, integrity, and availability of its financial and sensitive information and information systems. In addition to the 13 previously identified weaknesses that remain unresolved, we identified 15 new information security weaknesses in access controls and configuration management. By the conclusion of our review, SEC had taken action to address 11 of the 15 new weaknesses.
A primary reason for these control weaknesses is that SEC had not consistently implemented elements of its information security program. As a result, the commission cannot be assured that its controls are appropriate and working as intended, and its financial and sensitive data and systems are at increased risk of unauthorized disclosure, modification, or destruction.

Access controls limit or detect inappropriate access to computer resources (data, equipment, and facilities), thereby protecting them from unauthorized disclosure, modification, and loss. Specific access controls include boundary protection, identification and authentication, authorization, and physical security. Without adequate access controls, unauthorized individuals, including outside intruders and former employees, can surreptitiously read and copy sensitive data and make undetected changes or deletions for malicious purposes or personal gain. In addition, authorized users can intentionally or unintentionally modify or delete data or execute changes that are outside their span of authority.

Boundary protection pertains to establishing a logical or physical boundary around a set of information resources and implementing measures to prevent unauthorized information exchange across the boundary in either direction. Organizations physically allocate publicly accessible information system components to separate subnetworks with separate physical network interfaces, and they prevent public access into their internal networks. Unnecessary connectivity to an organization's network increases not only the number of access paths that must be managed and the complexity of that task, but also the risk of unauthorized access in a shared environment. SEC policy requires that certain automated boundary protection mechanisms be established to control and monitor communications at the external boundary of the information system and at key internal boundaries within the system. Additionally, SEC policy requires that if remote access technology is used to connect to the network, it must be configured securely. However, the commission did not configure a remote access application to include required boundary protection mechanisms. For example, the application was configured to allow simultaneous access to the Internet and the internal network, which could allow an attacker who compromised a remote user's computer to remotely control the user's secure session from the Internet. In addition, SEC did not securely configure the systems used for remote administration of its key information technology resources. Consequently, a remote attacker could exploit these vulnerabilities to launch attacks against other sensitive information systems within the commission.
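Permitting simultaneous Internet and internal-network access, commonly called split tunneling, is the kind of setting an automated review can flag. The sketch below checks an exported remote-access profile for it; the key=value profile format and the split_tunnel field name are illustrative assumptions, not features of SEC's actual remote access product:

    # Hypothetical profile format (key=value lines); field names are assumed.
    def allows_split_tunneling(profile_text):
        settings = {}
        for line in profile_text.splitlines():
            key, sep, value = line.partition("=")
            if sep:
                settings[key.strip().lower()] = value.strip().lower()
        # Treat anything other than an explicit "disabled" as a finding.
        return settings.get("split_tunnel", "disabled") != "disabled"

    profile = "split_tunnel=enabled\nidle_timeout=15\n"
    if allows_split_tunneling(profile):
        print("FINDING: profile permits simultaneous Internet and internal access")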
A computer system must be able to identify and authenticate different users so that activities on the system can be linked to specific individuals. When an organization assigns unique user accounts to specific users, the system is able to distinguish one user from another, a process called identification. The system must also establish the validity of a user's claimed identity by requesting some kind of information, such as a password, that is known only by the user, a process known as authentication. SEC policy requires the implementation of automated identification and authentication mechanisms that enable the unique identification of individual users. However, the commission did not securely enforce identification and authentication controls on all of its information systems. For example, SEC did not remove default database accounts with known or weak passwords or ensure that these accounts had been locked. In addition, the commission was still unable to enforce strong password management on all of its systems and continued to have weak key-management practices for some of its secure connections. This increases the risk that unauthorized users could gain access to SEC systems and sensitive information.

Authorization is the process of granting or denying access rights and privileges to a protected resource, such as a network, system, application, function, or file. A key component of granting or denying access rights is the concept of "least privilege." Least privilege is a basic principle for securing computer resources and data: users are granted only those access rights and permissions that they need to perform their official duties. To restrict legitimate users' access to only those programs and files that they need in order to do their work, organizations establish access rights and permissions. "User rights" are allowable actions that can be assigned to users or to groups of users. File and directory permissions are rules associated with a particular file or directory that regulate which users can access it, and the extent of that access. To avoid unintentionally giving users unnecessary access to sensitive files and directories, an organization must give careful consideration to its assignment of rights and permissions. SEC policy requires that each user or process be assigned only those privileges needed to perform authorized tasks.

However, SEC system administrators did not ensure that their systems sufficiently restricted system and database access and privileges to only those users and processes requiring them to perform authorized tasks. For example, administrators had not properly restricted access rights to sensitive files on some servers, nor did the commission adequately restrict privileges to a system database. In addition, new requests for, or modifications of, user access to the EDGAR system were not reviewed by the system owner, and current documentation was not maintained on user privileges granted to individuals based on their roles and divisions. The commission also continued to experience difficulty in implementing a process to effectively remove network system accounts of separated employees and in adequately controlling access to sensitive information. These conditions provide more opportunities for unauthorized individuals to escalate their privileges and make unauthorized changes to files.
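One simple way to spot least-privilege violations of the kind described above is to scan for files that any local user can modify. The following sketch is our illustration, not SEC's procedure: it walks a directory tree on a Unix-like system and flags world-writable files, with the path a hypothetical placeholder:

    import os
    import stat

    # Flag files whose permission bits grant write access to "other"
    # (any local user), a common least-privilege violation on Unix.
    def world_writable_files(root):
        findings = []
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    mode = os.stat(path).st_mode
                except OSError:
                    continue  # skip unreadable or vanished files
                if mode & stat.S_IWOTH:
                    findings.append(path)
        return findings

    for path in world_writable_files("/opt/app/sensitive"):  # placeholder path
        print("FINDING: world-writable file:", path)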
Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. These controls restrict physical access to computer resources, usually by limiting access to the buildings and rooms in which the resources are housed and by periodically reviewing the access granted in order to ensure that it continues to be appropriate. At SEC, physical access control measures (such as guards, badges, and locks, used alone or in combination) are vital to protecting the agency's sensitive computing resources from both external and internal threats. SEC policy requires that specific procedures be followed to protect and control physical access to sensitive work areas in its facilities. These procedures, however, were not always followed. Specifically, the commission had not properly implemented perimeter security at a key location: guards at the location did not inspect photo identification or check expiration dates. In addition, the commission did not adequately restrict physical access to its network in public locations. Until SEC fully addresses its physical security vulnerabilities, there is increased risk that unauthorized individuals could gain access to sensitive computing resources and data and inadvertently or deliberately misuse or destroy them.

To protect an organization's information, it is important to ensure that only authorized applications and programs are placed in operation and that the applications are securely configured. This process, known as configuration management, consists of instituting policies, procedures, and techniques to help ensure that all programs and program modifications are properly authorized, tested, and approved. Specific controls for configuration management include policies and procedures over change control and patch management. Configuration management policies and procedures should be developed, documented, and implemented at the agency, system, and application levels to ensure an effective configuration management process. Patch management, including up-to-date patch installation, helps to mitigate vulnerabilities associated with flaws in software code, which could otherwise be exploited to cause significant damage. SEC policy requires vulnerability management of system hardware and software on all of its information systems.

SEC continues to have difficulty implementing effective control over changes to software and other applications. For example, the commission lacked procedures to periodically review application code to ensure that only authorized changes were made to the production environment, did not document authorizations for software modifications, and did not always follow its policy of assigning risk classifications to application changes. As a result, unapproved changes to SEC production systems could be made. In addition, the commission did not ensure the application of timely and comprehensive patches and fixes to system software. For example, the commission did not consistently install critical patches for the operating system and third-party applications on its servers and end-user workstations. Failure to keep system patches up to date could allow unauthorized individuals to gain access to network resources or launch denial-of-service attacks against those resources, increasing the risk that the integrity of network devices and administrator workstations could be compromised.
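Patch currency is straightforward to audit once installed versions are inventoried. The sketch below compares an inventory of installed software versions against required minimums and reports hosts that are behind; the product names, version numbers, and host names are hypothetical, not SEC data:

    # Required minimum versions (assumed for illustration).
    REQUIRED = {"webserver": (2, 4, 10), "dbclient": (9, 2, 0)}

    # Hypothetical inventory, e.g., exported from a vulnerability scanner.
    installed = [
        ("host-01", "webserver", (2, 4, 3)),
        ("host-02", "dbclient", (9, 2, 1)),
    ]

    for host, product, version in installed:
        minimum = REQUIRED.get(product)
        if minimum and version < minimum:  # tuples compare element-wise
            print(f"FINDING: {host} runs {product} {version}; minimum is {minimum}")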
A primary reason for these control weaknesses is that SEC had not consistently implemented elements of its information security program. The effective implementation of an information security program includes implementing the key elements required under FISMA and establishing a continuing cycle of activity, which includes assessing risk, developing and implementing security procedures, and monitoring the effectiveness of those procedures, to ensure that the elements implemented under the program are effective. FISMA requires agencies to develop, document, and implement an information security program, which includes the following: developing and implementing policies and procedures; testing and evaluating the effectiveness of controls; and planning, implementing, evaluating, and documenting remedial actions to address information security deficiencies.

A key task in developing, documenting, and implementing an effective information security program is to establish and implement risk-based policies, procedures, and technical standards that cover security over an agency's computing environment. If properly implemented, policies and procedures can help to reduce the risk that could come from unauthorized access or disruption of services. Because security policies are the primary mechanism by which management communicates its views and requirements, it is important to document and implement them. Although SEC has developed and documented information security-related policies and procedures, it has not consistently implemented them across all systems. According to SEC policy, heads of offices and system owners are responsible for implementing policies and procedures as well as for reviewing and enforcing security for their systems. However, our analysis showed that 13 of the 15 newly identified weaknesses were due to the inconsistent implementation of policies and procedures by system owners and offices. Until the commission can verify that all system owners and offices implement agency policies and procedures, it will not have assurance that requirements are being followed and that controls will work as intended.

Testing and evaluating systems is a key element of an information security program that ensures that an agency is in compliance with policies and that the policies and controls are both appropriate and effective. This type of oversight is a fundamental element because it demonstrates management's commitment to the security program, reminds employees of their roles and responsibilities, and identifies and mitigates areas of noncompliance and ineffectiveness. Although control tests and evaluations may encourage compliance with security policies, the full benefits are not achieved unless the results improve the security program. Analyzing the results of security reviews provides security specialists and business managers with a means of identifying new problem areas, reassessing the appropriateness of existing controls, and identifying the need for new controls. FISMA requires that the frequency of tests and evaluations be based on risk, but occur no less than annually. However, SEC did not sufficiently test and evaluate the effectiveness of controls for a major system as required by its certification and accreditation process. When the general ledger system underwent a significant change, agency policy required that the system undergo recertification and reaccreditation, which included testing and evaluation of system controls. SEC did not complete this recertification and reaccreditation testing, and we identified three control weaknesses associated with the change to the general ledger system that SEC had not detected. Because the commission has not completed sufficient testing and evaluation of the general ledger system after it underwent a significant change, it cannot be assured that its security policies and controls are appropriate and working as intended.
These plans assist agencies in identifying, assessing, prioritizing, and monitoring progress in correcting security weaknesses found in information systems. According to Office of Management and Budget guidance, agencies should take timely and effective action to correct deficiencies that they have identified through a variety of information sources. To accomplish this task, remedial action plans should be developed for each deficiency, and progress should be tracked for each. Although SEC developed remedial action plans to mitigate identified weaknesses in its systems and developed a mechanism to track the progress of actions to correct deficiencies, it did not consistently take effective and timely action to do so. Our analysis showed that 7 of the 15 new weaknesses had been previously identified in remedial action plans. Of the 7 weaknesses, 4 were not effectively mitigated, although SEC noted that they had been closed in prior-year remedial action plans. Another known weakness had been listed in a remedial action plan since April 2004. These weaknesses persisted in part because, until recently, system remedial action plans did not include completion dates for all deficiencies. These inconsistencies exist because the commission did not develop, document, and implement a policy on remedial action plans to ensure deficiencies were mitigated in an effective and timely manner. As a result, SEC will have limited assurance that all known information security weaknesses are mitigated or corrected in an effective and timely manner. Public trust is vital to the proper functioning of the securities markets. Because SEC relies heavily on computerized systems to maintain fair, orderly, and efficient securities markets, the security of its financial and sensitive data is paramount. While the commission has made important progress in addressing our previous information security recommendations and strengthening its information security program, both outstanding and newly identified weaknesses continue to impair SEC's ability to ensure the confidentiality, integrity, and availability of financial and other sensitive data. Accordingly, these deficiencies represent a reportable condition in internal controls over SEC's information systems. Sustained senior management involvement and oversight are vital for SEC's newly developed security program to undergo the continuous cycle of activity required for the effective development, implementation, and monitoring of policies and procedures. If the commission continues to have senior management actively engaged and continues to implement a framework and continuous cycle of activity, it will help ensure that outstanding weaknesses are mitigated or resolved and that key controls are consistently implemented. If progress is not sustained, SEC will not have sufficient assurance that its processes can mitigate current weaknesses and detect new weaknesses, and its financial and sensitive data will remain at risk of unauthorized disclosure, modification, or destruction. To assist the commission in improving the implementation of its agencywide information security program, we recommend that the SEC Chairman take the following three actions:
1. verify that all system owners and offices implement agency security policies and procedures;
2. complete recertification and reaccreditation testing and evaluation on the general ledger system; and
3. develop, document, and implement a policy on remedial action plans to ensure deficiencies are mitigated in an effective and timely manner.
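As illustrative context for recommendation 3, the sketch below shows the kind of tracking check a remedial action plan policy would institutionalize—flagging open plan entries that lack completion dates or are past due, the two gaps identified above. The identifiers, dates, and field names are hypothetical; this is not SEC's actual tracking mechanism.

```python
# Illustrative sketch only: hypothetical remedial action plan entries,
# not SEC's actual tracking system.
from datetime import date

plans = [
    {"id": "W-001", "status": "open", "due": date(2004, 4, 30)},
    {"id": "W-002", "status": "open", "due": None},  # no completion date assigned
    {"id": "W-003", "status": "closed", "due": date(2005, 6, 1)},
]

def flag_deficiencies(plans: list, today: date = None) -> list:
    """Return (id, reason) for open entries that are missing a
    completion date or are past their scheduled completion date."""
    today = today or date.today()
    flagged = []
    for plan in plans:
        if plan["status"] != "open":
            continue
        if plan["due"] is None:
            flagged.append((plan["id"], "no completion date assigned"))
        elif plan["due"] < today:
            flagged.append((plan["id"], f"past due since {plan['due']}"))
    return flagged

for plan_id, reason in flag_deficiencies(plans):
    print(f"{plan_id}: {reason}")
```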
In a separate report designated "Limited Official Use Only," we also made 18 recommendations to the SEC Chairman to address actions needed to correct 15 information security weaknesses. In providing written comments on a draft of this report, the SEC Chairman and Chief Information Officer agreed that the agency needs to maintain momentum in addressing the remaining gaps in its information security program and stated that the agency is actively working to complete corrective actions for findings that remain open and to enhance its information security program by implementing our recommendations. They also identified several actions the agency has completed to resolve known weaknesses and stated that going forward the commission's primary focus will be on making its information security program more aggressive in identifying and resolving issues as or before they arise, to ensure high levels of security compliance across the agency. Their written comments are reprinted in appendix I. This report contains recommendations to you. As you know, 31 U.S.C. 720 requires that the head of a federal agency submit a written statement of the actions taken on our recommendations to the Senate Committee on Homeland Security and Governmental Affairs and to the House Committee on Oversight and Government Reform not later than 60 days from the date of the report and to the House and Senate Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of this report. Because agency personnel serve as the primary source of information on the status of recommendations, we request that you also provide us with a copy of your agency's statement of action to serve as preliminary information on the status of open recommendations. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Banking, Housing, and Urban Affairs; Senate Committee on Homeland Security and Governmental Affairs; House Committee on Financial Services; House Committee on Oversight and Government Reform; and SEC's Office of Managing Executive for Operations; Office of the Executive Director; Office of Financial Management; Office of Information Technology; and the SEC's Inspector General. We will also make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-6244 or by e-mail at wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. In addition to the individual named above, Charles Vrabel and Lon Chin, Assistant Directors; Angela Bell, Jason Carroll, Daniel Castro, West Coile, William Cook, Anh Dang, Kirk Daubenspeck, Valerie Hopkins, Henry Sutanto, Amos Tevelow, and Chris Warweg made key contributions to this report. | In carrying out its mission to ensure that securities markets are fair, orderly, and efficiently maintained, the Securities and Exchange Commission (SEC) relies extensively on computerized systems. Integrating effective information security controls into a layered control strategy is essential to ensure that SEC's financial and sensitive information is protected from inadvertent or deliberate misuse, disclosure, or destruction.
As part of its audit of SEC's financial statements, GAO assessed (1) SEC's actions to correct previously reported information security weaknesses and (2) the effectiveness of controls for ensuring the confidentiality, integrity, and availability of SEC's information systems and information. To do this, GAO examined security policies and artifacts, interviewed pertinent officials, and conducted tests and observations of controls in operation. SEC has made important progress toward correcting previously reported information security control weaknesses. Specifically, it has corrected or mitigated 58 of the 71 weaknesses previously reported as unresolved at the conclusion of GAO's 2005 audit. The commission resolved all of the previously reported weaknesses in security-related activities and contingency planning, and made significant progress in resolving access control weaknesses. A key reason for its progress was that SEC's senior management was actively engaged in implementing information security-related activities. Despite this progress, SEC has not consistently implemented certain key controls to effectively safeguard the confidentiality, integrity, and availability of its financial and sensitive information and information systems. In addition to 13 previously identified weaknesses that remain unresolved, 15 new information security weaknesses were identified. By the conclusion of GAO's review, SEC had taken action to address 11 of the 15 new weaknesses. A primary reason for these control weaknesses is that SEC had not consistently implemented elements of its information security program. This included inconsistent implementation of agency policies and procedures, not sufficiently testing and evaluating the effectiveness of controls for a major system as required by its certification and accreditation process, and not consistently taking effective and timely action to correct deficiencies identified in remedial action plans. Until SEC consistently implements these elements of its information security program, it will have limited assurance that it will be able to manage risks and protect sensitive information on an ongoing basis. |
Congress passed ERISA to protect the rights and interests of participants and beneficiaries of private sector employee benefit plans. Before the enactment of ERISA, few rules governed the funding of defined benefit pension plans, and participants had no guarantee that they would receive promised benefits. Title IV of ERISA created PBGC to insure plan participants' benefits and to encourage the continuation and maintenance of private sector defined benefit pension plans by providing timely and uninterrupted payment of pension benefits. Through its two insurance programs, PBGC covers certain private sector defined benefit plans. PBGC is funded through insurance premiums from employers that sponsor insured pension plans as well as investment income and assets from terminated pension plans. ERISA established a governance structure consisting of a board of directors, with the Secretary of Labor as the Chairman of the Board. ERISA provided the Secretary of Labor with responsibility for administering PBGC's operations, personnel, and budget. The Secretary delegated the responsibility for administering PBGC to an Executive Director through a series of chairman's orders describing the Executive Director's responsibilities. For example, one order issued in 1984 authorized the Executive Director to make final decisions addressing legal matters on behalf of the corporation. In 2006, PPA replaced the Chairman of the Board as PBGC's administrator with a Senate-confirmed director. The PPA established the director position at the same level of the executive schedule as two of the PBGC board representatives—the Under Secretaries of Commerce and the Treasury—as well as the heads of other federal government corporations, such as the Federal Deposit Insurance Corporation (FDIC) and the Export-Import Bank of the United States. In addition, the corporation is aided by a seven-member Advisory Committee appointed by the President to represent the interests of labor, employers, and the general public. This committee has an advisory role but has no statutory authority to set PBGC policy or conduct formal oversight. Under the GCCA, PBGC is a wholly owned government corporation—that is, the government holds all its assets and liabilities. However, the United States is not liable for any obligation or liability incurred by the corporation. (See app. II for a list of selected government corporations). According to public administration experts, a government corporation is appropriate for the administration of governmental programs that are predominantly of a business nature, produce revenue and potentially are self-sustaining, involve a large number of business-type transactions with the public, and require greater budget flexibility than a government department or agency. Under ERISA, PBGC is also empowered to sue and be sued; appoint and fix the compensation of officers, employees, attorneys, and agents; and utilize the personnel and facilities of any other agency or department of the U.S. government with or without reimbursement (with its head's consent). Figure 1 illustrates some of the differences among traditional government departments/agencies, government corporations, government-sponsored enterprises (GSEs), and private corporations. With the financial collapse of several large corporations in recent years and the passage of the Sarbanes-Oxley Act of 2002, which outlined a framework for more effective corporate governance, many private sector companies have reassessed their corporate governance practices.
Although the Sarbanes-Oxley Act is intended to strengthen the corporate governance of private sector entities, certain corporate governance elements from it may also be relevant to government corporations and government-sponsored enterprises. For example, corporate governance practices suggest that corporations headed by boards of directors should have people in place with the appropriate qualifications, independence, and resources to conduct their responsibilities effectively. (See table 1 for examples of corporate governance practices.) Additional information on corporate governance practices is included in appendix III. PBGC's financial outlook has improved since 2004, when it reported an accumulated deficit of $23 billion, but PBGC still projects large deficits for its single-employer program. Despite PPA's provisions to strengthen defined benefit plan funding, PBGC reported an accumulated deficit of $18.1 billion as of September 30, 2006. While PBGC currently has assets exceeding $60 billion, sufficient to meet its responsibilities in the coming years, the single-employer program has had an accumulated deficit for much of its existence—the value of its program assets is less than the present value of benefits and other obligations (see fig. 2). PBGC's board has limited time and resources to provide policy direction and oversight and has not established procedures and mechanisms to monitor PBGC operations. Although PBGC's board members have met more frequently since 2003, the three cabinet secretaries composing the board have numerous other responsibilities and have been unable to dedicate consistent and comprehensive attention to PBGC. In fact, we found that between 1980 and May 2007, a span of 27 years, there were only 18 board meetings, 10 of which have occurred since 2003. The three-member board is also not large enough to ensure that it includes the diverse skills, such as expertise in strategic risk assessment and management, needed to direct and oversee PBGC. Since the board has limited time to direct and oversee PBGC, the members have designated officials within their respective agencies to conduct much of the work on their behalf. However, these officials also have limited resources to dedicate to PBGC. Further, the board has not established important mechanisms, such as the use of standing committees, to monitor and review PBGC operations and programs. Instead, the board mostly relies on the Inspector General and PBGC's management oversight committees to ensure that PBGC is operating effectively. However, there are no formal protocols requiring the Inspector General to routinely meet with the board or its representatives and staff, and PBGC's management committees are neither independent of PBGC nor required to routinely report all matters to the board. As a result, the effectiveness of the board's oversight may be limited, because it cannot be certain that it is receiving high-quality and timely information about all significant matters facing the corporation, even though PBGC officials report that they informally communicate with the board representatives weekly. PBGC's board members have numerous other responsibilities in their roles as cabinet secretaries and have been unable to dedicate consistent and comprehensive attention to PBGC. ERISA charges the PBGC board with directing and overseeing PBGC management in several ways. The board is required to approve final decisions on policy matters that could affect many American employers and their workers.
The board is also responsible for reviewing and approving PBGC's budget, monitoring financial performance, approving the corporation's strategic plan, and evaluating the effectiveness of its managers, among other responsibilities. Beyond their roles as heads of executive agencies and members of PBGC's board, two of the cabinet secretaries also sit on other boards. For example, the Secretary of the Treasury serves on the boards of the Millennium Challenge Corporation and the Community Development Financial Institutions Fund and is a managing trustee of the Social Security and Medicare trust funds. The Secretary of Commerce is on the board of the Export-Import Bank of the United States. The Secretary of Labor is also a trustee of the Social Security and Medicare trust funds. According to some corporate governance guidelines, boards should have no fewer than 5 members and no more than 15. With only 3 members, PBGC's board may not be large enough to include the knowledge needed to direct and oversee PBGC, such as expertise in accounting, management, or strategic risk assessment. According to corporate governance guidelines, the board of directors should be large enough to provide the necessary skill sets, but also small enough to promote cohesion, flexibility, and effective participation. We did not identify any other government corporation with a board as small as PBGC's. Government corporations' boards averaged about 7 members, with one having as many as 15. For example, the Overseas Private Investment Corporation's board of directors consists of 15 members—8 from the private sector and 7 from the federal government, as shown in table 2. PBGC's board structure does not guarantee that the board represents a diverse set of interests and contains areas of expertise particular to PBGC. According to corporate governance guidelines published by The Conference Board, corporate boards should be structured so that the composition and skill set of a board is linked to the corporation's particular challenges and strategic vision, and should include a mix of knowledge and expertise targeted to the needs of the corporation. Boards of directors should include certain expertise in accounting and finance, strategic risk assessment, management, and industry knowledge, among other factors. PBGC's board members represent the interests of three government agencies—DOL and the Treasury share responsibility for ERISA, and Commerce represents the interests of business and economic sectors. While having these interests represented on PBGC's board is important and the members can draw on the expertise within their respective agencies, PBGC's governance structure does not necessarily guarantee that board members will have the range of diverse expertise needed to address PBGC's policy and oversight responsibilities, because the current structure consists only of members who serve by virtue of their positions in the federal government. Our review of other governance structures found that many government corporations' boards of directors consist of a variety of individuals reflecting a mix of knowledge, perspectives, and political affiliations. For instance, the FDIC board includes a full-time Chairman as well as the directors of the Office of the Comptroller of the Currency and the Office of Thrift Supervision, and two other directors with specific banking expertise, such as state bank supervision.
In addition, because PBGC’s board is composed of cabinet secretaries, PBGC’s board members typically change with each administration, limiting the board’s institutional knowledge of the corporation. Other government corporations have integrated staggered term limits to avoid such gaps. For example, OPIC’s directors may be appointed for a term of no more than 3 years, and the terms of no more than 3 of the 15 directors can expire in any 1 year. Since PBGC’s inception, the board has met infrequently. While corporate governance guidelines do not specify either frequency or duration of board meetings, the literature states that the appropriate number of hours to be spent by a director on his or her duties and the frequency and length of the meetings depend largely on the complexity of the corporation and its operations. Longer meetings may permit directors to explore key issues in more depth, whereas shorter but more frequent meetings may help the directors stay up to date on emerging corporate trends and business and regulatory developments. However, as shown in figure 3, PBGC has only recently begun to meet regularly. In 2003, after several high-profile pension plan terminations and with the urging of PBGC’s Executive Director, PBGC’s board agreed to begin meeting twice a year to discuss PBGC matters. As a result, between July 2003 and May 2007, the PBGC board met 10 times. PBGC officials told us that it is a challenge to find a time when all three cabinet secretaries are able to meet. As a result, the board members’ representatives officially met in their place 3 of the 10 times. Government corporations’ boards vary in the number of times they meet, but our review found that on average many government corporations meet about 5 times per year, with some meeting more often. For example, we found that the Export-Import Bank of the United States’ board generally met more than twice a month between 2004 and 2006. While the PBGC board is now meeting twice a year, it appears that very little time is spent on addressing PBGC’s strategic and operational issues. According to corporate governance guidelines, boards should meet regularly and focus principally on broader issues, such as corporate philosophy and mission, broad policy, strategic management, oversight and monitoring of management, and company performance against business plans. However, our review of the board’s recorded minutes found that although some meetings devoted a portion of time to certain strategic and operational issues, such as investment policy, the financial status of PBGC’s insurance programs, and outside audit reviews, the board meetings generally only lasted about an hour. Since the board members have limited time to direct and oversee PBGC, they have designated officials and staff within their respective agencies to conduct much of the work on their behalf. These officials are referred to as board representatives and act as liaisons between their cabinet secretaries and PBGC. They hold the rank of assistant secretary or above. Yet PBGC’s board representatives have no policy-making authority under ERISA. Under PBGC’s bylaws, however, a representative may represent a board member at a board meeting, and take action on behalf of the board member if the board member ratifies the representative’s actions in writing within a reasonable time. 
PBGC officials told us that the board representatives meet regularly—several times a year—and generally provide staff with broad policy direction and oversight on behalf of the cabinet secretaries. They also receive briefings on emerging issues and matters requiring the board's attention. However, we found limited documentation of such meetings. In fact, we were informed that no formal minutes were kept of these meetings, and the only documentary evidence we found of the board representatives meeting was from occasions when they represented their respective board members at select board meetings. Each representative has a dedicated staff person whose assignments include working on PBGC matters. Although the board representatives can draw on the expertise of other staff within their respective agencies as needed, these staff persons have other job responsibilities, which could limit the amount of time they can dedicate to PBGC. Consequently, limited time and attention may be dedicated to PBGC matters. Neither the board nor PBGC has developed formal procedures to ensure information is elevated to the board on all pertinent policy matters. Further, likely because of its small size, the board has not established standing oversight committees. As a result, the board may be unaware of significant PBGC management actions. According to corporate governance guidelines, corporate boards should have mechanisms to monitor and review operations, assess progress against performance measures, and manage risks to the institution, and boards should operate using committees to assist them. The board has not established formal policies and procedures describing the types of policy matters that should be raised to the board's attention. Rather, the board relies mostly on PBGC's management to inform the board of pending issues when management believes it appropriate, which is done through weekly communications to the board representatives. While officials believe that this process has generally worked well, in some cases board members have not received information in a timely manner. For example, in 2005, PBGC's Inspector General found that the board members and their representatives were not told of certain actions taken by PBGC's management regarding a large bankruptcy settlement until after the case had been settled. In response, PBGC drafted a protocol to govern communications with the board representatives about potential settlements. At this writing, the board is also revising PBGC's bylaws, which establish board governing procedures. However, since there are no formal policies and procedures describing what other policy matters should be elevated to the board's attention, the board may be unaware of other significant actions of PBGC's management. The board has not established standing committees, such as audit and ethics committees, to perform certain oversight and monitoring functions. A committee structure permits the board to address key areas in more depth than may be possible in a full board meeting. In prior years, the board established certain committees—staffed with individuals from PBGC's Advisory Committee—to probe specific issues. However, the board has not used this approach since the early 1990s. Instead, the board has generally relied on PBGC's Inspector General and its executive management to provide oversight of PBGC's operations.
As of May 2007, PBGC's Inspector General reports directly to the board and conducts reviews of PBGC's operations and financial condition and monitors PBGC's contractors. Even though the current board has required the Inspector General to brief it at its now semiannual meetings, there are no formal protocols describing the Inspector General's interaction with the board or its representatives and staff. Consequently, if the Inspector General or the board were to change, it is unclear whether a new Inspector General or board would be aware of, or continue, this informal arrangement. Further, the board relies on PBGC's executive management committees and working groups for monitoring and reviewing PBGC's operations. However, these committees and working groups are not independent of PBGC's management and are not required to routinely report to the board. Some government corporations, such as FDIC, the Export-Import Bank of the United States, the Overseas Private Investment Corporation, and the National Railroad Passenger Corporation (Amtrak), have established standing committees to conduct certain oversight functions to assist their boards of directors. For example, FDIC's board of directors established standing committees, such as the Case Review Committee and the Audit Committee, to conduct certain oversight functions. FDIC's committees are governed by formal rules that cover areas such as membership, functions and duties, and, in some cases, submission of activity reports to the board. While ERISA provides the board, the Secretary of Labor as Chair, and PBGC's Director with the authority to oversee and administer PBGC, no formal guidelines articulate the different roles and responsibilities of the board and PBGC management. ERISA established PBGC "within the Department of Labor" and provided the Secretary of Labor administrative authority over the corporation. As a consequence, the Secretary has been responsible for overseeing PBGC's operations, including overall supervision of PBGC's personnel, organization, and budget practices. Accordingly, DOL officials consider PBGC to be a DOL agency and have required the corporation to follow its policies and procedures. However, under its authorities, PBGC has also developed its own policies, procedures, directives, and systems separate from DOL, and it does not rely on DOL-wide services, such as legal, procurement, and information technology. As a result, DOL and PBGC disagree over the extent to which PBGC is a separate and distinct executive agency. A November 2005 PBGC memorandum stated that Congress' intention in placing PBGC within DOL was to give PBGC a physical location, not to place it within DOL in an organizational or operational sense. Some PBGC managers now view the language as an anachronism. One former PBGC Executive Director also noted that PBGC could not be just like any other DOL agency, because if it were, the Secretaries of the Treasury and Commerce, by a two-vote majority, could theoretically direct policies of another federal cabinet department. Further, federal agencies, including DOL, recognize PBGC's separateness either directly or indirectly through various types of reporting requirements that are required of PBGC and the board. For example, PBGC is responsible for representing itself in matters before other agencies, such as the Equal Employment Opportunity Commission, the Federal Labor Relations Authority, and the Merit Systems Protection Board.
In another instance, DOL's Office of the Solicitor stated in a March 2007 letter to the Department of Justice that while PBGC was "within the Department of Labor," the two agencies have historically operated with separate administrative structures and should be considered separate for matters relating to postemployment ethics. The uncertainty of PBGC's status has resulted in confusion over the extent to which DOL has the authority to manage PBGC's operations. According to our internal control standards, agencies should ensure that key areas of authority and responsibilities are defined and communicated. However, neither the board, DOL, nor PBGC has developed formal policies and procedures to define its authorities and responsibilities. Instead, PBGC officials typically react to DOL's periodic written and oral communications, which PBGC officials said sometimes become a part of PBGC's operational framework. DOL and PBGC provided us with memorandums and e-mail correspondence outlining some of these administrative requirements. The following are examples of the confusion and disagreement resulting from the uncertainty related to PBGC's status:
In December 2006, the Office of the Secretary of Labor, without consulting with officials from the Departments of the Treasury or Commerce, orally directed PBGC to obtain DOL's clearance before it could advertise for or select individuals to fill three vacant executive management positions, even though DOL had not required this in prior years. According to PBGC officials, this resulted in hiring delays. DOL officials stated that this requirement was needed to oversee PBGC's hiring activities only while PBGC has an interim director.
A May 2006 DOL memorandum to all its agency heads, including PBGC, provided guidance for the preparation and submission of information technology investments in DOL's fiscal year 2008 budget request. However, because OMB considers PBGC's information technology program independent from DOL's, there has been confusion not only between DOL and PBGC officials, but also among DOL officials, over the role that DOL's Chief Information Officer has in PBGC's information technology program and whether DOL's guidance is applicable to PBGC on this issue.
DOL and PBGC have also disagreed on management approaches to PBGC's operations. For example:
During the fiscal year 2007 budget process, DOL and PBGC officials disagreed over the amount of money included in PBGC's budget for the development of a new system for pension plan sponsors to file their required annual reports to DOL electronically. Although PBGC benefits from these annual reports, PBGC's Inspector General reviewed PBGC's fiscal year 2007 budget request, which included $7 million to cover these costs, and concluded after investigating that the requested increase was disproportionate to PBGC's usage of the annual reports. However, DOL officials disagreed with the Inspector General's findings and said that the Inspector General's methodology for determining the percentage usage was flawed. Further, the board representatives from the Departments of the Treasury and Commerce were unaware of DOL and PBGC's actions until they were brought to their attention by PBGC's Inspector General. In May 2007, Congress enacted the direction to transfer funds, and PBGC is providing $7 million to DOL as part of a fiscal year 2007 supplemental appropriation.
In January 2007, DOL officials orally directed PBGC to have no direct contact with OMB without DOL's approval, a condition that PBGC officials believe has strained the relationship between the DOL and PBGC budget offices. In previous years, PBGC's budget office worked directly with OMB examiners to resolve matters related to its annual budget submissions, even though PBGC submitted its budget to OMB through DOL's Office of the Assistant Secretary for Administration and Management. DOL now closely monitors PBGC's interactions with OMB by attending meetings and participating in telephone calls. DOL officials said that such action is needed to coordinate with PBGC in order to provide OMB examiners with a consistent message. OMB officials said that DOL's review of PBGC's budget submission was useful. DOL and PBGC officials have also disagreed over PBGC's authority to explore and establish an independent compensation system for its employees. In the early 1990s, PBGC officials requested approval from DOL to establish a new compensation system (outside of the federal government's "general schedule" pay system and merit pay), arguing that PBGC employees should be exempt from these pay systems because their compensation was not wholly from appropriated funds. In a 1992 memorandum, DOL cited the absence of an explicit exception for PBGC employees, the legislative history of ERISA, and prior rulings by the Federal Labor Relations Authority and the United States Court of Appeals for the District of Columbia Circuit to argue against such an exemption. Consequently, some PBGC officials believe PBGC is limited in attracting and retaining the types of expert financial and actuarial staff it needs. As PBGC continues to navigate the challenges presented by the changing defined benefit pension environment, ensuring that the corporation is soundly governed and efficiently managed is essential to the thousands of Americans who rely on PBGC for their retirement income. Since 1974, the private sector pension industry has evolved and corporate governance models have changed. Yet PBGC is still directed and overseen by one of the smallest and least diverse boards of directors, even though it is financially one of the largest corporations within the federal government. While the current board members recognize PBGC's importance and are meeting more frequently than before, the limited amount of time they can dedicate to PBGC is troubling. In fact, if PBGC's board of directors were held to private sector standards, the corporation could be considered vulnerable to mismanagement. Because the Secretaries change with each administration, the board may also have limited institutional knowledge. This could weaken PBGC's governance further, since the ever-changing board membership may not understand the corporation's business or the vulnerabilities it faces. Even though each agency has a variety of staff who may be able to fill the gaps in institutional knowledge, each board agency has assigned only a board representative and one staff person, both of whom have other job responsibilities that may limit the time and attention they can give to PBGC. As a result, oversight of this $60 billion corporation that provides pension benefits to more than half a million participants in terminated pension plans may be limited. Because the Secretary of Labor has historically had the authority to administer PBGC, DOL has, in some ways, filled the void in accountability.
However, the confusion resulting from the lack of clarity over who is responsible for certain matters has raised additional questions about the extent to which DOL should be involved in directing PBGC's activities. Board representatives from the Departments of Commerce and the Treasury have often deferred to DOL on administrative matters and have generally not questioned its actions. Perhaps some aspects of the relationship between DOL and PBGC could be clarified in the revised bylaws currently being prepared, but it remains essential that the board exercise its authority to oversee PBGC and that the members coordinate with DOL and one another not only on major policy issues but also on the oversight of PBGC's activities. PBGC's management staff should also work with the board to ensure that all significant matters are formally elevated to the board's attention. This will become even more critical in the coming months as the new Senate-confirmed Director begins to work with the board to clarify the Director's role in administering PBGC. To strengthen PBGC's policy direction and oversight, Congress should consider expanding PBGC's board of directors. If Congress decides to expand the board, it would be helpful to appoint additional members of diverse backgrounds who possess knowledge and expertise useful to PBGC's responsibilities and who can provide the attention that would be needed. This revised board structure could resemble those at other government corporations, such as the Federal Deposit Insurance Corporation, the Export-Import Bank of the United States, or the Federal Crop Insurance Corporation. Further, it may be warranted to dedicate staff, independent of PBGC's executive management and with relevant pension and financial expertise, solely to supporting the revised board's policy and oversight activities. To improve overall accountability and oversight of PBGC, we recommend that the Secretaries of the Treasury, Labor, and Commerce, as PBGC's board of directors, (1) establish policies, procedures, and mechanisms for providing oversight of PBGC that are consistent with corporate governance guidelines and (2) establish formal guidelines that articulate the authorities of the Board Chair and the Department of Labor, the other board members and their respective departments, and PBGC's Director. We obtained written comments on a draft of this report from the Secretary of Labor, on behalf of the PBGC board of directors, and from the interim director of PBGC. Their comments are reproduced in appendixes IV and V, respectively. In addition, the Departments of the Treasury, Labor, and Commerce, as well as PBGC, provided technical comments, which were incorporated in the report where appropriate. In response to our draft report, the PBGC board of directors recognized that the current law establishes an unusual corporate structure for PBGC and stated that a number of corporate structures are possible for addressing PBGC's unique purpose and authority under the law. The board members added that if Congress considers making changes to PBGC's corporate structure, they would be pleased to discuss the merits of various corporate governance proposals. Further, the board reiterated its continued commitment to improving the corporate governance of PBGC within the current statutory structure and stated that, in addition to the board members meeting regularly, the board representatives and their staffs of resident experts in pension and financial matters meet frequently throughout the year to address PBGC matters.
The board also stated that the review and revision of PBGC's bylaws will help delineate the respective roles, responsibilities, and authorities of PBGC's board and Director in the management of PBGC. The PBGC interim director stated that PBGC management is committed to working with the board to enhance PBGC's governance processes on issues identified in our review. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretaries of the Treasury, Labor, and Commerce as well as the Director of PBGC and other interested parties. We will also make copies available to others on request. If you or your staff have any questions concerning this report, please contact me on (202) 512-7215. Key contributors are listed in appendix VI. To address Pension Benefit Guaranty Corporation's (PBGC) governance structure, we interviewed board representatives, board agency officials, former and current PBGC Executive Directors, former PBGC General Counsels, senior PBGC management officials, officials from the Office of Management and Budget, and outside experts to obtain their perspectives on the board's governance structure and its effect on management and operations. To encourage open communication, we met with many officials separately, and in all cases, subordinate employees were interviewed separately from their managers. Additionally, we spoke to PBGC's Inspector General as well as PBGC's union representatives. We were unable to attend a PBGC board meeting to observe what types of issues the board members discussed during their semiannual meetings, because the PBGC board does not open its meetings to the public or others. To identify the extent to which PBGC's governance structure provides policy direction and oversight, we reviewed previous GAO work on the governance of private sector and government corporations and on PBGC's single-employer and multiemployer insurance programs and management challenges. We also identified key provisions of the Employee Retirement Income Security Act of 1974 (ERISA), the Pension Protection Act of 2006 (PPA), and the Government Corporation Control Act (GCCA) that outline the authority of PBGC's board of directors as well as the administrative responsibilities of PBGC's Director. Further, our review examined the governance structures of similar federal government corporations listed in the GCCA to determine the extent to which they had similar sizes, compositions, activities, policy mandates, and oversight functions. In addition, we reviewed our reports and other available literature, such as The Conference Board's Corporate Governance Handbook 2005, on the characteristics of private sector boards of directors to identify common practices. We also consulted our standards for internal control in the federal government to determine how delegations of authority affect an agency's internal control environment. To understand the board of directors' role, we reviewed documentation related to the board members' activities. We collected and reviewed available board meeting minutes from 2000 to 2006 to identify what types of actions the board members had considered and taken. In addition, we requested documentation on board representative meetings; however, we were told that no formal documentation existed.
Also, we reviewed board meeting information dating back to 1974, including summations of board resolutions. We also collected and reviewed memorandums from PBGC officials and other information concerning previous efforts by PBGC staff to evaluate the issue of PBGC's governance structure. To assess how PBGC's governance structure affects its ability to conduct efficient operations, we identified and reviewed key legal interpretations of ERISA, PPA, and corresponding regulations that outline the relationship between PBGC's board of directors, the Secretary of Labor as Board Chair, and PBGC's Director. We reviewed available policies and procedures regarding PBGC's interaction with the board members' agencies, collecting documentation from both PBGC and DOL. Given the Secretary of Labor's role as Board Chair, we reviewed available documentation on DOL and PBGC protocols to determine the extent to which guidance existed on how they should interact on specific administrative activities.
Appendix II: List of Selected Federal Government Corporations
Commodity Credit Corporation (CCC): Created to stabilize, support, and protect farm income and prices. CCC also helps maintain balanced and adequate supplies of agricultural commodities and aids in their orderly distribution.
Export-Import Bank of the United States: Assists in financing the export of goods and services between the United States and international markets. The Export-Import Bank of the United States is the official export credit agency of the United States.
Federal Crop Insurance Corporation: Improves the economic stability of agriculture through a sound system of crop insurance and provides the means for the research and experience helpful in devising and establishing such insurance.
Federal Deposit Insurance Corporation (FDIC): Preserves and promotes public confidence in the U.S. financial system by insuring deposits in banks and thrift institutions for up to $100,000 per depositor; by identifying, monitoring, and addressing risks to the deposit insurance funds; and by limiting the effect on the economy and the financial system when a bank or thrift institution fails.
Federal Financing Bank: Established to centralize and reduce the cost of federal borrowing, as well as federally assisted borrowing from the public.
Federal Prison Industries, Inc.: Established to employ and provide skills training to the greatest practicable number of inmates confined within the Federal Bureau of Prisons and produce goods for sale to the federal government.
Financing Corporation (FICO): Serves as a financing vehicle for the Federal Savings and Loan Insurance Corporation (FSLIC) Resolution Fund (formerly the Federal Savings and Loan Insurance Corporation) by issuing debentures, bonds, and other obligations.
Government National Mortgage Association (Ginnie Mae): A corporation that guarantees, with the full faith and credit of the U.S. government, full and timely payment of all monthly principal and interest payments on the mortgage-backed securities of registered holders.
National Railroad Passenger Corporation (AMTRAK): Provides passenger train service in the United States.
Overseas Private Investment Corporation (OPIC): Helps U.S. businesses invest overseas, fosters economic development in new and emerging markets, assists the private sector in managing risks associated with foreign direct investment, and supports U.S. foreign policy.
Pension Benefit Guaranty Corporation (PBGC): Established to encourage the continuation and maintenance of private sector defined benefit pension plans, provide timely and uninterrupted payment of pension benefits, and keep pension insurance premiums at a minimum.
Presidio Trust: Established to protect, preserve, and enhance the Presidio as a resource for the American public and as a national historic landmark.
Resolution Funding Corporation: Established by Congress to raise funds for the activities of the Resolution Trust Corporation.
Rural Telephone Bank: Established in 1971 to obtain supplemental funds for use in making loans to eligible telecommunications companies and cooperatives.
Saint Lawrence Seaway Development Corporation: Established to construct deep water navigation works in the Saint Lawrence Seaway.
Tennessee Valley Authority (TVA): Created in May 1933 to provide navigation, flood control, electricity generation, fertilizer manufacturing, and economic development in the Tennessee Valley.
United States Postal Service: Established to provide postal service to the United States.
Valles Caldera Trust: Created to manage, provide administrative services, collect funds, and coordinate with federal and state governments on behalf of the Valles Caldera National Preserve.
In February 2005, the President's fiscal year 2006 budget proposed the dissolution of the Rural Telephone Bank. After 6 months of discussion and deliberation, the board of directors unanimously approved resolutions to liquidate and dissolve the bank. On November 10, 2005, the liquidation and dissolution process was initiated with the enactment of the 2006 agriculture appropriations bill.
Appendix III: Examples of Corporate Governance Practices
In carrying out their duties, directors should fulfill their fiduciary duties of care, loyalty, and good faith, and act in the best interests of the corporation and its shareholders. Boards usually delegate the day-to-day management of the company to the chief executive officer (CEO) and other senior management, but the board retains responsibilities for oversight and monitoring of these delegated functions. A director's actions must fulfill three fiduciary duties: the duty of care to make decisions that are informed, the duty of loyalty to act without conflict and always to put the interests of the corporation before those of the individual director, and the duty to act in good faith in accordance with evolving corporate governance best practices. A strong and effective board should have a clear view of its role in relationship to management. How a board organizes itself and structures its processes will vary with the nature of the business, business strategy, size and maturity of the company, and talents and personalities of the chief executive officer and the board. The board should focus principally on guidance and strategic issues, choice of the CEO and other senior management, oversight and monitoring of management and company performance, and adherence to legal requirements. The board should have a set of written guidelines in place to articulate corporate governance principles and the roles and responsibilities of the board and management. These guidelines should be reviewed at least annually and help the board and individual directors understand their obligations and the general boundaries within which they will operate. A well-constructed set of governance guidelines will, in part, delineate responsibilities of the board, management, directors, and committees; be reviewed regularly, at least annually, and revised as appropriate; and be made publicly available. Guidelines should also include information on director orientation and continuing education. Such orientation should entail a thorough briefing on the company and its businesses and industries, organizations, people, strategies, key issues, and risks.
Further, guidelines should include continuing education requirements for board members, which can be fulfilled through the use of subject matter experts or through membership in professional organizations that offer training courses and publish information pertaining to the industry's operating environment. The effectiveness of the board depends on the quality and timeliness of the information each director receives. The board and management should agree on the important information needed for board oversight and monitoring and to enable the board to make informed decisions. Directors should have access to management and, as necessary and appropriate, to independent advisors. For purposes of having information that is timely and relevant, boards need to have both formal and informal channels of communication with the appropriate officers and other individuals within the company that enable directors to perform their oversight functions. Boards should consider the following best practices to generally ensure effective decision making and exchange of information and ideas: Directors should be able to place items on the agenda, with time for adequate discussion and consideration. The lead director should take responsibility for surfacing issues that affect the business. Management should provide information that effectively explains the corporation's operating and financial status, as well as other significant issues facing the corporation and the board. Meetings should be structured to encourage participation and dialogue among the directors. Directors should attempt to attend all board meetings and actively participate in them, including asking hard questions of management. Meetings should promote open dialogue among the members and the free exchange of ideas, perspectives, and information; have a 'feedback' mechanism to the CEO for important issues that may arise; and be supplemented by additional off-line informational channels to help build trust and relationships among the directors. The composition and skill set of the board should be linked to the company's particular challenges and strategic vision. As companies develop and experience changed circumstances, the desired composition of the board may be different and should be reviewed. Regardless of their mix of backgrounds and skills, all directors should possess the knowledge and expertise to fulfill an appropriate role given the board's mix of skills; exercise diligence, including attending board and committee meetings and coming prepared to provide thoughtful input at the meetings and during communications between meetings; and be independent in their judgment and committed to the long-term interests of the company. The composition of the board should be tailored to meet the needs of the company at its stage of development, but there should be a mix of director knowledge and expertise in areas such as accounting and finance, strategic risk assessment, management, and industry knowledge. The size of the board will vary depending on the corporation's needs and requirements. Boards need to be large enough to accommodate the necessary skill sets, but small enough to promote cohesion, flexibility, and effective participation. According to a private sector research center, in 2004, the median private sector board size ranged from 11 to 15 total members, with the number of outside directors ranging from 8 to 9. Boards should adopt a structure that provides the nonmanagement directors with the leadership necessary for them to act independently as well as function effectively.
This structure could include separating the positions of chairman and CEO, creating a lead independent director, or appointing a presiding director from among the independent directors. Any structural alternative a private sector board wishes to adopt should strengthen the independence and oversight role of the board; provide the nonmanagement directors with the ultimate authority over information flow to the board; and improve the relationship and flow of information between the board, CEO, and senior management. Boards should establish committees that will enhance the overall effectiveness of the board by ensuring focus and oversight on matters of particular concern. Committees can enhance board effectiveness by permitting closer focus, oversight, and monitoring of sensitive areas. In the private sector, certain statutes and standards require that companies maintain a number of standing committees, such as an audit committee, a nominating committee, and a compensation committee. In addition, boards have established committees, such as risk, technology, pension and benefits, public policy, and corporate governance, that focus on substantive issues of particular concern to the company or the board. Audit committee: Is responsible for the appointment, compensation, and oversight of the work of any registered public accounting firm employed by that issuer; and must be composed entirely of independent directors, meaning that a director may not, other than in his or her capacity as a member of the audit committee, the board of directors, or any other board committee, accept any consulting, advisory, or other compensatory fee from the issuer or be an affiliated person of the issuer or its subsidiary. Governance committee: Is designed for the purpose of monitoring and implementing the governance structure of the corporation. This committee of independent directors is charged, in part, with ensuring that the board is informed of new and emerging governance practices being employed. Boards must play an active role in the area of internal controls by ensuring that the company has an effective internal control framework in place. This should include the assessment and management of key financial and nonfinancial risks and an effective monitoring and oversight process, supported by timely and accurate information and clear communication channels. Internal controls are processes designed to provide reasonable assurance that an organization is achieving its objectives by helping to ensure it is not overly exposed to risk, improve the reliability of internal and external reporting, promote compliance with applicable laws and regulations, and improve the effectiveness and efficiency of operations. The following team members made key contributions to this report: Blake Ainsworth, Assistant Director; Jason Holsclaw; Joe Applebaum; Kisha Clark; Monika Gomez; Jean McSween; Charles Willson; and Craig Winslow. PBGC's Legal Support: Improvement Needed to Eliminate Confusion and Ensure Provision of Consistent Advice. GAO-07-757R. Washington, D.C.: May 18, 2007. Federal Deposit Insurance Corporation: Human Capital and Risk Assessment Programs Appear Sound, but Evaluations of Their Effectiveness Should Be Improved. GAO-07-255. Washington, D.C.: February 2007. High Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007.
| The Pension Benefit Guaranty Corporation (PBGC) insures the pensions of millions of private sector workers and retirees in certain employer-sponsored pension plans. It is governed by a board of directors consisting of the Secretaries of the Treasury, Labor, and Commerce, who are charged with providing PBGC with policy direction and oversight. This report assesses (1) the extent to which PBGC's governance structure provides PBGC with policy direction and oversight, and (2) whether administrative responsibilities among the PBGC board, Department of Labor (DOL), and PBGC management are clearly defined. We examined corporate governance practices and select federal government corporations, and we reviewed documents on PBGC's structure. We interviewed officials from all board member agencies and PBGC, among others. Although PBGC's board has provided greater attention to PBGC since 2003, the board has limited time and resources to provide policy direction and oversight and has not established comprehensive written procedures and mechanisms to monitor PBGC operations. Because PBGC's board is composed of three cabinet secretaries, who have numerous other responsibilities, the board structure does not guarantee that PBGC's board is active and diverse. For example, since 1980, a span of 27 years, there were only 18 official board meetings. Further, the board has not established formal procedures to ensure that PBGC management provides it information on all policy matters, nor has it developed standing committees to oversee operations. Instead, the board relies on PBGC's Inspector General and management's oversight committees to ensure that PBGC is operating effectively. However, there are no formal protocols concerning the Inspector General's interaction with the board, and PBGC's internal management committees are not independent and are not required to routinely report all matters to the board.
Even though PBGC uses informal channels of communication to inform its board members, the board's oversight may be limited because it cannot be certain that it is receiving high-quality and timely information about all significant matters facing the corporation. PBGC's lack of formal guidelines to articulate the administrative roles and responsibilities among the board, the Secretary of Labor as the board chair, board members' agencies, and the PBGC Director has led, at times, to confusion and inefficiencies. The board has not addressed uncertainty over the extent to which PBGC is a separate and distinct executive agency, a fact that has resulted in confusion over when DOL has the authority to manage PBGC's operations. Further, neither DOL nor PBGC has developed formal policies and procedures to define the board's authorities and responsibilities. Instead, PBGC officials typically react to DOL's periodic written and oral communications, which PBGC officials said sometimes become a part of PBGC's operational framework. For example, PBGC is required to incorporate its budget request with DOL's budget request, and over the years, DOL has taken a more active role in reviewing PBGC's budget. However, PBGC officials believe that DOL has in some cases overstepped its role. For instance, DOL and PBGC officials disagreed over the inclusion of a funding request in PBGC's fiscal year 2007 budget. |
Veterans submit their disability compensation claims to 1 of VBA's 57 regional offices. These claims contain, on average, five disabling medical conditions that the veteran believes are service connected. For each claimed condition, VA must determine if credible evidence is available to support the veteran's contention of service connection. VA grants service connection for an average of three of the five conditions claimed by a veteran. Key sources of evidence for determining service connection are veterans' military service medical and personnel records. To determine service connection in some cases, VA also may need to obtain information from DOD historical military records for the units in which veterans served. VBA's regional offices face a complex task in obtaining veterans' military service records because (1) service records consist of numerous types of records that can originate from numerous sources within or outside DOD, (2) the process for collecting and storing service records has varied substantially for different groups of veterans over time, (3) service records cannot always be found at the expected storage locations, and (4) the service records of many veterans were destroyed by a fire in 1973 at the National Personnel Records Center, a primary repository for service personnel and medical records. For detailed information on military service records, including the types and locations of the records and the process for collecting and storing them, see appendix II. Once a claim has all the necessary evidence, the regional office evaluates the claim and determines whether the claimant is eligible for benefits. If a veteran disagrees with a regional office's decision on any of the issues in his or her claim, the veteran may file an appeal with the Board of Veterans' Appeals, requesting a more favorable decision. In many cases, the board finds it cannot make a final decision on a veteran's appeal until VBA does additional work on the case. In such cases, the board sends (remands) the case back to VBA to perform the necessary additional work. The additional work required for remands can include making initial or follow-up attempts to obtain relevant records in accordance with the requirements of the Veterans Claims Assistance Act. Under the act, if relevant records—such as military service records—are believed to be in the custody of a federal agency, VBA's regional offices must continue requesting the records until either the agency provides the records or the regional office is reasonably certain the records do not exist or that further efforts would be futile. VA's regulations state that the regional office cannot discontinue its efforts unless it has obtained a statement from the agency advising VA that the records either do not exist or are not in the agency's possession. For detailed information on VA's disability compensation claims and appeals process, see appendix III. VA's internal assessments indicate that regional offices generally comply with the requirements of the Veterans Claims Assistance Act for obtaining veterans' military service records. However, VBA does not have a system for assessing the reliability and accuracy of research done on behalf of regional offices by a VBA unit located at the National Personnel Records Center, where the service records of many veterans are stored.
The VBA quality review unit that evaluates the accuracy of regional office decisions on compensation claims has found that less than 4 percent of these decisions contain errors involving regional offices' failing to obtain military service records as required by law. Similarly, of all the compensation appeals cases decided by the Board of Veterans' Appeals during November 2004–January 2006, the board remanded less than 3 percent of these cases to VBA for rework due to deficiencies in obtaining military service records. However, because VBA does not systematically evaluate the quality of the research done on behalf of regional offices by the VBA unit at the National Personnel Records Center, VBA does not know the extent to which the information that this unit provides to regional offices is reliable and accurate. VBA maintains a quality review program known as the Systematic Technical Accuracy Review (STAR) program. VBA selects random samples of each regional office's compensation decisions and assesses the regional office's accuracy in processing and deciding such cases. For each decision, the STAR quality review unit reviews the documentation contained in the regional office's claim file to determine, among other things, whether the regional office complied with the Veterans Claims Assistance Act's duty-to-assist requirements for obtaining relevant records, made correct service connection determinations for each claimed condition, and made correct disability rating evaluations for each condition determined to be service connected. An error in any of these decision elements has the potential to result in a different decision outcome. One of VBA's fiscal year 2007 performance goals is that 88 percent of compensation decisions should contain no errors that could affect decision outcomes, and the long-term strategic goal is 98 percent. STAR data from reviews of regional office decisions made during the first half of fiscal year 2006 showed that less than 4 percent of the cases reviewed contained any type of error related to the law's requirements for developing evidence. Because military service records are only one component in the overall body of evidence that regional offices must develop, the percentage of cases with errors related to military service records would be even smaller than the 4 percent error rate. While the STAR database does not capture statistical data on specific types of errors in evidence development, it does contain quality reviewers' narrative comments on the nature of errors found. A VBA analysis of these narrative comments showed that over half of all evidence development errors were due to regional offices not obtaining VA medical examinations or opinions when needed and using inadequate medical examinations. Thus, on the basis of STAR data, one would conclude that errors related to military service records account for less than half of the 4 percent error rate, or about 2 percent of the cases reviewed. Since November 2004, when the Board of Veterans' Appeals began tracking whether remands are the fault of regional offices, it has remanded relatively few cases—less than 3 percent—because of regional office deficiencies in obtaining military records. For example, as of January 2006, the board had made decisions on 41,517 compensation cases and had remanded at least one issue in 44 percent of these cases (see table 1).
However, of the 41,517 cases, 25.6 percent contained issues that had been remanded for reasons considered to be the fault of the regional office, and only 2.8 percent contained issues remanded specifically because of deficiencies in obtaining military service records. For each case it decides, the appeals board also tracks the outcome of each contested issue—for example, a veteran may have contested the denial of service connection for a specific medical condition and also may have asked for a higher disability rating on another condition for which the regional office granted service connection. The 41,517 compensation cases decided by the board contained a total of 88,156 contested issues, of which 39 percent (34,351) were remanded to VBA. However, of the total contested issues, 23 percent (20,191) were remanded for reasons considered to be the fault of the regional offices. For the 20,191 issues remanded because of regional office deficiencies, the board identified a total of 36,812 reasons for remanding these issues (see table 2). Of these remand reasons, only 7.6 percent were related to inadequacies in obtaining military service records (service medical records, 3.5 percent; service personnel records, 2.4 percent; and military unit historical records, 1.6 percent). The predominant reasons for remands were deficiencies in obtaining medical examinations or opinions and nonmilitary records and in providing proper due process. Focusing only on issues in which veterans asked the appeals board to grant service connection for a medical condition that the regional office had denied, the board identified about 12 percent of the reasons for remanding service connection issues as being related to inadequacies in obtaining military service records. To obtain service records stored at the National Personnel Records Center, regional offices submit requests to a VBA unit located at the center, asking the VBA unit to provide copies of service records and/or provide information contained in the records. This unit responded to such requests from regional offices for about 290,000 cases in calendar year 2005. For certain types of compensation claims, such as herbicide exposure and PTSD claims, VBA's written procedures instruct regional offices not to request a copy of the veteran's entire service personnel record, which can be voluminous. Instead, regional offices are supposed to rely on the VBA unit at the National Personnel Records Center to obtain the veteran's files, perform a physical search of the files for relevant records, provide copies of only certain specified records, analyze certain types of records, and provide regional offices with narrative answers on the results of their research and analyses. Thus, regional offices rely on the VBA unit at the National Personnel Records Center to do thorough and complete searches of records, do reliable analyses of records, and provide accurate and clear narrative reports on the results. VBA, however, does not have a systematic quality review program that evaluates the accuracy of the work that the VBA unit at the National Personnel Records Center performs on behalf of the regional offices. Such a program is needed as part of an adequate system of internal management controls for VBA's administration of the compensation program. An example of why the records research done by VBA employees at the National Personnel Records Center must be reliable is provided by disability claims based on exposure to herbicides in Vietnam.
Under the Agent Orange Act of 1991, VA presumes that any veteran who had set foot on land in the Republic of Vietnam at any time during the Vietnam era (January 9, 1962, to May 7, 1975) was exposed to herbicides such as Agent Orange. If any such veteran files a claim for certain specified diseases that have been determined to be attributable to herbicide exposure, VA must presumptively grant service connection to the veteran for such diseases. If a veteran claims that he or she was officially stationed on land in Vietnam during that period, the VBA unit at the National Personnel Records Center should be able to verify this fact by examining standard personnel forms in his or her service personnel file. However, if a veteran who was not officially stationed on land in Vietnam claims that on some occasion he or she did set foot on land in Vietnam during that period, VBA may encounter more difficulty obtaining the evidence needed to verify the veteran’s claim because standard personnel forms would not document such occasions. In such cases, VBA procedures instruct regional offices not to ask for the veteran’s entire service personnel file, but instead, the regional office must ask the VBA unit at the National Personnel Records Center to search the veteran’s personnel file for any evidence that might corroborate his or her claim of having set foot on land in Vietnam. One regional office that we visited provided an example of how the VBA unit at the National Personnel Records Center could overlook corroborating evidence contained in the file and cause a significant delay of benefits for a veteran. In this particular case, an Air Force veteran claimed that he had been assigned to an aircraft that had landed and spent a short time on the ground in Vietnam during the presumptive period. The VBA unit at the National Personnel Records Center did not provide the regional office with evidence supporting this claim, and the regional office ultimately denied the claim. However, the veteran appealed the decision to the Board of Veterans’ Appeals, which remanded the case to the regional office and ordered the regional office to obtain and review the veteran’s entire personnel file. After obtaining the entire file from the National Personnel Records Center, the regional office found documents in the file that provided sufficient evidence to conclude that the veteran’s claim was credible. If the VBA unit at the National Personnel Records Center had found and reported this evidence to the regional office during the initial claims process, the veteran’s claim could have been granted without his having to go through the appeals process. Also, for many PTSD claims, regional offices potentially must rely on the VBA unit at the National Personnel Records Center to do thorough research of personnel records. PTSD results from personal exposure to traumatic events (stressors) that can occur during combat events; noncombat events—such as plane crashes, ships sinking, explosions, burn ward duty, or graves registration duty—and personal assault. For such claims, if evidence substantiates that a veteran engaged in a combat event, the veteran’s own testimony is sufficient to substantiate the occurrence of a claimed stressor associated with that event. If engagement in combat is not substantiated, then the regional office must seek other evidence substantiating the occurrence of the stressor claimed by the veteran. 
Only for PTSD claims involving personal assault do VBA’s procedures instruct regional offices to request a copy of the entire personnel file from the National Personnel Records Center. Routinely requesting the entire file for personal assault cases is permitted because such cases can involve personal and sensitive incidents that sometimes are not officially reported. Therefore, the entire file needs to be examined for indications of changes in behavior or performance that may have been related to the alleged rape or assault. For all other types of PTSD stressors claimed by veterans, the documents that regional offices may routinely request from the veterans’ service personnel files do not include performance reports or written justifications for awards and commendations. According to regional office officials, however, these documents sometimes can contain evidence that supports a veteran’s PTSD claim. As a result, the regional offices depend on the VBA employees stationed at the National Personnel Records Center to read such documents and report any supporting evidence to the regional office. Officials of VBA’s Records Management Center—which oversees the work of the VBA unit at the National Personnel Records Center—informed us they are considering implementing a systematic program for reviewing the quality of all types of research work performed by this unit. Although a quality review function is already in place, only one analyst has been responsible for reviewing a 3 percent random sample of each employee’s work products. Given the volume of work products and limited time because of other duties, the analyst told us he examined few actual service record files to assess the accuracy of the work done by the employees. Instead, the analyst had resorted to using professional judgment to assess whether the content of the responses that employees provided to regional offices appeared reasonable in light of the nature of the request to which they were responding. Only if the analyst thought the response content looked questionable did he actually obtain the service record files and examine the records to determine the accuracy of the response. For example, the analyst told us that in a recent month he had reviewed actual service record files for only 17 of the approximately 700 responses randomly selected for review. According to officials of the VA Records Management Center, they are considering establishing a team of three or four full-time quality review specialists that would report to the director of the VA Records Management Center. If implemented, this team would review the quality of work done by VBA employees at the National Personnel Records Center and at the VA Records Management Center. The team would continue to randomly select a 3 percent sample of each employee’s completed work products prepared in response to regional office requests. However, unlike the current review, to determine accuracy, the new team would be able to review the actual service record files for all responses selected for review. A quality review specialist position description has been developed, but at the time of our review, implementation milestones for the new system had not been established. VBA potentially could improve its procedures and reduce the time required to process some veterans’ PTSD claims. During fiscal years 1999-2004, the number of veterans receiving compensation benefits because of PTSD increased by about 80 percent, from about 120,000 to almost 216,000. 
To verify the occurrence of claimed stressors, regional offices sometimes cannot find needed evidence in the veteran's personal service records and must turn to information contained in the military historical records of DOD. While regional offices are able to directly access and search an electronic library of such records for many Marine Corps veterans, they must rely on a DOD research organization—the U.S. Army and Joint Services Records Research Center (JSRRC)—to research such records for all other service branches. JSRRC's average response time to regional office requests for such research approaches 1 year; by contrast, VBA's average processing time strategic goal for claims involving disability compensation issues is 125 days. The opportunity may exist for VBA to establish an electronic library of DOD military historical records for the other service branches and greatly reduce the time required to process the PTSD claims of many veterans. According to VBA's procedures, if the regional office verifies that a PTSD claimant engaged in combat or was a prisoner of war, the claimant's own personal testimony is sufficient evidence to verify the occurrence of a stressor associated with the combat or the prisoner-of-war experience. Otherwise, the regional office must obtain other credible evidence to verify the claimed stressor. For Marine Corps veterans from the Vietnam era and the Korean conflict, the regional office can electronically view and search a set of compact discs provided by the Marine Corps University Archives. These discs contain Marine Corps historical records for the Vietnam era (1960-1975) and the Korean conflict. Officials of regional offices we visited estimated that, on average, they can perform these electronic searches of Marine Corps records in less than a day. If the regional office cannot find the needed corroborative evidence on the compact discs, the regional office must ask the Marine Corps University Archives to search its records for any evidence corroborating the veteran's claim, and only if the Marine Corps University Archives cannot find corroboration may the regional office deny the veteran's PTSD claim. By contrast, for veterans of armed service branches other than the Marine Corps, DOD has not created an electronic historical library of records that regional offices can search when the veteran's service medical or personnel records do not provide evidence to verify engagement in combat or to verify the claimed stressor. Instead, VBA's procedures call for regional offices to ask JSRRC to conduct research of military historical records of the units in which veterans served in order to provide the needed corroboration. Many of the records that JSRRC may search are voluminous, are not stored electronically, and must be searched manually (see app. V for information on such records). After conducting its research, JSRRC provides the regional office a summary of its findings but does not evaluate evidence, render opinions, make conclusions, or decide the merits of a claim. According to its Director, the center has 13 full-time-equivalent employees and a steady backlog of about 4,000 cases, of which about 85 percent come from VBA regional offices; the remaining requests are submitted by individual veterans and veterans service organizations.
In our visit to VBA's Oakland regional office, we learned that the regional office recently had begun a local initiative in which it designated three employees who—when other decision-making duties permit—search an electronic library of unclassified historical military records compiled by the Chicago regional office's military records specialist. According to the Chicago regional office's military records specialist, several other regional offices also have been provided this electronic library. The Oakland regional office employees doing this research and the Chicago regional office military records specialist stated that they have been able to find sufficient evidence in the electronic library to grant service connection for a substantial portion of PTSD cases that otherwise would have required that the regional office ask the JSRRC to search for evidence corroborating the veteran's claim. According to these officials, they can complete these searches within a few weeks after being asked to do the search. These regional offices now request searches by JSRRC for PTSD cases only if sufficient evidence cannot be found in the electronic library to grant service connection. The Director of JSRRC told us that such research by regional offices could greatly reduce JSRRC's backlog of research requests and reduce the average response time, assuming JSRRC's staffing level remained constant. A related issue is that some veterans may not be willing to disclose to regional offices certain details needed to process their PTSD claims because the claimed stressful event occurred during classified operations. For example, to alleviate the possibility of such reluctance on the part of hundreds of thousands of veterans who had participated in classified atmospheric atomic testing and possibly been exposed to nuclear radiation, the Secretary of Defense issued a memorandum in 1996 authorizing such veterans to divulge to VA the name and location of their command, duties performed, dates of service, and related information necessary to validate exposure to nuclear radiation. Similarly, in PTSD cases for which regional offices cannot find sufficient evidence in veterans' service records to grant the claims, if the veterans, because of concerns about classified operations, will not provide the regional office with certain minimum details, the regional office will not be able to submit requests to JSRRC to search military historical records for corroborating evidence. We discussed the classified operations issue with the Director of JSRRC, who stated that he personally had talked with veterans who had directly contacted his organization and who maintained they could not divulge to him the details of their participation in classified operations. He said that after he explained to them that the entire JSRRC staff are DOD employees and have appropriate security clearances, the veterans were willing to provide him with the details needed to conduct searches of DOD records, including any pertinent classified records maintained by DOD. While the extent of this problem involving classified operations is unknown, the Director had no objections to regional offices advising veterans to directly contact JSRRC if they are unwilling to disclose sufficient details to the regional office to process their claims because their disabilities allegedly were incurred during classified operations. VA is responsible for providing reasonable assurance that it is complying with applicable laws and regulations.
While VA's internal assessments indicate that its regional offices generally comply with the requirements of the Veterans Claims Assistance Act for obtaining military service records, VA does not have a systematic quality review program for ensuring the reliability and accuracy of records research done on behalf of regional offices by the VBA unit located at the National Personnel Records Center. As a result, VA cannot reasonably ensure the quality of the research on which regional offices rely to assist many veterans in obtaining service records relevant to their compensation claims. PTSD claims have been a growing portion of the claims processed by regional offices. Many present challenges in obtaining the evidence needed to process them, resulting in veterans having to wait for long periods for their claims to be decided. VBA's establishment of a claims-processing timeliness performance goal demonstrates that high-quality service should result not only in correct decisions, but also decisions rendered in a reasonable length of time. The experience of several regional offices suggests that VBA could improve its timeliness in deciding the PTSD claims of many veterans nationwide if VBA systematically utilized an electronic library of historical military records such as the one compiled by the Chicago regional office. The average time for the Joint Services Records Research Center to respond to such requests is about 1 year; by contrast, officials in some regional offices have found that using the electronic library compiled by the Chicago regional office enabled them to find sufficient evidence in a matter of a few weeks to grant the PTSD claims of many veterans. We recommend that the Secretary of the Department of Veterans Affairs direct the Under Secretary for Benefits to take the following actions. To adequately ensure the quality of the records research done on behalf of regional offices by the VBA unit at the National Personnel Records Center, VBA should move forward in implementing a systematic quality review program that evaluates and measures the accuracy of the unit's responses to all types of regional office research requests. To improve its timeliness in deciding PTSD claims, VBA should assess whether it could systematically utilize an electronic library of historical military records, such as the one compiled by the Chicago regional office, to identify veterans whose PTSD claims can be granted on the basis of information contained in such a library, rather than submitting all research requests to the Joint Services Records Research Center. In its written comments on a draft of this report (see app. VI), VA agreed with our findings and concurred with our recommendations. VA stated it had increased the number of VBA quality reviewers at the National Personnel Records Center in order to better ensure the quality of responses provided to regional offices. VA also noted that VBA will determine the feasibility of regional offices' using other databases to research cases in order to reduce the number of cases sent to the JSRRC. We believe these are positive steps toward ensuring the quality of the records research done by the VBA unit at the National Personnel Records Center and improving timeliness. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this report.
At that time, we will send copies of this report to the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. The report will also be available at GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please call me at (202) 512-7215. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and staff acknowledgments are listed in appendix VI. To identify Veterans Benefits Administration (VBA) procedures for obtaining relevant military service records, we obtained and analyzed Department of Veterans Affairs (VA) regulations governing the processing of compensation claims; VBA’s written procedures, user guide for the automated system for requesting military records, training materials, and other VBA instructions for directing regional offices’ efforts in obtaining military records; locally written procedures and guides developed by regional offices to direct their employees in obtaining military records; and information electronically available to regional offices through VBA’s internal network. To gain an operational context for the information obtained from these sources and to obtain stakeholders’ views on the effectiveness of VBA’s procedures for obtaining relevant military service records, we interviewed officials of VA’s Board of Veterans’ Appeals and Office of Inspector General; VBA’s Compensation and Pension Service, Office of Field Operations, Appeals Management Center, Records Management Center, VA Liaison Office at the National Personnel Records Center, and regional offices located in Atlanta, Georgia, Baltimore, Maryland, Oakland, California, and St. Petersburg, Florida; custodians of military records and organizations that research military records on behalf of VBA’s regional offices, including Department of Defense (DOD) U.S. Army and Joint Services Records Research Center, Defense Threat Reduction Agency, DOD Joint Requirements and Integration Office, and National Personnel Records Center, which is operated by the National Archives and Records Administration; and veterans’ advocacy groups, including Disabled American Veterans, American Legion, Veterans of Foreign Wars, Paralyzed Veterans of America, AMVETS, National Veterans Legal Services Program, and state and county veterans service agencies. As part of our review of the results of VA’s internal assessments of regional offices’ compliance with Veterans Claims Assistance Act requirements for obtaining military service records, we assessed the reliability of fiscal year 2006 data compiled by VBA from its Systematic Technical Accuracy Review (STAR) program for regional office decisions involving compensation issues. In earlier GAO work on STAR data reported for fiscal year 2004, we reported that regional offices had failed to send any case files to the STAR unit for hundreds of cases randomly selected for quality review, which meant the possibility existed that if the STAR unit had actually been able to review the files for these cases, the accuracy scores for some individual regional offices could have been lower than those reported for fiscal year 2004. Subsequently, the STAR unit began tracking the receipt of cases randomly selected for review. For our current work, we followed up with the STAR unit to determine the extent to which regional offices now send to the STAR unit all cases selected for quality review. 
We obtained data from the STAR unit and concluded that the numbers of cases requested, received, and reviewed for the first half of fiscal year 2006 provided nationwide data that were sufficiently reliable for our reporting purposes. Even so, the STAR unit did not receive about 6 percent of the cases selected for review during the first half of fiscal year 2006; therefore, because the STAR unit might have found additional Veterans Claims Assistance Act (VCAA) development errors if it had had the opportunity to review these cases, the percentage of cases actually containing VCAA development errors may have been larger than indicated by the fiscal year 2006 data reported by the STAR unit. Also, as part of our review of VA's internal assessments of regional offices' compliance with VCAA requirements for obtaining military service records, we assessed the reliability of data recorded in the Veterans Appeals Control and Locator System (VACOLS) by the Board of Veterans' Appeals on the results of its reviews of veterans' appeals on compensation decisions made by regional offices. We obtained data as of January 31, 2006, on all compensation cases decided by the board since November 1, 2004, when the board began recording in VACOLS whether its remands of decisions to VBA for rework were due to regional office deficiencies. To assess the reliability of the VACOLS data, we interviewed knowledgeable board officials, performed electronic testing of pertinent VACOLS data elements, and reviewed existing information about the data and the system that produced them. We determined that the data were sufficiently reliable for the purposes of this report. We analyzed these data to create summary statistics on the disposition of compensation cases and issues decided by the board. VBA's regional offices face a complex task in obtaining veterans' military service records because (1) service records consist of numerous types of records that can originate from numerous sources within or outside DOD, (2) the process for collecting and storing service records has varied substantially for different groups of veterans over time, (3) service records cannot always be found at the expected storage locations, and (4) the service records of many veterans were destroyed by a fire in 1973 at the National Personnel Records Center, a primary repository for service personnel and medical records. The cumulative service medical records and service personnel records of individual service members contain numerous types of records that can originate in varying organizations and geographic locations of DOD's activities as service members migrate from assignment to assignment during their military service (see table 3). Historically, when service members separated from active duty, all DOD service branches forwarded all service medical records and service personnel records to the National Personnel Records Center in St. Louis, Missouri. However, beginning in the early 1990s, separation point military installations began sending service medical records to VA's Records Management Center, also located in St. Louis. The timing of this changeover varied among service branches, but as of May 1998, all branches had begun sending service medical records to the VA Records Management Center for service members who are discharged from active duty and have no remaining military reserve or National Guard obligation (see table 4, col. 2).
Also, in 1996, the Navy became the first DOD service branch to store service personnel records electronically in optically imaged files, which permitted the Navy to discontinue sending these records to the National Personnel Records Center. As of November 2005, all DOD service branches were storing service personnel records electronically and had discontinued sending such records to the National Personnel Records Center (see table 4, col. 3). When service members have military reserve or National Guard obligations remaining at the time of their release from active duty, the service branches may not route their service records in the same way that they route the records of those who do not have such an obligation when released from active duty. For service members who still have reserve or guard obligations at the time of their release, the disposition of their service records varies depending on their service branch, whether their obligation is a reserve versus guard obligation, and whether or not they are assigned to an active unit at the time of release from active duty. VA and DOD jointly initiated a Benefits Delivery at Discharge program that enables service members still on active duty to file disability compensation claims up to 6 months before separating from active military duty. Under this program, VBA arranges for a physical examination of the claimant, and the service branch provides a VBA liaison with a copy of the claimant's service medical records. The liaison sends these records to one of the two VBA regional offices (Winston-Salem and Salt Lake City) that process all claims filed under this program. The regional office prepares a rating decision prior to the claimant's discharge from active duty, and after the claimant's discharge, the service branch sends the regional office a copy of the claimant's DD Form 214 (Report of Release from Active Military Service), and the regional office immediately authorizes benefits. As of April 2005, 141 military installations worldwide were participating in the Benefits Delivery at Discharge program, and in fiscal year 2004, VBA processed 39,000 claims under this program. Additionally, if a service member not participating in this program submits a VA disability claim form to his or her service branch before separating from active duty, the service branch retains the claim form until the individual separates from active duty and then forwards his or her claim form, DD Form 214, and service medical records to the regional office having jurisdiction over the individual's permanent address. To request veterans' service records, regional offices rely primarily on a VBA system known as the Personnel Information Exchange System (PIES). This system provides regional offices with a menu of record request codes, each of which is defined in terms of the types of service records and/or information being requested by the regional office. On behalf of the regional offices that input such requests into the PIES system, the VA Records Management Center prints and mails requests to custodians of records maintained in paper form, and the PIES system electronically routes requests to custodians of service personnel records maintained in optically imaged files. However, for a variety of reasons, the custodians whom regional offices expect to be in possession of requested records cannot always provide the records (see fig. 1). The service records of many older veterans were destroyed by a fire in 1973 at the National Personnel Records Center.
The fire destroyed the records of approximately 80 percent (16 million to 18 million) of the Army veterans who served during November 1912 through January 1, 1960, and the records of 75 percent of the Air Force veterans with surnames Hubbard through Z who were discharged between September 25, 1947, and January 1, 1964, and were not in a retired or reserve status at the time of the fire. For some of these veterans, the National Personnel Records Center has resources that can help reconstruct some of their service medical information. For example, the center has Army morning (sick) reports for November 1912 to December 1974 and Air Force morning reports for September 1947 to June 1966. Also, in 1988, the National Personnel Records Center obtained magnetic tapes containing limited information extracted by the Surgeon General's Office from about 10 million hospital admission records for veterans admitted to military hospitals during 1942-1945 and 1950-1954. Another alternative is for VA to ask the veteran's service branch to search sick logs, morning reports, and records of military organizations, hospitals, and infirmaries. Other alternative sources for medical information can include statements from service medical personnel; buddy certificates or affidavits; state or local police accident reports; employment physical examinations; medical evidence from hospitals, clinics, and private physicians that may have treated the veteran during or soon after separation; letters written by the veteran during service; photographs taken during service; pharmacy prescription records; and insurance examinations. For each contested issue, the board makes one of three decisions: it grants the benefits requested by the veteran; it denies the benefits requested; or it remands the issue to VBA to obtain more evidence, after which VBA either grants the benefits or denies the requested benefits and resubmits the contested issue to the board for a final decision. The following individuals made important contributions to the report: Irene Chu, Assistant Director; Marta Chaffee; Martin Scire; Ira Spears; Vanessa Taylor; and Walter Vance. | The Ranking Democratic Member, House Committee on Veterans' Affairs, asked GAO to determine (1) whether VA's internal assessments indicate its regional offices are complying with the requirements of the Veterans Claims Assistance Act (VCAA) of 2000 for obtaining military service records for veterans' disability compensation claims and (2) whether VBA could improve its procedures for obtaining military service records for claims involving post-traumatic stress disorder (PTSD). The Department of Veterans Affairs' (VA) internal assessments indicate its regional offices generally comply with VCAA's requirements for obtaining military service records for veterans' compensation claims. For example, of the decisions made by regional offices on compensation claims during the first half of fiscal year 2006, Veterans Benefits Administration (VBA) quality reviewers found that less than 4 percent contained errors involving failure to obtain military service records. Similarly, of the appealed compensation cases decided by the Board of Veterans' Appeals during November 2004-January 2006, the board remanded less than 3 percent to VBA for rework due to deficiencies in obtaining military service records. However, VBA does not systematically evaluate the quality of research done on behalf of regional offices by a VBA unit at the National Personnel Records Center, where the service records of many veterans are stored.
Regional offices rely on this unit to do thorough and reliable searches and analyses of records and provide accurate reports on the results. Without a systematic program for assessing the quality of this unit's work, VBA does not know the extent to which the information that this unit provides to regional offices is reliable and accurate. VBA potentially could improve its procedures and reduce the time required to process some veterans' claims for PTSD, which may result after a veteran participates in, or is exposed to, stressful events or experiences (stressors). Regional offices sometimes must turn to information contained in the military historical records of the Department of Defense (DOD) to verify the occurrence of claimed stressors. While regional offices are able to directly access and search an electronic library of such records for many Marine Corps veterans, they must rely on DOD's U.S. Army and Joint Services Records Research Center (JSRRC) to research such records for all other service branches. The JSRRC's response time to regional office requests approaches an average of 1 year. However, by building on work already done by several regional offices to establish and use an electronic library of DOD military historical records for the other service branches, VBA may be able to greatly reduce the time required to process many veterans' PTSD claims. |
In 1999, FCC established the Enforcement Bureau to investigate potential violations of applicable statutes and Commission regulations and orders that are within FCC's mission of protecting consumers, promoting competition, ensuring responsible use of the public airwaves, and addressing risks to public safety. Prior to the establishment of the Enforcement Bureau, the Compliance and Information Bureau handled enforcement of matters currently handled by the field offices, and individual policy bureaus, such as the Media Bureau, handled enforcement within their own areas of responsibility. When the Commission created the Enforcement Bureau, it consolidated most of these responsibilities to streamline enforcement. Currently, the Enforcement Bureau has five divisions that conduct investigations (see fig. 1 below). As of August 2017, there were approximately 199 employees (full-time equivalents) in the Enforcement Bureau. Enforcement Bureau officials conduct reviews of potential violations and open enforcement cases if they determine an investigation is warranted. According to FCC officials, information about potential violations comes from a variety of sources, including: (1) consumer complaints; (2) industry and/or public safety complaints on interference, such as weather or cell tower interference; (3) referrals from other FCC bureaus, such as the Media Bureau, which administers broadcast licenses; (4) congressional interest/direction; and (5) trade/news reports on potential company violations. Figure 2 below shows the general process the Enforcement Bureau uses once it decides to open a case and pursue an investigation. In most instances, cases conclude with one of the three following outcomes: the Enforcement Bureau determines there is no violation, and the case is closed without action; the Enforcement Bureau and the company reach a settlement; or FCC issues an enforcement action, which can include a monetary penalty. If an investigation for an enforcement case reveals a potential violation, the Enforcement Bureau may issue a non-monetary or monetary enforcement action. Non-monetary actions include written warnings such as a notice of unlicensed operation or a notice of violation. For example, FCC can notify a party that it is operating a radio station without a license and warn that continued operation could result in more severe penalties such as a fine, seizure of equipment, and imprisonment. Below are the three main enforcement actions that could involve a monetary penalty: Notice of Apparent Liability (NAL): A notice informing a party that FCC believes a violation has occurred and that a forfeiture in a specified dollar amount is warranted. The subject of an NAL may elect to pay the proposed forfeiture, ending the proceeding, or file a response making legal or factual arguments that the proposed forfeiture should be modified, reduced, or cancelled. Consent Decree: An agreement between FCC and the party under investigation that sets forth the terms and conditions in exchange for closing the investigation. This can include a plan for reaching compliance and an agreed-upon civil penalty payable to the U.S. Treasury. Forfeiture Order: An order that requires that the monetary forfeiture proposed in an NAL be paid. If a party does not pay the forfeiture, the case is referred to the U.S. Department of Justice, which may bring an enforcement action in district court to recover the forfeiture. In calendar years 2014 through 2016, the Enforcement Bureau opened 3,075 cases.
Of these, 2,591—approximately 84 percent—were field office cases, which are under the Office of the Field Director. Many of the cases handled by the field offices relate to wireless spectrum interference, such as a radio station operating outside of its licensed spectrum and interfering with other radio communications. For the number of cases opened and closed by each division from calendar years 2014 through 2016, see table 1 below. FCC closes most cases the Enforcement Bureau investigates without monetary penalty. In calendar years 2014 through 2016, FCC closed 3,732 cases (see table 2). Of these cases, 359 (approximately 10 percent) had a monetary penalty in the form of an NAL, Consent Decree, or Forfeiture Order. In this same period, FCC closed 1,509 cases (approximately 40 percent) through non-monetary enforcement actions such as written or verbal warnings or notices of unlicensed operation. FCC closed the remaining 1,864 cases (approximately 50 percent) without an enforcement action. FCC recently improved the collection of data for its enforcement program by implementing a new enforcement data system and consumer informal complaint portal. Enforcement Bureau Activity Tracking System (EBATS): EBATS is a new data system that serves as the system of record for the Enforcement Bureau. EBATS captures data inputs for investigations (such as key dates and close-out status) and contains pertinent notes and documents investigators obtain or create related to a case. Prior to the implementation of EBATS, there were five distinct data systems, one for each division within the Enforcement Bureau. In 2008, we reported that FCC's separate data systems and the limitations with each hampered FCC's ability to use data to inform management of the enforcement program, and we recommended the data systems be improved. EBATS addresses these previously reported issues by unifying the databases and capturing key Enforcement Bureau data and information in a manner that we found, during our current review, to be generally reliable beginning with calendar year 2014. Managers across FCC's divisions use data from EBATS to monitor ongoing work. FCC officials told us that managers of each division review reports based on available data on a weekly basis, which includes the number of cases closed and opened during that week. FCC officials said Enforcement Bureau managers, such as the deputy chief of the Enforcement Bureau and assistant bureau chiefs, review EBATS data on a monthly basis. FCC officials noted that the monthly review focuses mainly on cases with pending Commission reviews and external deadlines, such as referring debt collection of an issued fine to the U.S. Department of Justice, which is the final step by FCC if a party does not pay an ordered fine. FCC officials said EBATS allows for closer management of dates, which has improved case efficiency, and provided two examples. First, officials said FCC has decreased the use of tolling agreements, in which FCC requests that parties waive the statute of limitations. Data on the number of tolling agreements from 2014 through 2016 showed FCC used tolling agreements in approximately 1–2 percent of the cases investigated each year. Although FCC officials told us this is an improvement over previous years, they could not provide reliable data on tolling agreements used before 2014. The second example provided by FCC officials is a decrease in the number of backlogged cases, which are those cases considered overdue for resolution.
However, when we requested information on the total number of backlogged cases over the last 5 years, FCC officials informed us that they do not currently track this information over time. While the database improvements should increase the availability and reliability of data FCC officials can use to assess the program, the agency's current enforcement performance goals are not quantified, as is discussed later in this report. Consumer Informal Complaints Portal: FCC implemented a new consumer complaints portal in December 2014 at a development cost of $297,514. This portal allows consumers to receive e-mail updates on the status of their complaints as well as ask questions and receive answers related to the complaint. The Consumer and Governmental Affairs Bureau (CGB) manages and houses the consumer informal complaints portal. FCC officials stated that most complaints are directly addressed by CGB officials through actions such as providing information to consumers and/or forwarding the complaint to the service provider. FCC officials told us that most complaints do not become enforcement cases because most do not represent a violation of a federal statute or commission regulation. For example, in calendar year 2016, consumers filed 344,045 complaints through the portal; these complaints resulted in 402 enforcement cases, according to FCC officials. Regardless of whether a complaint initiates an enforcement case, FCC officials have access to the information in the consumer complaint portal, and officials stated that they can use these data to help identify trends and determine whether to review a particular company or practice. FCC recently updated its enforcement processes by developing an enforcement handbook and reorganizing the field office division of the Enforcement Bureau to enhance efficiencies. Enforcement Handbook: The enforcement handbook is an internal guidance document for the Enforcement Bureau. The handbook contains and organizes previously disparate policy guidelines and adds new ones. According to FCC officials, in 2014, there was an agencywide process reform effort and an Enforcement Bureau-specific reform effort that resulted in the creation of the enforcement handbook. Officials stated that this document helped in meeting several goals of the reform effort, including increasing efficiencies and consistency across divisions and improving effectiveness of allocated resources. The handbook sets explicit timelines for major case milestones. The handbook also provides a case priority rating system to improve efficient use of resources. Most divisions use the case priority rating system to determine how to prioritize investigations, with the exception of the field office division, which conducts the majority of investigations. The Office of the Field Director has a separate priority rating system to prioritize public safety interference, such as interference to emergency communication networks, above all other cases. FCC officials said they found the case priority rating system helpful for day-to-day management of the enforcement program. To improve consistency across the divisions, the handbook also has standardized templates for various forms and official documents. Previously, each division had its own guidance for preparing these documents. Field Office Reorganization: FCC contracted for a study to review its field offices and received the results in March 2015.
In July 2015, FCC issued an order stating it could achieve efficiencies through reorganizing and closing some field offices. As of January 2017, FCC had closed 11 of 24 field offices (see fig. 3), and FCC officials stated that this reduction included eliminating 16 of 21 management positions and reducing total staff from 108 to 54 employees. FCC officials estimated the reorganization would cost $2 to $4 million and would save $9 to $10 million per year. FCC had originally proposed further cuts to the field office division, but while FCC was considering the field office reorganization in 2015, some stakeholders—ranging from members of Congress to industry groups and private companies—raised concerns that the decrease in field staff would hinder the effectiveness and timeliness of FCC's response to interference. Two stakeholders (one expert and one industry association) we spoke with for our review remain concerned about this issue. For example, one expert we interviewed stated that with the current transition in technology there is likely to be an increase in spectrum interference. According to this expert, wireless broadband is important to the economy because it is used in so many different ways, and without effective interference enforcement, the potential of wireless could be undermined. The expert added that if there is an increase in interference, FCC will need more, not fewer, field resources to resolve interference between spectrum users. Similarly, the industry association representatives we spoke to expressed concerns that FCC's ability to effectively respond to interference issues has diminished since the field office reorganization. According to FCC officials, to help mitigate concerns about responsiveness to interference issues, FCC has employed mobile "tiger teams." These tiger teams are currently located in the Columbia and Denver field offices, where FCC officials stated they can be quickly deployed to support high-priority initiatives of the Enforcement Bureau or other entities from headquarters. FCC officials also told us that they are taking steps to use the anticipated cost savings from the field office reorganization to invest in training, equipment, and technology updates that will improve efficiency. For example, some efforts already undertaken or planned, according to officials, include the following: Training: FCC officials said field office staff completed a 3-day training on Long-Term Evolution (LTE) wireless networks in July 2016 to increase staff knowledge of this technology. FCC officials said that LTE is increasingly being used for wireless communication and that it is important for FCC officials in the field to understand what it is and how interference with it can cause harm. Officials stated that they plan to conduct more training in the future. Equipment: FCC is in the process of purchasing a remote radio location detector system, which officials stated will act as a "force multiplier" because the detectors can be easily deployed and left in place to measure interference over time. Previously, field personnel had to collect this type of data in person. FCC officials stated they have conducted hands-on evaluations of the top four vendors and are developing purchase recommendations.
Also, according to FCC officials, FCC recently purchased mobile direction-finding equipment for use on rental vehicles and is in the process of purchasing equipment such as amplifiers, filters, and spectrum analyzers to improve the technological capability of field office personnel. Technology: FCC officials are working on a new complaint portal for businesses and public safety officials to use when they experience interference. FCC planned to implement this portal in spring 2016 but has faced delays. Given the recent changes, it is too early to determine the impact these actions will have on enforcement efforts. We found that most of FCC's enforcement program goals, as published in its Annual Performance Reports and Budget Estimates to Congress, are missing key elements that could improve oversight and performance evaluation. The Government Performance and Results Act (GPRA), as enhanced by the GPRA Modernization Act of 2010, requires agencies to develop objective, measurable, and quantifiable performance goals and related measures and to report progress in performance reports in order to promote public and congressional oversight as well as improve agency program performance. OMB guidance on implementation of the GPRA Modernization Act of 2010 states that performance goals should include a specific measure with a targeted level of performance to occur over a defined timeframe. When we compared FCC's recently published goals with OMB's guidance, we found that only one of FCC's enforcement performance goals partly meets the guidance; the remaining seven performance goals do not have associated measures with target levels and timeframes (see table 3 below). Our review of FCC's annual performance report found that it includes descriptions of enforcement actions taken against companies but does not include quantified performance measures. FCC officials stated that narrative examples, rather than quantified goals and related measures, were the most appropriate way to report on FCC's efforts to help consumers and protect the public through its enforcement program. In 2008, FCC reported two additional performance measures related to the number of cases it investigated and the length of time that it took to close cases. FCC officials told us the Chairman's Office made the decision in 2009 to stop reporting data-driven measures and to replace them with narratives of the types of investigations performed, penalties issued, and examples of what it considers bad behavior. According to FCC officials, it is difficult to develop effective performance goals and measures for its enforcement program because enforcement is usually in reaction to the activities of companies. As a result, in lieu of performance goals and measures, FCC's Fiscal Year 2016 Annual Performance Report contains descriptions of specific settlements or proposed fines issued, including one fine in excess of $34 million assessed to a company that illegally imported jamming devices that overpower, jam, or interfere with authorized communications. However, the Enforcement Bureau's new EBATS database, described earlier in this report, has the data that FCC could use to help establish and report on objective, measurable, and quantifiable performance goals and related measures. Three other regulatory agencies with inherently reactive enforcement programs similar to FCC's have developed objective, measurable, and quantifiable performance goals.
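Before turning to those agencies, a minimal sketch in Python of what a quantified timeliness measure built from EBATS-style case data could look like. The case records and the 270-day target here are hypothetical illustrations for this report, not actual EBATS data or an FCC target:

```python
from datetime import date
from statistics import median

# Hypothetical case records -- illustrative only, not actual EBATS data.
cases = [
    {"opened": date(2016, 1, 4),  "closed": date(2016, 3, 1)},
    {"opened": date(2016, 2, 10), "closed": date(2016, 9, 15)},
    {"opened": date(2016, 5, 2),  "closed": date(2017, 1, 20)},
]

# Indicator: days from case opening to case closure.
days_to_close = [(c["closed"] - c["opened"]).days for c in cases]

# A target level would be set by the agency; 270 days is an assumed value.
target_days = 270

print("Median days to close:", median(days_to_close))
share_on_time = sum(d <= target_days for d in days_to_close) / len(cases)
print(f"Share of cases closed within {target_days} days: {share_on_time:.0%}")
```

A goal stated this way pairs an indicator (median days to close, or share of cases closed within the target) with an explicit target level and timeframe, which is what the OMB guidance described above calls for.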
During our review we spoke with officials from the Securities and Exchange Commission (SEC), the Commodity Futures Trading Commission (CFTC), and the Federal Trade Commission (FTC), all of which have enforcement programs and have developed objective, quantifiable, and measurable goals for those programs. Officials from these agencies agreed that it is difficult to measure enforcement performance, in part because of the reactive nature of enforcement and the difficulty of quantifying deterrence. However, they believe there are performance measures—timeliness, monetary outcomes, and enforcement actions taken in relation to consumer complaints, among others—that can capture essential program information. Examples of the three agencies' performance goals for their enforcement programs are shown in table 4 below. We have previously reported that a key element in an agency's efforts to manage for results is its ability to set meaningful performance goals and to measure progress toward those goals. We have also found that communicating what an agency intends to achieve and its approach for doing so are fundamental aims of performance management. Without developing meaningful, quantifiable goals and related measures for the enforcement program, FCC (1) lacks important tools for assessing and reporting on the progress of its enforcement program and determining whether changes should be made to improve performance, and (2) may be missing an opportunity to help promote transparency about its program and support congressional oversight. We interviewed stakeholders with a wide range of perspectives; however, most agreed that FCC's enforcement is important for deterring violations of federal statutes and FCC regulations. Fourteen of the 22 stakeholders explicitly stated that FCC enforcement is important to deter violations and/or provided examples of appropriate FCC enforcement against violators. Ten of the stakeholders highlighted the importance of Enforcement Bureau actions in helping to protect consumers. For example, two telecommunications experts said FCC enforcement actions against prepaid calling card companies for deceptive marketing practices have protected consumers. In calendar years 2010 through 2015, FCC investigated and issued separate $5 million fines to six companies for deceptive marketing of prepaid calling cards. FCC's announcement of the fines said that in each case, companies sold cards that advertised hundreds or thousands of minutes for international calls at a low cost, but consumers received only a small fraction of the advertised time unless they used all of the minutes in a single call. FCC officials we spoke with also cited these cases as instances in which they believed there may be a deterrent effect on other companies that might consider similar practices. Four stakeholders said Enforcement Bureau actions help address interference with emergency communications systems or violations of public safety regulations. For example, representatives of one public interest group and one industry association said FCC is quick to respond to and resolve interference issues with communications systems used by first responders, which ensures that risks to public safety are minimized. One telecommunications company official said FCC's fines to companies for insufficiently lighted radio towers appear to be effective because, according to this official, the number of aircraft accidents involving towers appears to have decreased.
Most stakeholders expressed concerns regarding the transparency or fairness of the enforcement process or its emphasis on publicity. Of the 22 stakeholders we interviewed, 17 mentioned at least one of these concerns. Lack of Transparency: When asked about their perception regarding the transparency of the enforcement process, 16 of the 22 stakeholders we interviewed expressed concern that the enforcement process was not transparent. As an example, stakeholders noted their unsuccessful attempts to obtain information during an investigation. Eight stakeholders said companies are unable to obtain information from the Enforcement Bureau about the potential violations under investigation until the final stages of an investigation. Of these 8 stakeholders, 4 said that this situation differs from their interactions with other regulatory enforcement agencies, which inform companies earlier in the process of the specific violation under investigation. FCC officials we spoke with said their lack of transparency during the course of an investigation is intended, in part, to protect the reputation and business interests of the target in the event that no violation is found and to ensure that sensitive information that could undermine a case is not revealed to the party being investigated. Perceived Unfair Process: Fourteen stakeholders said the enforcement process was not always fair. Stakeholders provided the following examples. Ten stakeholders said that requests for information from the Enforcement Bureau can be broad and burdensome, requiring significant time and resources from the company to comply. FCC officials told us they often work with parties they are investigating to narrow the scope of a request, such as a letter of inquiry, which can reduce the resource burden on the party and the Enforcement Bureau. FCC officials added, however, that they are careful to avoid overly narrowing the scope of a letter of inquiry because doing so could preclude the Enforcement Bureau from gathering information about all potential violations by the party. Nine stakeholders said the Enforcement Bureau issued fines when there was no clear violation of the regulations. In addition, seven stakeholders commented that FCC is using enforcement actions to set precedent and effectively create new policy. Two industry associations stated that they believe this type of action bypasses the notice and comment requirements in the Administrative Procedure Act (APA). For example, one industry association cited a 2015 FCC enforcement policy statement that adopted a treble damages approach to calculating fines for companies not making their full contributions to FCC-administered funds such as the Universal Service Fund. Four industry associations filed a joint petition asking FCC to reconsider the policy statement because they considered it a substantial change issued without public notice and therefore in violation of the APA. FCC officials stated that this petition for reconsideration is pending at the FCC. When we asked FCC officials about claims that FCC issued fines where there was no clear violation of the regulations, officials directed us to a written response to a question for the record for a 2015 congressional oversight hearing, in which former FCC Chairman Wheeler stated that penalties may be issued in the absence of an agency regulation governing such conduct.
He stated that this is because the Communications Act of 1934, as amended, demonstrates Congress's intent that certain conduct be prohibited, and the act as established by Congress does not require the additional creation of an agency regulation. He further stated that because the Commission has the choice to decide whether to carry out its activities through rulemaking or adjudication under the APA, FCC may use adjudication to interpret and apply statutes Congress has directed FCC to enforce. Nine stakeholders said industry participants have lost the incentive to self-report potential violations because it does not appear that the Enforcement Bureau will treat them fairly. Of these 9 stakeholders, 4 said they know of companies that acted quickly to correct and report violations, but the Enforcement Bureau still issued significant penalties. FCC officials stated that industry's self-policing is important to an effective enforcement regime and that, pursuant to FCC regulations, good faith or voluntary disclosure can factor into decisions to grant parties leniency. Emphasis on Generating Publicity through Large Proposed Fines: Fifteen of the 22 stakeholders expressed concerns that there has been an emphasis on generating publicity by proposing high-dollar fines through NALs. In addition, 10 of these 15 stakeholders said fine amounts appeared to be calculated arbitrarily and without a rational basis. To determine whether there has been an increase in the amounts of FCC-issued NALs, we reviewed FCC data on NALs from calendar years 2012 through 2016. As shown in table 5 below, the average dollar amount of NALs issued by FCC increased from approximately $180,000 in 2012 to approximately $6,300,000 in 2016. The median NAL fine amount has also generally increased over this same period, though not to the same extent: from 2012 through 2016, median fines increased from $15,000 to $25,000. Also, the total number of NALs decreased from 111 in 2012 to 24 in 2016. Compared with previous years, FCC has recently issued a small number of fines with very high dollar amounts; in 2016, FCC issued two of these high fines, compared with none in 2012 (see table 6). The divergence between the average and the median is the arithmetic signature of such outliers, as the sketch at the end of this discussion illustrates. When asked about the apparent increase in proposed fines, FCC officials acknowledged that fines have increased. However, these officials stated that they have recently focused resources on investigating difficult cases they believe have the biggest impact on consumers and that the fine amounts they issue are appropriate for the violation. FCC officials also stated that publicity and large fines can be effective deterrents and can alert consumers that certain activities are unlawful. Seven stakeholders agreed that publicity and headlines can be effective tools for enforcement. FCC currently communicates with stakeholders in a variety of ways, varying in formality and in whether the communication is public or private, but it does not have a clear communication strategy for its enforcement activities (see the list of all communications in table 7 below). Instead, FCC tailors the extent of its communications to stakeholders on a case-by-case basis. FCC officials told us they use this approach because they have concerns that in some cases sharing too much information about their enforcement processes, or case-sensitive information, could help parties under investigation undermine FCC's case. For example, FCC does not publish an enforcement manual or similar overall policy document on its website outlining its enforcement policies and processes.
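Returning to the NAL figures in tables 5 and 6: a mean that jumps into the millions while the median stays in the tens of thousands is exactly what a few very large proposed fines produce. A minimal sketch, using hypothetical fine amounts rather than actual FCC data, illustrates the effect:

```python
from statistics import mean, median

# Hypothetical NAL amounts in dollars -- illustrative only, not actual FCC data.
mostly_small_fines = [10_000, 15_000, 15_000, 25_000, 40_000]
one_very_large_fine = [15_000, 25_000, 25_000, 100_000, 30_000_000]

for label, fines in [("mostly small fines", mostly_small_fines),
                     ("one very large fine", one_very_large_fine)]:
    print(f"{label}: mean ${mean(fines):,.0f}, median ${median(fines):,.0f}")
```

In the second list, the single $30 million amount pulls the mean above $6 million while the median rises only to $25,000, mirroring the pattern in FCC's reported figures.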
In contrast to FCC, other agencies, such as FTC and SEC, publish enforcement manuals on their websites to provide information about the enforcement process, as well as brief summaries that explain agency regulations, to help ensure the clarity of both. FCC officials also told us they work under very strict time deadlines because of the one-year statute of limitations that applies to many cases that the Enforcement Bureau investigates. FCC officials also stated that at least some courts have found that this one-year deadline begins at the time of the violation—not at the time the Enforcement Bureau learns of the violation—further reducing the time the Bureau has to negotiate the scope of the investigation. During this one-year period, FCC officials say they must make sure that communications with the parties do not jeopardize the agency's ability to act within the one-year statutory deadline. However, as described earlier, FCC can use tolling agreements, which allow FCC—with the agreement of investigated parties—to waive the statute of limitations in cases where it needs additional time to conduct or complete the investigation. Despite the communication efforts outlined in table 7, 16 of the 22 stakeholders we spoke with expressed concern that the enforcement process was not transparent or fair. Additionally, 10 of these 16 stakeholders said there is a perceived lack of communication between stakeholders and the Enforcement Bureau. Clear communication strategies are important to promoting transparency, particularly in the case of enforcement activities. In a publication on the Best Practice Principles for Regulatory Enforcement, the Organisation for Economic Co-operation and Development (OECD) states that government should ensure a clear and fair process for enforcement. The Best Practice Principles include clearly informing parties of the rights and obligations they have in the process, how to challenge and appeal conclusions, and where and how to obtain compliance assistance or report any abuses. Federal internal control standards state that management should design appropriate control activities for programs, including externally communicating the necessary quality information to achieve the entity's objectives. In the case of enforcement, agencies can help promote compliance by establishing strategies that foster open, two-way communication with external parties to help ensure the clarity of regulations and processes for enforcement. A communication strategy would serve to relay the purposes, objectives, and processes the Enforcement Bureau employs to achieve its mission, as well as the rights and expectations of those under investigation. Furthermore, the creation of a communication strategy to provide necessary and quality information to external stakeholders could (1) clarify aspects of the enforcement process that are not transparent or are confusing to stakeholders, and (2) promote clear, fair, and consistent enforcement. For example, such a strategy could clearly identify the rights and obligations parties have in the enforcement process and where and how to obtain additional information about the enforcement process or report any abuses, without revealing information considered sensitive during the course of an investigation. Recently, FCC has taken steps toward improving the transparency of Commission processes to external stakeholders. Since January 2017, the new FCC Chairman has implemented six changes, including two intended to improve external transparency.
One of the newly implemented changes is for FCC to publicly release, in advance of monthly Commission meetings, the text of all agenda items that the Commission will vote on during the monthly meeting. Previously, FCC's practice was to release the full text of agenda items only after the Commission voted. However, an exception will be made for enforcement actions that are to be voted on by the Commission; FCC officials explained that the information contained in enforcement actions is considered law enforcement sensitive until it has been voted upon by the Commissioners. The other change intended to improve transparency is releasing a one-page fact sheet that summarizes the text of each meeting's agenda items. FCC's statement about this change said that the one-page summaries will improve the public's access to Commission information. Although these policies have a limited direct impact on enforcement, increased focus on external transparency for the enforcement program could improve stakeholder perceptions of FCC actions and help promote the perception of a fair process, as well as greater industry cooperation and compliance with FCC regulations. In recent years, FCC has taken certain actions to improve the efficiency of its enforcement program. However, the extent to which FCC's Enforcement Bureau is achieving its mission of protecting consumers, promoting competition, ensuring responsible use of the public airwaves, and addressing risks to public safety is difficult to determine because FCC has not developed performance indicators, targets, and timeframes that would enable a meaningful assessment of its enforcement program. Furthermore, without quantifiable performance goals and related measures, Congress does not have the information needed to fulfill its oversight role, and industry and consumers lack information that would provide transparency regarding FCC's enforcement priorities. Similarly, without a communications strategy that publicly outlines the purposes, objectives, and processes used by the Enforcement Bureau in carrying out its mission, FCC may be missing an opportunity to improve transparency for industry and consumers and to further engage with both to improve their understanding of FCC's enforcement process. The Chairman of the FCC should establish quantifiable goals and related measures—performance indicators, targets, and timeframes—for its enforcement program and annually publish the results to demonstrate the performance of this program and improve transparency regarding FCC's enforcement priorities. (Recommendation 1) The Chairman of the FCC should establish, and make publicly available, a communications strategy outlining the agency's enforcement program for external stakeholders, to improve engagement with the telecommunications community on the purposes, objectives, and processes the Enforcement Bureau employs to achieve its mission. (Recommendation 2) We provided a draft of this report to the Federal Communications Commission for review and comment. FCC provided written comments that are reprinted in appendix II. In its written comments, FCC stated that it agreed with both of our recommendations and noted steps it plans to take to implement quantifiable performance goals and increase transparency regarding the enforcement process. FCC also provided technical comments that we incorporated as appropriate. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report addresses (1) actions taken by FCC in the last 5 years to update its enforcement program; (2) performance goals and measures for FCC's enforcement program; and (3) selected stakeholders' views on FCC's enforcement program and FCC's communication with these stakeholders. To identify trends in enforcement actions and outcomes, we analyzed calendar years 2014 through 2016 summary data from FCC's Enforcement Bureau Activity Tracking System (EBATS). Although EBATS was implemented in 2012, a rolling implementation and major system upgrades limited the available reliable data to 2014 through 2016. We determined that these data were sufficiently reliable for our purposes by reviewing documentation related to how the data were collected and processed, by reconciling the publicly accessible data, and by interviewing FCC officials on their data validation efforts. To describe what actions FCC has taken in the last 5 years (calendar years 2012 through 2016) to update its enforcement program, we reviewed FCC documentation, such as policies and reports related to internal improvement efforts. In addition, we interviewed FCC officials from the Enforcement Bureau, the Consumer and Governmental Affairs Bureau, and the Office of the Managing Director. We also interviewed FCC officials located in two of FCC's field offices, Columbia, MD, and Dallas, TX, because these field offices represent 2 of the 3 regions and in the past have conducted greater numbers of investigations than some other field offices. In Dallas, we accompanied FCC officials on a field investigation to observe officials using equipment to locate sources of interference. To determine what performance goals and measures are in place for the enforcement program, we reviewed FCC's annual performance reports, budget estimates to Congress, and strategic plans from 2008 to the present and interviewed FCC officials. We evaluated FCC's performance goals and measures, as listed in FCC's Fiscal Year 2015 Annual Performance Report and the Fiscal Year 2017 Budget Estimates to Congress, against criteria for developing federal agency performance goals and measures as established in the GPRA Modernization Act of 2010 and OMB guidance related to implementing performance measures. We also reviewed documents including OMB's Circular A-11, Part 6, Section 200, and GAO's federal internal control standards related to performance measures. We also reviewed FCC's Fiscal Year 2016 Annual Performance Report and Fiscal Year 2018 Budget Estimates to Congress; however, FCC did not include enforcement program goals in these reports. To obtain information on the performance goals and measures used by other agencies with enforcement programs, we selected three additional agencies to review: the Securities and Exchange Commission (SEC), the Commodity Futures Trading Commission (CFTC), and the Federal Trade Commission (FTC). We selected these agencies from a group of comparison agencies that also had (1) federal independent regulatory authority and (2) a dedicated enforcement bureau or division. After applying the first set of criteria, we selected the three agencies based on similarity to FCC in terms of budget and number of employees allocated in the congressionally approved budget for fiscal year 2016.
For each of the agencies selected, we reviewed its most recent performance plans and other relevant enforcement-related documentation. We also interviewed officials from each of these agencies to gain their perspectives on managing performance and measuring enforcement efforts. To determine stakeholder views on FCC's enforcement program, we interviewed a non-generalizable sample of 22 stakeholders who were knowledgeable of the Enforcement Bureau and the communications industry. We selected these stakeholders in order to get a range of perspectives using the following criteria: (1) type of industry perspective, (2) size of company (where applicable), (3) level of activity in filing comments with FCC, and (4) area of expertise. After applying these criteria, we assembled a list of stakeholders who viewed the industry from different perspectives (telecommunications companies, public interest groups, industry associations, and telecommunications experts) and different areas of expertise (phone, radio, television, and internet). By taking into account stakeholders' prior work and their level of FCC comment activity, we also ensured these selected stakeholders were knowledgeable of the industry. For a full list of the stakeholders we interviewed, see table 8 below. To determine how FCC communicates with stakeholders, we reviewed FCC documentation and policies for formal and informal communication with stakeholders. We compared these policies to the Organisation for Economic Co-operation and Development's (OECD) Best Practice Principles for Regulatory Policy: Enforcement and Inspections and federal internal control standards on managing external communications. We also analyzed publicly accessible data on monetary enforcement actions that are on FCC's website from calendar years 2012 through 2016 to determine whether stakeholder views matched recent FCC enforcement outcomes. We determined that these data were sufficiently reliable for our purposes by reconciling the data with FCC-provided data and interviewing FCC officials on their data validation efforts. We conducted this performance audit from June 2016 to September 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Derrick Collins, Assistant Director; Jade Winfree, Analyst-in-Charge; Dennis Antonio, Anne Doré, Camilo Flores, Josh Ormond, Michelle Weathers, and Elizabeth Wood made key contributions to this report.

FCC's Enforcement Bureau is primarily responsible for ensuring the telecommunications industry's compliance with federal statutes and the Commission rules and orders designed to protect consumers, ensure public safety, and encourage competition. Some industry stakeholders have raised questions about the transparency and fairness of the Enforcement Bureau. GAO was asked to review FCC's management of its enforcement program. In this report, GAO addresses: (1) actions FCC has taken in the last 5 years to update its enforcement program, (2) FCC's enforcement performance goals and measures, and (3) selected stakeholders' views on FCC's enforcement program and external communications.
GAO reviewed FCC's enforcement policies and procedures; analyzed FCC's performance measures and spoke with officials of similarly sized independent agencies with enforcement missions; and interviewed FCC officials and 22 stakeholders from public and private organizations who were knowledgeable of the Enforcement Bureau and the communications industry. The Federal Communications Commission (FCC) has taken actions in the last 5 years to update its enforcement data collection and processes. In 2012, FCC implemented a new enforcement data system, which combined five previously separate databases and contains pertinent information related to each enforcement case. In 2014, FCC launched a new consumer complaints portal that FCC officials can use to identify trends and determine whether to investigate a particular company or practice. FCC also updated its internal enforcement program guidance, which includes case prioritization policies as well as timeliness goals for case resolution. Lastly, FCC completed its reorganization of the Enforcement Bureau's field office division in January 2017, closing 11 of 24 field offices and decreasing personnel from 108 to 54. FCC officials stated they do not anticipate a decline in enforcement activity because FCC is taking steps to use the anticipated annual cost savings of $9 to $10 million from the reorganization to invest in training, equipment, and technology that will improve efficiency. Given the recent changes, it is too early to determine the impact these actions will have on enforcement efforts. FCC has not quantified most of its enforcement performance goals and measures. FCC officials told GAO that in 2009 the Chairman's Office decided that narrative examples, rather than quantifiable goals and related measures, were the most appropriate way to report on the enforcement program. For example, FCC's 2016 Annual Performance Report describes details of settlements or fines levied without reporting such goals or measures. Although such metrics can be difficult to develop, GAO found that other enforcement agencies report quantified performance goals and related measures and that FCC has the data it would need to develop such goals and measures. Without meaningful program performance goals and measures, FCC lacks important tools for assessing and reporting on the progress of its enforcement efforts and determining whether it should make changes to its program. FCC also may be missing an opportunity to help promote transparency and support congressional oversight by clearly communicating enforcement priorities. Most of the selected stakeholders GAO interviewed affirmed the importance of enforcement but cited concerns about FCC's current enforcement process and communication efforts with stakeholders. Fourteen of 22 selected stakeholders said enforcement is important for deterring violations of federal statutes and FCC rules. However, 17 of 22 also expressed concerns regarding the transparency or fairness of the enforcement process or regarding FCC's emphasis on generating publicity by proposing high-dollar fines for potential violators. FCC does not have a formal communications strategy that outlines its enforcement purposes and processes. Instead, FCC tailors the extent of its communications to stakeholders on a case-by-case basis. FCC officials told GAO that information about the enforcement process is sensitive and could undermine their cases.
However, leading practices on enforcement highlight the importance of disclosing agency enforcement processes, including how to challenge and appeal conclusions, as a way to foster fair and consistent enforcement. Increased communication from FCC could improve transparency and stakeholder perceptions of FCC enforcement actions. FCC should establish and publish: (1) quantifiable performance goals and related measures for its enforcement program; and (2) a communications strategy outlining its enforcement program for external stakeholders. FCC concurred with the recommendations.
Direct-use nuclear material is essential for building nuclear weapons. The diversion or theft of such material can enable terrorists or countries to build nuclear weapons without investing in expensive nuclear technologies and facilities. One way of deterring and detecting theft is by instituting nuclear material control systems on a national level and at facilities handling direct-use material. "Direct-use nuclear material" consists of highly enriched uranium (HEU) and plutonium that is relatively easy to handle because it has not been exposed to radiation or has been separated from highly radioactive materials. Direct-use material presents a high proliferation risk because it can be used to manufacture a nuclear weapon without further enrichment or irradiation in a reactor. According to the International Atomic Energy Agency, approximately 25 kilograms of HEU or 8 kilograms of plutonium is needed to manufacture a nuclear explosive, although the Department of Energy (DOE) suggests the amounts needed to build a weapon may be smaller. Many types of nuclear facilities routinely handle, process, or store direct-use material. Besides nuclear weapon production facilities, direct-use material can also be found at research reactors, reactor fuel fabrication facilities, uranium enrichment plants, spent fuel reprocessing facilities, and nuclear material storage sites. Most civilian nuclear power facilities are of less concern because they use low-enriched or natural uranium as fuel, which would require additional enrichment before the fuel would be suitable for nuclear weapons. While these reactors produce plutonium in spent reactor fuel, such fuel is dangerous to handle because it is highly radioactive. Spent reactor fuel also requires reprocessing before it is suitable for nuclear weapons. Nuclear materials are controlled to prevent and detect their theft. Nuclear material can be stolen or diverted by (1) outside individuals or groups, such as terrorists attempting to break in and steal nuclear material; (2) inside individuals or groups, such as one or more employees who have access to nuclear material; and (3) combinations of insiders and outsiders. A nuclear material control system consists of three overlapping components—material protection, material control, and material accounting. Together they compose a set of procedures, personnel, and equipment that address both insider and outsider threats. Material protection systems are designed to limit access to nuclear material by outside individuals and prevent the unauthorized removal of material from a facility by inside individuals. Nuclear facilities protect their material by (1) installing fences with sensors and television cameras to delay, detect, and assess unauthorized intrusions; (2) posting armed guards at entry and exit points; (3) establishing a protective response force that can react to unauthorized intrusions; and (4) installing nuclear material monitors to detect attempts to remove material from a facility. Nuclear facilities also assess the reliability of personnel with access to nuclear material by conducting background checks and continuously monitoring their behavior. Material control systems contain, monitor, and establish custody over nuclear material.
Nuclear facilities control material by (1) storing material in containers and vaults equipped with seals that can indicate when tampering may have occurred, (2) controlling access to and exit from nuclear material areas using badge and personnel identification equipment, and (3) establishing procedures to closely monitor nuclear materials. Nuclear facilities also designate custodians to be responsible for nuclear material in their possession. Material accounting systems maintain information on the quantity of nuclear materials within specified areas and on transfers in and out of those areas. They employ periodic inventories to count and measure nuclear material by element and isotopic content. Nuclear facilities use the inventory and transfer data to establish nuclear material balances, which track materials on hand and the flow of material within a specified area. The material balances are closed periodically by reconciling physical inventories with recorded inventories, correcting errors, calculating inventory differences and evaluating whether they are statistically significant, and performing trend analysis to detect protracted theft of nuclear material. Nuclear facilities in the United States are capable of updating material accounting data within 24-hour periods. Some U.S. facilities with more modern nuclear accounting systems are capable of updating material accounting data within 4 hours. In addition to facility systems, the United States and most other countries have established national material protection, control, and accounting (MPC&A) systems. These systems include regulations governing procedures for nuclear material protection, control, and accounting; inspection requirements to ensure that the systems are implemented properly; and tracking systems to provide information on the location and disposition of nuclear material nationally. In the United States, the Nuclear Regulatory Commission and DOE have promulgated regulations on controlling nuclear material. The United States is pursuing two different, but complementary, strategies to achieve its goals of rapidly improving nuclear material controls over direct-use material in the newly independent states (NIS). Under the Cooperative Threat Reduction (CTR) program, the U.S. Department of Defense (DOD) entered into agreements with the governments of Russia, Ukraine, and Kazakstan in 1993 to rapidly improve nuclear material controls over civilian nuclear material and develop national MPC&A systems in these countries. On June 23, 1995, DOD entered into an agreement with the Ministry of Defense in Belarus to improve controls over its civilian nuclear material. DOE implements the programs under these agreements. As a complementing strategy, DOE initiated a program of MPC&A cooperation with Russia's nuclear institutes, operating facilities, and enterprises in April 1994. This initiative, known as the lab-to-lab program, brings U.S. and Russian laboratory personnel directly together to work cooperatively on implementing MPC&A upgrades at Russian nuclear facilities. The purpose of the lab-to-lab program is to rapidly improve MPC&A at civilian, naval nuclear, and nuclear weapons-related facilities handling direct-use material in Russia. The program is jointly funded by DOE and the CTR program. Our objectives were to (1) review the nature and extent of problems with controlling nuclear materials in the NIS; (2) determine the status and future prospects of U.S.
efforts to help strengthen controls over direct-use nuclear material in Russia, Ukraine, Kazakstan, and Belarus; and (3) assess plans for consolidating these efforts in DOE. While seven NIS inherited direct-use nuclear material, we focused on the four countries that have been the primary recipients of U.S. assistance—Russia, Ukraine, Kazakstan, and Belarus. The scope of our review included direct-use nuclear material controlled by civilian authorities in the NIS and direct-use material used for naval nuclear propulsion purposes. We did not review the protection, control, and accounting systems used for nuclear weapons in the possession of the Ministry of Defense in Russia. U.S. officials believe controls over weapons in the custody of the Ministry of Defense are relatively better than controls over material outside of weapons. We also did not include in our review the upgrades at four sites funded by DOE that were not part of the lab-to-lab program. We recently issued a report that addressed the safety of facilities in the NIS. To meet our objectives, we reviewed U.S. assessments of the nature and extent of nuclear material control problems in the NIS; pertinent program documents, including agreements between DOD and the Russian Ministry of Atomic Energy (MINATOM), the Ukrainian State Committee on Nuclear and Radiation Safety, the Ministry of Defense of Kazakstan, and the Ministry of Defense of Belarus, and between DOE and Gosatomnadzor (GAN); program plans; trip reports; quarterly progress reviews and State Department cables; and program budget, obligation, and expenditure data for the CTR-sponsored government-to-government program and for DOE's lab-to-lab program. We also discussed with DOE its plans to consolidate U.S. MPC&A assistance in DOE. We interviewed officials from DOD, DOE, the Department of State, the Nuclear Regulatory Commission, the National Laboratories (including Los Alamos, Sandia, and Lawrence Livermore), the Pacific Northwest Laboratory, the National Security Council, and the National Academy of Sciences. We also interviewed nonproliferation specialists from the Monterey Institute of International Studies. In Russia, we interviewed officials from MINATOM, Gosatomnadzor (the Russian nuclear regulatory agency), the Kurchatov Institute, the Institute of Physics and Power Engineering, the Elektrostal Machine Building Plant, and the MINATOM nuclear weapons laboratories Arzamas-16 and Chelyabinsk-70; we also interviewed officials from the Kazakstan Atomic Energy Agency. In addition, we toured facilities at the Kurchatov Institute and the Institute of Physics and Power Engineering, located in the Russian Federation, to obtain information on current MPC&A systems implemented at these facilities. We visited sites in Russia that have been the recipients of U.S. assistance efforts, including the Elektrostal Machine Building Plant, the Kurchatov Institute, and the Institute of Physics and Power Engineering. We also witnessed the demonstration of a model MPC&A system at Arzamas-16. Our review was conducted between November 1994 and January 1996 in accordance with generally accepted government auditing standards. With the dissolution of the Soviet Union, Russia and six other NIS inherited hundreds of tons of direct-use nuclear material. Much of this material is thought to be located at 80 to 100 civilian, naval nuclear, and nuclear weapons-related facilities, mostly in Russia. However, U.S. and NIS officials do not know the exact amounts and locations of this material.
Much of it is highly attractive to theft because it is relatively safe to handle and is not in weapons. U.S. officials are concerned that social and economic changes in the NIS have increased the threat of theft and diversion of nuclear material, and with the breakdown of Soviet-era MPC&A systems, the NIS may not be as able to counter the increased threat. While as yet there is no direct evidence that a nuclear black market for stolen or diverted nuclear material exists in the NIS, the seizures of gram and kilogram quantities of direct-use material have increased these concerns. The Soviet Union produced up to 1,200 metric tons of HEU and 200 metric tons of plutonium. Much of this material is outside of nuclear weapons, and the stockpile of material outside of weapons is expected to grow rapidly as Russia proceeds to dismantle its weapons. The material is considered to be highly attractive to theft because it is (1) not very radioactive and therefore relatively safe to handle and (2) in forms that make it readily accessible to theft, for example, in containers that can easily be carried by one or two persons or as components from dismantled weapons. This material can be directly used to make a nuclear weapon without further enrichment or reprocessing. Most of the material is located in Russia. Los Alamos National Laboratory has identified five sectors in the Russian nuclear complex that handle direct-use material.
• Nuclear materials in weapons. (This material is largely in the custody of the Ministry of Defense.)
• The MINATOM defense complex, which contains large amounts of nuclear material removed from dismantled nuclear weapons and stockpiles of HEU and plutonium produced for the nuclear weapons program.
• The MINATOM civilian sector, which includes a number of reactor development institutes, such as the Institute of Physics and Power Engineering at Obninsk, as well as organizations, such as the Elektrostal Machine Building Factory, that produce nuclear fuels and materials for civilian applications. (Some of these institutes and enterprises do both civilian and defense work.)
• Civilian research institutes outside of MINATOM, which include the Kurchatov Institute and facilities run by the Academy of Sciences, the Ministry of Science, and the Commission on Defense Industry. (Most of these institutes possess only small quantities of materials, although some, such as the Kurchatov Institute, possess several tons of direct-use material.)
• The naval propulsion sector, which includes the Navy and the Ministry of Shipbuilding. (This sector comprises stockpiles of HEU used in submarines and icebreakers.)
Other NIS with facilities that handle direct-use material include Belarus, Georgia, Kazakstan, Latvia, Ukraine, and Uzbekistan. Generally, the nuclear facilities in these countries are operated by their respective atomic energy ministries or academies of science and involve nuclear research centers, research reactors, and, in the case of Kazakstan, a plutonium breeder reactor. The Soviet Union controlled nuclear materials from the beginning of its nuclear program in the 1940s. The Soviet approach to controlling nuclear materials placed a heavy emphasis on internal security, which corresponded to the political and economic conditions within the Soviet Union. It placed less emphasis on accounting procedures, which were used to monitor production rather than to detect diversion or ensure the absence of diversion. The Soviet Union located its nuclear weapons complex in closed secret cities.
The cities were separated from other urban areas, self-contained, and protected by fences and guard forces. Personnel working in the Soviet nuclear complex were under heavy surveillance by the KGB. Personnel went through an intensive screening process, and their activities were closely monitored. In general, facilities would control access to nuclear material using a three-person rule, requiring two facility staff members and at least one person from the security services to be present when material was handled. The Soviet-era control system enforced severe penalties for violations of control procedures. According to U.S. national laboratory officials, the Soviet system accounted for nuclear material, although its accounting was not complete, timely, or accurate. Facilities paid close attention to end products to meet production quotas and paid less attention to the use of completely measured material balances to track net gains and losses of materials as they were processed or handled. The Soviet system relied on manual, paper-based systems that made tracking material time-consuming. Facilities also used standard estimates of rates of loss for materials that could be held up in processing equipment, such as pipes, rather than measuring actual losses. According to DOE, in these respects, the Soviet system of accounting was similar to that used in the early days of the U.S. nuclear program. According to Russian officials, traditional Soviet approaches to nuclear material controls were generally effective because (1) the Soviet Union was a closed society (separated by a robust iron curtain) with strict controls over foreign travel by its citizens, (2) internal security within the Soviet Union was quite rigid and strict discipline was imposed when controls were violated, and (3) there was no black market in nuclear materials within the country. Social and economic changes in the NIS have increased the threat of theft and diversion of nuclear material, and Soviet-era MPC&A systems may not be able to adequately counter the increased threat. The major nuclear facilities in the MINATOM weapons complex are no longer secret, and access to these facilities, along with the other nuclear facilities in the NIS, has increased. According to a U.S. government assessment, (1) the difficult economic situation has led to a loss of prestige for nuclear workers, (2) inflation and late payment of wages have eroded the value of salaries, and (3) pervasive corruption in society and the increasing potency of a strong criminal element have weakened the insider protection program based on personnel surveillance. With these changes, Russian and U.S. officials have become increasingly concerned about growing insider and outsider threats of nuclear theft. According to an official from one of MINATOM's major facilities in its nuclear weapons complex, the insider threat at the facility has increased due to the frustrations of the institute's workers, who have not been paid in months. According to this official, this causes changes in their attitudes toward their work and places pressures on their families. The outsider threat has also increased at this facility because the closed city is now open to businesspeople and outside workers who visit for short periods of time. According to this official, the institutes do not have background information on the visitors. Consequently, they have a lower level of trust in the visitors than in the employees who have been working at the facility.
According to this official, while no nuclear material has been stolen from this facility, other precious metals, such as platinum and gold, have been. With the erosion of traditional nuclear controls, current nuclear control systems in the NIS have weaknesses that could result in the theft of direct-use materials. The NIS may not have complete and accurate inventories of their nuclear materials, and some material may have been withheld from facility accounting systems. Nuclear facilities rely on antiquated accounting systems and practices that cannot quickly detect and localize nuclear material losses. Many NIS facilities also lack certain types of modern equipment that can detect unauthorized attempts to remove nuclear material from facilities. The NIS may not have accurate and complete inventories of the direct-use material they inherited from the former Soviet Union. According to a GAN official, the nuclear safeguard system inherited from the former Soviet Union was not a comprehensive system. The Soviet Union did not have a national material control and accounting system, and, according to a Russian laboratory official, it did not conduct comprehensive physical inventories of nuclear material at its nuclear facilities. Some of the facilities we visited, such as the Kurchatov Institute, were in the process of conducting such a comprehensive inventory, but it was not completed at the time of our visit. At the Institute of Physics and Power Engineering, officials were conducting an inventory of 70,000 to 80,000 small disk-shaped fuel elements containing direct-use uranium and plutonium at one reactor. When we visited the facility, they did not have an exact count of the elements. Figure 2.1 shows examples of the small disk-shaped fuel elements we observed at this facility that could be attractive to theft. U.S. and Russian officials are also concerned that some direct-use nuclear material has not yet been discovered at NIS nuclear facilities. According to U.S. national laboratory officials, some nuclear material may have been withheld from facility accounting systems so that plant managers could make up shortfalls in meeting their production quotas. According to another national laboratory official, organizations do not always share information with one another on the location and availability of specific nuclear products. Russian officials are concerned that they have no real information on the amounts or presence of some nuclear material and that this material has yet to be discovered. According to a DOD official, HEU for a Soviet navy reactor program that was terminated years earlier was discovered by Kazakstani officials after the Soviet Union dissolved. This HEU, enough for over two dozen nuclear weapons, was transferred from Kazakstan to the United States under Project Sapphire. U.S. officials are uncertain as to whether they have identified all facilities within the NIS where direct-use material is located. The United States has identified 80 to 100 facilities that handle direct-use material in the NIS. However, according to a DOE official, there may be as many as 35 additional facilities where such material is handled. Many nuclear facilities in the NIS rely on manual, paper-based material accounting systems, rather than computer-based systems, and these manual systems cannot quickly locate and assess material losses. Nuclear facility operators have to manually check hundreds of paper records to determine if material is missing. In contrast, U.S.
nuclear facilities use computers extensively to maintain current information on the presence and quantity of all material. U.S. facilities are capable of updating nuclear material accounting information within 24 hours, and some can update material accounting information within 4 hours. Russian accounting systems do not provide systematic coverage of materials through all phases of the nuclear fuel cycle. According to U.S. national laboratory officials, these systems do not adequately measure or inventory material held up in processing equipment and pipes or material disposed of as waste. In addition, NIS facilities do not make full use of measured nuclear material balances, which makes it difficult to detect thefts occurring over a long period of time. According to a Los Alamos National Laboratory official, these facilities typically weigh material at certain points in production and generally measure radiation emitted from the material. These procedures, while useful in identifying the types of material present, are less rigorous than those required in the United States because they do not measure the quantity of material. Diversions of small amounts of nuclear material could go undetected over time without more accurate measurements. Figure 2.2 shows a Russian radiation-measuring instrument we observed being used at a facility to identify the types of material present in reactor fuel elements. Nuclear facilities in the NIS also use material control equipment that could be made more resistant to tampering by insiders. For example, nuclear material containers and vaults are sealed with a wire-and-wax seal system that could be removed and replaced without detection. In contrast, in the United States, material is sealed using numbered copper seals that are controlled and crimped, making them much more resistant to tampering. Material protection systems at NIS nuclear facilities have weaknesses that could allow insiders or outsiders to steal nuclear material without detection. In the United States, sites handling direct-use material are protected by two fences; various sensors designed to delay and detect intruders as they approach a facility; and television cameras, which allow facility personnel to assess the nature of the threat. The nuclear facilities we visited in Russia for the most part did not have such equipment. For example, during our visit to the Kurchatov Institute, we noticed that a concrete fence protecting the main facility was crumbling. The fence appeared to lack television monitors or other sensors. A fence used to protect another site at the institute with large quantities of direct-use material did not appear to have any sensors or television cameras to detect intrusion and had vegetation that could obscure intruders or those leaving the facility. We toured another site at the Kurchatov Institute where several hundred kilograms of direct-use material were present. Although the site was within the walled portion of the institute, there was no fencing or other intrusion delay and assessment system around the site. Although we were accompanied by an institute official who had cleared our visit with security personnel, we were able to gain access without showing identification. One unarmed security guard was posted within the building. In contrast, during a visit to a Sandia National Laboratory facility in New Mexico, we were required to show identification and display security badges while we visited a facility with large amounts of direct-use material.
The Sandia facility had numerous armed guards inside and outside the site. According to U.S. officials, there is no direct evidence that a nuclear black market linking buyers, sellers, and end-users exists for stolen or diverted nuclear material in the NIS. However, the seizures of gram and kilogram quantities of direct-use material in Russia, Germany, and the Czech Republic have increased concerns about the effectiveness of MPC&A systems in the NIS. The first case involving the theft or diversion of direct-use material appeared in Russia in 1992. According to U.S. officials, the more significant cases included the following: • From May to September 1992, 1.5 kilograms of weapons-grade HEU were diverted from the Luch Scientific Production Association in Russia by a Luch employee. According to a nonproliferation analyst, the material was diverted in small quantities about 20 to 25 times during the period. The employee was apprehended en route to Moscow. • In March 1994, three men were arrested in St. Petersburg trying to sell 3.05 kilograms of weapons-usable HEU. According to U.S. officials, Russian media articles claim that the material was smuggled out of a MINATOM facility located near Moscow in an oversized glove. • On May 10, 1994, 5.6 grams of nearly pure plutonium-239 were seized by German officials. • On August 10, 1994, 560 grams of a mixed-oxide uranium-plutonium mixture were seized at Munich Airport from a flight originating in Moscow. • On December 14, 1994, 2.72 kilograms of weapons-grade uranium were seized by police in Prague. U.S. officials stated that they have not uncovered any direct links between buyers of direct-use materials and end-users that would use the material for weapons purposes. However, the cases are troubling for several reasons. • The cases are the first to involve gram and kilogram quantities of direct-use material. • They show that individuals are willing to take high risks to traffic in smuggled direct-use material. • While scientific analysis cannot pinpoint the facilities from which the material seized in Europe originated, the criminal investigations suggest that the material may have come from the NIS. • The detection of nuclear smuggling so far has been by chance, rather than by reliance on material protection, control, and accounting systems or customs checks at the borders of the NIS. The United States is pursuing two different but complementary strategies to achieve its goal of rapidly improving nuclear material controls over direct-use material in the NIS. The CTR-sponsored government-to-government program, which works directly with the NIS, is only now beginning to improve controls over direct-use material because (1) until January 1995, Russia's MINATOM was reluctant to cooperate with the U.S. program because of security concerns and (2) work at non-Russian facilities with direct-use material is in the early stages of implementation. The DOE lab-to-lab program, which works directly with Russian nuclear facilities, has improved controls over direct-use material at five facilities during its first full year of implementation. Despite the slow start, the prospects for U.S. efforts to enhance MPC&A in the NIS are improving. Russia and the United States agreed in June 1995 to add five high-priority sites that have large amounts of direct-use material to the CTR-sponsored government-to-government program. In Kazakstan and Ukraine, the CTR-sponsored MPC&A program is progressing steadily with improvements at several sites with direct-use nuclear material.
DOE also signed an agreement with GAN, the Russian nuclear regulatory agency, in June 1995 to cooperate on the establishment of a national nuclear materials control and accounting system in Russia. DOE's lab-to-lab program is also expanding to cover MINATOM nuclear weapons facilities. Both DOD's CTR-sponsored government-to-government program and DOE's lab-to-lab program were designed to demonstrate MPC&A technology at model facilities and facilitate the transfer of MPC&A improvements to other nuclear facilities in the NIS. The CTR-sponsored program works with the governments of Russia, Ukraine, Kazakstan, and Belarus to upgrade civilian MPC&A at selected facilities and develop regulations, enforcement procedures, and national material tracking systems. DOE's lab-to-lab program works directly with Russian nuclear facilities to upgrade their MPC&A controls. The two programs differ in their strategies to improve MPC&A in the NIS. The CTR-sponsored program is implemented by DOE through direct government-to-government agreements between DOD and the respective ministries responsible for atomic energy in Russia, Ukraine, Kazakstan, and Belarus. The agreements and their amendments specify the total amount of funds available to the programs in each country, identify the types of facilities that will participate, establish the roles and responsibilities of the participating organizations, and establish rights to audit and examination by U.S. officials. To the maximum extent feasible, the CTR-sponsored MPC&A programs use U.S. goods and services. DOE's lab-to-lab program, in contrast, is implemented directly with Russian nuclear facilities. DOE's national laboratories participating in the program sign contracts directly with their Russian laboratory counterparts, and the national laboratories can purchase goods and services from U.S., Russian, or other suppliers as needed. The program includes complete MPC&A upgrades at specific facilities or the rapid deployment of a particular MPC&A element, such as portal monitors, as needed. The CTR-sponsored government-to-government program is funding projects in Russia, Ukraine, Kazakstan, and Belarus for improving civilian nuclear material controls at selected model facilities and developing regulations, enforcement procedures, and national material tracking systems. Figure 3.1 shows the location of current CTR-sponsored government-to-government projects. In Russia, CTR funds have supported MPC&A upgrades for a low enriched uranium fuel fabrication facility and a training center. In Ukraine and Kazakstan, the program has funded site surveys at facilities that use direct-use material and lower priority material and assisted national authorities in establishing MPC&A regulations and reporting systems. In Belarus, the program has funded a site survey at a facility using direct-use material and is assisting the Belarussian government in establishing MPC&A regulations and a reporting system. Since the beginning of the CTR-sponsored program in 1991, DOD has budgeted $63.5 million for government-to-government MPC&A assistance, obligated $59.2 million, and spent $3.8 million (leaving $4.3 million budgeted but not yet obligated and $55.4 million obligated but not yet spent). The government-to-government program has provided working group meetings, site surveys, physical protection equipment, computers, and training for projects in Russia, Ukraine, Kazakstan, and Belarus. As of January 1996, none of the projects had been completed. Table 3.1 shows the distribution of CTR government-to-government program funds among Russia, Ukraine, Kazakstan, and Belarus.
By July 1995, the CTR-sponsored government-to-government program had started to improve physical protection at a facility with direct-use material. The slow pace of the government-to-government program in Russia can be attributed to two major obstacles. The first obstacle involved difficulties in negotiating agreements with MINATOM to obtain access to sites handling direct-use material. The United States proposed to MINATOM in March 1994 that demonstration projects be initiated at two HEU fuel fabrication facilities. The U.S. position was that including these facilities would support nonproliferation objectives. MINATOM rejected the U.S. proposal, saying that the inclusion of direct-use material was a sensitive and delicate issue and that experience in cooperating on low enriched uranium facilities would be needed before expanding to direct-use materials. As a result, the United States agreed to fund only one project in Russia, the low enriched uranium facility at Elektrostal. Recently, physical protection equipment was installed in the building housing the low enriched uranium fuel line. The same building also houses an HEU fuel fabrication line, which will be protected by this equipment. In the summer of 1994, the United States proposed a quick-fix approach to upgrade MPC&A at Russian facilities with direct-use material. Under this approach, the United States would provide expedited assistance to upgrade nuclear material security at key Russian nuclear facilities. Russian officials were not supportive of the approach, citing concerns about providing the United States access to sensitive nuclear facilities. The second obstacle was MINATOM's resistance to recognizing the role of GAN as a nuclear regulatory entity and GAN's own lack of statutory authority for oversight and enforcement of nuclear regulations. According to State Department officials, GAN was often at odds with MINATOM about the ongoing transition of regulatory authority to GAN. Also, GAN was unable to assert its regulatory role because it lacked legislative authority to regulate facilities with nuclear materials. In addition, despite a decree issued in September 1994 by the Russian President that named GAN as the lead agency in overseeing the security of nuclear materials in Russia and ordered MINATOM to work with GAN on this issue, disputes over authority between ministries remain unresolved. In Ukraine, Kazakstan, and Belarus, the CTR-sponsored government-to-government program is working to improve MPC&A systems at nuclear facilities, develop national MPC&A systems, and help these countries prepare for International Atomic Energy Agency safeguards pursuant to the Nuclear Nonproliferation Treaty. However, CTR-sponsored projects are just beginning, and improvements to controls at the first facility handling direct-use materials will not be completed until mid-1996 at the earliest. In Ukraine, the program has completed a site survey for the Kiev Institute of Nuclear Research, which uses direct-use material for fuel in a research reactor, and has started delivering access control equipment. The program is also in the process of conducting a site survey at the Kharkiv Institute of Physics and Technology, which also contains direct-use material. The program is also implementing an MPC&A project at the South Ukraine Power Plant, which is a lower priority site because it uses low enriched uranium for fuel.
Work at the Kiev Institute is expected to be completed by mid-1996, and work at the other sites is expected to be completed by the end of fiscal year 1997. The program has also established a computer network for the State Committee for Nuclear and Radiation Safety to facilitate the creation of Ukraine's national nuclear database. In Kazakstan, the focus of CTR-funded work has been on the Ulba Fuel Fabrication Plant, a low-priority site that produces low enriched uranium fuel elements for power reactors. The program also conducted site surveys for research reactor sites at Semipalatinsk and Almaty and for a breeder reactor at Aktau. DOE expects the program in Kazakstan to be completed by the end of 1997. In Belarus, the program is upgrading MPC&A systems for direct-use material at the Sosny Research Center in cooperation with Sweden and Japan, helping Belarus develop national regulations, and preparing the government for International Atomic Energy Agency safeguards. The program has completed a site survey and delivered access control equipment and interior sensors to Sosny. DOE expects the program in Belarus to be completed by the end of 1996. The lab-to-lab program is funding projects in Russia to improve MPC&A at sites within nuclear facilities, demonstrate MPC&A technologies, and deploy MPC&A equipment on an as-needed basis. Figure 3.2 shows the location of current lab-to-lab projects. The lab-to-lab program has completed pilot projects at the Kurchatov Institute in Moscow and the Institute of Physics and Power Engineering and has demonstrated a model material control and accounting system at Arzamas-16, a MINATOM nuclear weapons facility. In addition, the program has deployed nuclear portal monitors around a nuclear site at Chelyabinsk-70 (a second MINATOM nuclear weapons facility), the Kurchatov Institute, the Institute of Automatics, the Institute of Physics and Power Engineering, and Arzamas-16. Table 3.2 shows obligations and expenditures for the lab-to-lab program. The pilot project at the Kurchatov Institute improved MPC&A for a reactor site containing about 80 kilograms of direct-use material. The improvements included a new fence, sensors, a television surveillance system to detect intruders, a nuclear material portal monitor, a metal detector at the facility entrance, improved lighting, alarm communication and display systems, an intrusion detection and access control system in areas where nuclear material is stored, and a computerized material accounting system. Figure 3.3 shows the types of improvements we observed during our visit to the Kurchatov Institute reactor site in March 1995. At Obninsk, the program has upgraded MPC&A systems for a research reactor facility that houses several thousand kilograms of direct-use material. The program is providing a computerized material control and accounting system; entry control; portal monitoring systems; a vehicle monitor; bar codes to be attached to the discs; seals; and video surveillance systems. In addition, the program will assist the facility with taking a physical inventory and performing radiation measurements to quantify the amount of material present. The first phase of this project was completed in September 1995. A pilot demonstration project was also completed with Arzamas-16 in March 1995. This project demonstrated MPC&A technologies that could be applied to MINATOM nuclear weapons facilities and the CTR-sponsored fissile material storage facility.
The demonstration, using U.S.- and Russian-supplied equipment, consisted of computerized accounting systems; a system to measure nuclear materials in containers; access control systems; a monitored storage facility using cameras, seals, and motion detector equipment; and a system to search for and identify lost or stolen material. Although this project did not have a direct or immediate impact on protecting direct-use material, it has led to greater interest in participation in the lab-to-lab program by MINATOM defense facilities. Figure 3.4 shows U.S.- and Russian-supplied equipment that we observed in use during the March 1995 Arzamas-16 demonstration project. The lab-to-lab program is also rapidly deploying nuclear material portal monitors to Russian institutes, enterprises, and operating facilities. Starting in June 1995, the lab-to-lab program assisted Chelyabinsk-70 in deploying two nuclear material portal monitors and a vehicular portal monitor at the entrances to a key nuclear site. This effort was in response to increased concerns of Chelyabinsk officials about controlling access to the site. Nuclear material portal monitors have also been installed at an engineering test facility at Arzamas-16 and at one of the main entrances to the Institute of Automatics, where the monitors are undergoing testing and evaluation. The lab-to-lab program has also started delivering portal monitors to Tomsk-7. Program officials have signed a contract to install monitors at all portals at Tomsk-7. While the CTR-sponsored government-to-government program has gotten off to a slow start in controlling direct-use material, the U.S. government is making progress in expanding participation in the program to more facilities with direct-use material in the NIS. The lab-to-lab program is also expanding its outreach to additional facilities in Russia that require MPC&A upgrades, and DOE officials have been approached by the Russians to expand their efforts to other facilities. In January 1995, the United States and Russia agreed to expand the CTR-sponsored government-to-government program to facilities using direct-use material. An agreement was signed in June 1995 at the Gore-Chernomyrdin Commission meeting to add five direct-use facilities. These are high-priority facilities because they handle large amounts of direct-use material. They include the HEU fuel fabrication line at the Elektrostal Machine Building Plant, the Scientific Production Association Luch in Podolsk, the Scientific Research Institute for Nuclear Reactors in Dmitrovgrad, the Mayak Production Association, and the Institute of Physics and Power Engineering at Obninsk for a nuclear training laboratory and MPC&A improvements in addition to those underway in the lab-to-lab program. The lab-to-lab program plans to implement MPC&A projects at several MINATOM nuclear weapons complex facilities during fiscal year 1996 and continue work at the Kurchatov Institute and the Institute of Physics and Power Engineering at Obninsk. The lab-to-lab program has signed contracts to upgrade MPC&A systems at Tomsk-7, Chelyabinsk-70, and Arzamas-16. The program at Tomsk-7 includes deployment of nuclear material portal monitors, development of an automated material control and accounting system for an HEU facility, development of an access control system for a sensitive facility on site, and implementation of a rapid inventory system for uranium and plutonium in containers based on the technology demonstrated in fiscal year 1995 at Arzamas-16.
At Chelyabinsk-70, the program plans to enhance MPC&A at a reactor facility handling large amounts of direct-use material. The lab-to-lab program is also pursuing new initiatives with Russian nuclear weapons assembly and disassembly facilities and the Russian navy. In August 1995, representatives of the four Russian nuclear weapons assembly and disassembly facilities (Avangard, Penza-19, Sverdlovsk-45, and Zlatoust-36) met to discuss possible joint work to improve MPC&A at their facilities. U.S. technical experts have also met with officials from the Russian naval fuel sector and the Kurchatov Institute to discuss cooperative work to improve MPC&A at Russian naval facilities. The Russians have proposed a list of eight potential areas of cooperation for improving MPC&A at the naval facilities and have recommended that the joint work be conducted with the participation of the Kurchatov Institute. In fiscal year 1996, the United States substantially increased its MPC&A assistance program to include all facilities in the NIS known to contain direct-use nuclear material. With the increase, the executive branch has consolidated management and funding responsibilities for the DOD-sponsored CTR government-to-government program and DOE's lab-to-lab program within DOE. The expanded program faces several uncertainties involving the number of facilities to be assisted, costs, and ultimate effectiveness. DOE is developing responses to each of these issues. The executive branch has acted to address the problem of quickly improving MPC&A at NIS facilities by proposing a multiyear program to help the NIS strengthen their controls over direct-use materials. In September 1995, the President directed DOE to prepare a long-range plan to enhance nuclear material controls by the year 2002 at the 80 to 100 facilities in the NIS handling direct-use material. The President also transferred responsibility for funding and supporting new government-to-government projects, which had been the responsibility of the CTR program, from DOD to DOE in fiscal year 1996. DOE will also continue to manage the lab-to-lab program. DOE plans to request from Congress $400 million for the program over 7 years. DOE requested $70 million in fiscal year 1996 and plans to continue requesting $70 million per year through fiscal year 1999, then to reduce the request to $50 million a year through fiscal year 2001 and to $20 million in fiscal year 2002 (four years at $70 million, two years at $50 million, and a final year at $20 million sum to $400 million). DOE plans to work at up to 15 facilities per year. DOE and national laboratory officials estimate that the cost per facility will range from $5 million to $10 million, on the basis of DOD's and DOE's experiences to date working at a limited number of sites at several facilities in the NIS. As DOE prepares to undertake the much larger task of managing the expanded program, it will face several uncertainties that can affect program implementation. • As previously stated, DOE does not know how many facilities may ultimately require assistance. Currently, U.S. officials do not know where all the direct-use material is located. According to a DOE official, the United States may need to include as many as 35 additional facilities beyond the 80 to 100 facilities currently envisioned to achieve its goal of enhancing controls over all direct-use material. • DOE is uncertain about the total costs of the program.
The cost of the entire program could range from $400 million to over $1 billion, based on the estimates that the number of facilities requiring assistance could range from 80 to as many as 135 and that per project costs could range from $5 million to $10 million (80 facilities at $5 million each is $400 million; 135 facilities at $10 million each is $1.35 billion). Project estimates could vary as the program expands to different types of facilities, or if the NIS consolidate their stockpiles of direct-use material. • DOE may have difficulty directly verifying that U.S. assistance is used for its intended purposes because the Russians may limit direct measures that the United States may use at highly sensitive facilities. DOE plans to provide assistance to sensitive MINATOM defense facilities. While DOE is attempting to negotiate the use of direct measures, such as audit and evaluation procedures, wherever possible, the Russians may deny the use of such direct measures in certain facilities. DOE is currently developing responses that could address these program uncertainties, including developing a long-range plan, a consolidated cost-reporting system, and a flexible strategy for auditing and evaluating program progress. These responses had not been completed at the conclusion of our review. In September 1995, the President directed DOE to develop a long-term plan. According to a DOE official, the plan will include strategies, priorities, and costs for the work at the 80 to 100 facilities where the United States plans to provide assistance. The U.S. strategy is to gain commitments from the Russians for work at facilities where direct-use material is present: the MINATOM defense facilities, MINATOM civilian research facilities, civilian research institutes, and the naval propulsion sector. DOE's priorities are to (1) improve controls at facilities in the NIS handling direct-use material, (2) help the Russians develop and deploy current MPC&A equipment and technology to these facilities, and (3) assist the NIS in developing a national MPC&A regulatory system. DOE estimates that the fiscal year 1996 budget will be $40 million for the lab-to-lab program, $15 million for the government-to-government program, $10 million for cooperation with GAN, and $5 million for cooperation in securing Russian naval nuclear fuel (together accounting for the $70 million requested for fiscal year 1996). According to a national laboratory official, supporting plans are also being developed by the national laboratories. For example, the lab-to-lab program has developed a unified U.S.-Russian plan for work at MINATOM defense facilities. The plan provides objectives, priorities, a list of facilities to receive MPC&A enhancements, and approaches for providing assurances that equipment and other support are used for intended purposes and for protecting sensitive information. Similar plans for the MINATOM civilian sector and the independent nuclear facilities are also being developed. DOE is developing a centralized cost-reporting system for the government-to-government and lab-to-lab programs. Currently, DOE does not have a consolidated source of information on the obligations and expenditures for the two programs. While DOE program managers receive quarterly financial information from reports prepared by the national laboratories, there is no central point within DOE where data for the government-to-government program and the lab-to-lab program are aggregated. A centralized, consolidated cost-reporting system will provide DOE managers with current financial and project status information.
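As a sketch of what such a consolidated report might aggregate, the structure and field names below are illustrative assumptions; the dollar figures are the program totals cited elsewhere in this report, in millions. A central system would roll up each program's obligations and expenditures and expose the gap between them.

```python
# Minimal sketch of a consolidated cost report across the two programs
# (field names illustrative; figures are the totals cited in this report).

from dataclasses import dataclass

@dataclass
class ProgramFinances:
    name: str
    obligated: float   # funds committed under contracts and agreements ($M)
    expended: float    # funds actually disbursed ($M)

    @property
    def unliquidated(self) -> float:
        """Obligated but not yet spent: the gap a central report would track."""
        return self.obligated - self.expended

programs = [
    ProgramFinances("CTR government-to-government MPC&A", obligated=59.2, expended=3.8),
    ProgramFinances("DOE lab-to-lab", obligated=17.0, expended=14.0),
]

for p in programs:
    print(f"{p.name}: ${p.unliquidated:.1f}M obligated but unspent")
print(f"Consolidated: ${sum(p.obligated for p in programs):.1f}M obligated, "
      f"${sum(p.expended for p in programs):.1f}M expended")
```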
Such a system would be useful in responding to changes in program requirements and costs and in revising program budget requests to reflect operating experiences at facilities in the NIS. Because the United States places a high priority on preventing diversion of nuclear material, the executive branch has agreed, in principle, on the need for flexibility in pursuing adequate arrangements for ensuring that U.S. assistance is used as intended. The arrangements include formal audit and evaluation rights negotiated as part of government-to-government agreements and flexible arrangements developed by the national laboratories to be applied to the lab-to-lab program. Under government-to-government agreements, which provide basic rights and responsibilities for the government-to-government program, the United States is allowed to conduct audits and examinations during the period of the agreements upon 30 days' advance notice. These agreements give U.S. personnel the right to visit sites receiving U.S. assistance. DOD and MINATOM signed an additional agreement on Administrative Arrangements for the Conduct of Audits and Examinations of Assistance. Using these arrangements, DOD conducts audits and examinations of all CTR-funded assistance, including MPC&A assistance. In contrast, the lab-to-lab program, which works directly with Russian nuclear facilities, is not covered by the formal government-to-government agreement with Russia. However, the annex to the lab-to-lab program plan outlines guidance for ensuring that assistance is used as intended. The annex specifies various management controls, such as making progress payments to Russian laboratories only for specific delivered products, and only after U.S. laboratory officials have evaluated each product against the contract, to ensure that payments to Russian laboratories are used only for their intended purposes. The annex also provides a series of direct and indirect measures to determine if U.S. assistance is improving nuclear material controls. Measures of program success include tracking the amount of nuclear material covered by strengthened safeguards, which can be assessed directly through visits to facilities; exchanges of photographs, video tapes, records, and documents to show implementation of an improved system; and more limited access to the facilities on a controlled basis. The Departments of State and Energy generally agreed with the report. Their comments are presented separately in appendixes I and II. The Department of State provided editorial comments, which have been incorporated in the text as appropriate. DOD officials also agreed with the facts as presented in this report, but expressed concern about how the report portrayed the relative success of the government-to-government and lab-to-lab programs. These officials stated that the programs are complementary approaches to achieving the goal of improving controls and accountability over direct-use nuclear material in the NIS. We agree and have modified the report accordingly.

Pursuant to a congressional request, GAO reviewed U.S. efforts to strengthen controls over nuclear materials in the newly independent states of the former Soviet Union, focusing on the: (1) nature and extent of problems with controlling direct-use nuclear materials in the newly independent states; (2) status and future prospects of U.S. efforts in Russia, Ukraine, Kazakstan, and Belarus; and (3) executive branch's consolidation of U.S. efforts in the Department of Energy (DOE).
GAO found that: (1) the Soviet Union produced about 1,200 metric tons of highly enriched uranium and 200 metric tons of plutonium; (2) much of this material is outside of nuclear weapons and is a highly attractive target for theft, and the newly independent states may not have accurate and complete inventories of the material they inherited; (3) with the breakdown of Soviet-era material protection, control, and accounting (MPC&A) systems, the newly independent states may be less able to counter the increased threat of theft; (4) nuclear facilities cannot quickly detect and localize nuclear material losses or detect unauthorized attempts to remove nuclear material; (5) while there is not yet direct evidence of a black market for nuclear material in the newly independent states, the seizures of direct-use material in Russia and Europe have increased concerns about theft and diversion; (6) U.S. efforts to help the newly independent states improve their MPC&A systems for direct-use material started slowly; (7) the Department of Defense's (DOD) government-to-government Cooperative Threat Reduction (CTR) program obligated $59 million and spent about $4 million from fiscal years (FY) 1991 to 1995 for MPC&A improvements in Russia, Ukraine, Kazakstan, and Belarus, and provided working group meetings, site surveys, physical protection equipment, computers, and training; (8) the program began to gain momentum in January 1995 when CTR program and Russian Ministry of Atomic Energy (MINATOM) officials agreed to upgrade nuclear material controls at five high-priority facilities handling direct-use material; (9) DOE and Russia's nuclear regulatory agency have also agreed to cooperate on the development of a national MPC&A regulatory infrastructure; (10) DOE's lab-to-lab program, which obligated $17 million and spent $14 million in FY 1994 and 1995, has improved controls at two zero-power research reactors and begun providing nuclear material monitors to several MINATOM defense facilities to help them detect unauthorized attempts to remove direct-use material; (11) in FY 1996, the program is implementing additional projects in MINATOM's nuclear defense complex; (12) the United States expanded the MPC&A assistance program in FY 1996 to include all known facilities with direct-use material outside of weapons; (13) management and funding for the expanded program were consolidated within DOE, which plans to request $400 million over 7 years for the program; and (14) DOE is responding to uncertainties involving the program's overall costs and U.S. ability to verify that assistance is used as intended by developing a long-term plan and a centralized cost-reporting system and implementing a flexible audit and examination program.
DOD defines force protection as “actions taken to prevent or mitigate hostile actions against Department of Defense personnel (to include family members), resources, facilities, and critical information.” Our review concentrated mostly on the physical security and related aspects of force protection that include measures to protect personnel and property and encompass consequence management, intelligence, and critical infrastructure protection. We have identified a risk management approach used by DOD to defend against terrorism that also has relevance for the organizations responsible for security at commercial seaports. This approach can provide a process to enhance preparedness to respond to terrorist attacks or other emergencies, whether natural or man-made (intentional or unintentional). The approach is based on assessing threats, vulnerabilities, and criticalities (the importance of critical infrastructure and functions). Threat assessments identify and evaluate potential threats on the basis of factors such as capabilities, intentions, and past activities. These assessments represent a systematic approach to identifying potential threats before they materialize. However, even if updated frequently, threat assessments may not adequately capture all emerging threats. The risk management approach therefore uses vulnerability and criticality assessments as additional input to the decision-making process. Vulnerability assessments identify weaknesses that may be exploited by identified threats and suggest options that address those weaknesses. For example, a vulnerability assessment might reveal weaknesses in a seaport’s security systems, police force, computer networks, or unprotected key infrastructure such as water supplies, bridges, and tunnels. In general, teams of experts skilled in areas such as structural engineering, physical security, and other disciplines conduct these assessments. Criticality assessments evaluate and prioritize important assets and functions in terms of factors such as mission and significance as a target. For example, certain power plants, bridges, computer networks, or population centers might be identified as important to the operation of a seaport. Criticality assessments provide a basis for identifying which assets and structures are more important to protect from attack. These assessments also help determine mission-essential requirements to better prioritize limited force protection resources while reducing the potential for expending resources on lower priority assets. In the event of a major military mobilization and overseas deployment, such as Operation Desert Shield, a large percentage of U.S. forces (equipment and other materiel) would be sent by sea through a number of commercial seaports in the United States to their respective areas of operations. To accomplish this, DOD would use several shipping methods, including government-owned and maintained reserve sealift ships and ships operated or chartered by the Military Sealift Command. Figure 1 shows two reserve sealift ships berthed at a commercial seaport. The military also uses commercial seaports for deployments such as those to operations in the Balkans. The Departments of Defense and Transportation have identified 17 seaports on the Pacific, Atlantic, and Gulf Coasts (13 commercial ports, 1 military port, and 3 military ammunition ports) as “strategic,” meaning that they are necessary for use by DOD in the event of a large-scale military deployment.
Because the security activities that DOD may conduct outside its installations are limited, it must work closely with a broad range of federal, state, and local agencies to ensure that adequate force protection measures exist and are executed during deployments through strategic seaports. Force protection responsibilities for DOD deployments through commercial seaports are divided among a number of DOD organizations, including the U.S. Transportation Command and its components (particularly the Military Traffic Management Command and the Military Sealift Command), the U.S. Army Forces Command, and individual deploying units. Port Readiness Committees at each strategic port provide a common coordination structure for DOD, the Coast Guard, and other federal, state, and local agencies at the port level and are the principal interface between DOD and other officials at the ports during the movement of military equipment. The Port Readiness Committees are focused largely on preparing for potential military movements through a port and not on day-to-day security concerns at the port. The issue of security at the nation’s seaports has been the subject of a recent major study, as has the broader issue of homeland security. In fall 2000, the Interagency Commission on Crime and Security in U.S. Seaports reported that security at seaports needed to be improved in a number of areas, including assessments of threats, vulnerabilities, and critical infrastructure at ports; coordination and cooperation among agencies; and establishment of guidelines for commercial facilities handling military cargo. In February 2001, the Commission on National Security/21st Century (commonly referred to as the Hart-Rudman Commission) reported that threats such as international terrorism would place the U.S. homeland in great danger. In addition to recommending national action, the commission urged DOD to pay closer attention to operations within the United States. The security environment at strategic seaports is uncertain because comprehensive assessments of threats, vulnerabilities, and port infrastructure and functions have not been completed. Recent efforts by the Coast Guard, the Transportation Security Administration, and other agencies at the ports have begun to address several important security issues, and maritime security legislation before the Congress may assist these efforts. Further, proposed legislation may provide a framework for seaport organizations to improve the coordination and dissemination of threat information. There is a wide range of vulnerabilities at strategic seaports, including critical infrastructure such as bridges and refineries in close proximity to open shoreline, shipping containers with unknown contents, and an enormous volume of foreign and domestic shipping traffic. Figure 2 illustrates typical commercial port infrastructure and operations. Many of the organizations responsible for seaport security do not have the resources (such as trained personnel, equipment, and funding) necessary to mitigate all vulnerabilities. To determine how best to allocate available resources and address security at seaports, it is vital that the responsible agencies follow a risk management approach that includes assessments of threats, vulnerabilities, and critical infrastructure and functions. The results of these assessments should then be used to make risk-based decisions about security planning and actions.
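One way to see how these assessments combine into risk-based decisions is a minimal scoring sketch. The multiplicative scheme, the assets, and the ratings below are illustrative assumptions for exposition, not a methodology actually used by DOD or the port agencies.

```python
# Illustrative risk ranking from threat, vulnerability, and criticality
# assessments (all assets, ratings, and the 1-5 scale are hypothetical).

assets = {
    #                     threat, vulnerability, criticality
    "fuel pier":           (3,     4,             5),
    "harbor bridge":       (2,     3,             5),
    "container terminal":  (4,     4,             3),
    "admin buildings":     (2,     2,             1),
}

def risk_score(threat: int, vulnerability: int, criticality: int) -> int:
    # One common convention treats risk as multiplicative, so an asset that
    # is critical but neither threatened nor vulnerable still ranks low.
    return threat * vulnerability * criticality

# Rank assets so limited force protection resources go to the highest risks.
for name, ratings in sorted(assets.items(), key=lambda kv: risk_score(*kv[1]), reverse=True):
    print(f"{name}: risk {risk_score(*ratings)}")
```

Whatever the particular scoring convention, the point of the approach is the same: the three assessments together, rather than threat information alone, drive where protection resources are spent.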
Since September 11, the organizations responsible for security at strategic seaports have increased emphasis on security planning. They now recognize that planning must include the protection of critical seaport infrastructure and assets that have not generally been considered vulnerable. Port authority officials stated that increased security planning has led to improvements in physical security, such as higher fences, more security personnel, and better coordination with local law enforcement and other agencies. The Coast Guard has moved forward with broad actions and has redirected resources toward security planning improvements. However, in their planning efforts, the organizations at the ports we visited applied the elements of risk management differently. At only one of six ports we visited were the results of threat, vulnerability, and criticality assessments incorporated into a seaport security plan that included all relevant agencies. The Port Mobilization Master Plan developed by the Port Readiness Committee at this port employs a risk-based process and systematically identifies the mission, responsibilities, and functional relationships of each activity or agency involved in supporting a military deployment through the port. Specific weaknesses in the assessment process used at ports we studied include the following: • Individual organizations at the seaports conducted separate vulnerability assessments that were not coordinated with those of other agencies and were not based on standardized approaches. The Coast Guard has taken the lead in developing a standard methodology for comprehensive portwide vulnerability assessments (also called port security assessments) that it plans to complete at 50 major ports, including all strategic seaports. • Assessments of the criticality of seaport infrastructure were not done at all the ports we visited prior to September 11. The Coast Guard has since addressed this shortcoming by conducting assessments of high-risk infrastructure at all major ports. It coordinated the assessments with commercial facilities at the ports. Criticality of seaport assets and functions will also be incorporated into the port security assessments. • In some cases, threat assessment information received by agencies at the ports is based on higher-level regional assessments that do not focus on the local port facility. These regional assessments, while helpful in providing a broader view of the security environment, do not provide site-specific local threat information to the port. • Agencies involved with seaport security have different concepts of how threat assessments should be developed and the degree to which threat information should be shared and disseminated. Some agencies have not traditionally shared threat information as widely as may be necessary for comprehensive security measures at seaports. In addition to these specific weaknesses, we found that there is no single mechanism (such as a working group or committee) at the seaports we visited to analyze, coordinate, and disseminate information on a routine basis on the broad range of threats at each port. Most threat information at the ports was coordinated on an informal basis, such as through personal contacts between law enforcement individuals and those at other agencies.
The lack of such a mechanism compounds the already difficult task of protecting deploying military forces and increases the risk that threats—both traditional and nontraditional ones—may not be recognized or that threat information may not be communicated in a timely manner to all relevant organizations. Currently, interagency bodies at or near the ports, such as port readiness committees, joint terrorism task forces, or the newly formed antiterrorism task forces, do not routinely coordinate threat information focused solely on the ports. The port readiness committees were designed to prepare commercial ports to conduct military movements. The task forces were designed to focus on threat information but on a regional rather than a port level. The need for efficient coordination of threat information has been amply documented and recognized, and there are examples of improved coordination efforts. The Interagency Commission on Crime and Security in U.S. Seaports noted in 2000 the importance of interagency threat coordination. The commission said that officials at seaports need a means to analyze, coordinate, and disseminate information on the broad range of threats they face. This includes information on ships, crews, and cargo and information on criminal, terrorist, and other threats with foreign and domestic origins. Although the commission did not recommend centralizing threat information distribution into a single agency or regulating dissemination procedures at seaports, it did recommend improvements in integrating threat information systems and improved coordination mechanisms for law enforcement agencies at the seaport level. Furthermore, the Coast Guard recognizes that agencies involved with seaport security are currently unable to adequately analyze, share, and exploit available threat information, and it also recognizes that asymmetric military and terrorist threats have a natural gateway into America via its ports. In response, the Coast Guard has developed a “maritime domain awareness” concept that emphasizes a risk management approach for preventing or mitigating both traditional and nontraditional threats through the analysis and dissemination of threat information. The concept involves being knowledgeable of all activities and elements in the maritime domain that could represent threats to the safety, security, or environment of the United States or its citizens. Through the timely delivery of processed information, drawn from all available sources, to the appropriate civilian or military authorities, effective actions can be taken with limited resources. Additionally, the maritime domain awareness concept allows the Coast Guard and other relevant agencies to incorporate nontraditional threat information, such as unintentional biological hazards in empty cargo containers or impending weather hazards, into actionable intelligence. Both of these issues can constitute potential threats to a port and its operation. In commenting on a draft of this report, Transportation Security Administration officials agreed that the coordination and dissemination of threat information at the port level is an issue that needs to be addressed. They noted that the Transportation Security Administration is overseeing studies (as part of “Operation Safe Commerce”) aimed at identifying potential threats and risk mitigation techniques that will contribute to meeting this goal.
Finally, as we have previously reported, DOD uses threat working groups at its installations as a forum to involve installation force protection personnel with local, state, and federal law enforcement officials to identify potential threats to the installation and to improve communication between these organizations. These working groups help coordinate as much information as possible on a broad range of potential threats. Given the limited information available on threats posed by terrorist groups or individuals, such a mechanism assists the installation commander and local authorities in gaining a more complete picture of internal and external threats on a more continuous basis over and above what is provided by an annual threat assessment. Since the September 11 attacks, the Coast Guard and other agencies at ports have made efforts to improve risk management and security measures. The Coast Guard, traditionally a multimission organization, has made a significant shift in operational focus toward seaport security. In so doing, the Coast Guard, in the months immediately following September 2001, diverted resources from other missions such as drug interdiction but has since restored some of its effort in those areas. Examples of additional recent efforts by the Coast Guard and other agencies include formation of Coast Guard maritime safety and security teams based at selected ports to assist in providing port security personnel and equipment; Coast Guard escorts or boarding of high-risk ships, including cruise ships; Coast Guard escorts for naval vessels; establishment and enforcement of new security zones and increased harbor security patrols (figure 3); and port authority cost estimates for improving facility security and interim security improvement measures. In commenting on a draft of this report, Transportation Security Administration officials indicated that they are taking initial steps toward accomplishing seaport security goals by awarding approximately $217 million in grants (funded through both regular and emergency appropriations) to public and private entities at the ports for initial security assessments, preliminary security improvements, and port incident response training. Legislation on maritime security before the Congress (as of October 22, 2002) may promote and enhance these seaport security efforts. Some of the major provisions include vulnerability assessments to be conducted at ports; establishment of port security committees at each port, with broad representation by relevant agencies, to plan and oversee security measures; development of standardized port security plans; background checks and access control to sensitive areas for port workers; and federal grants for security improvements. On the basis of our discussions with agency officials at the ports we visited, we believe that if enacted and properly implemented, these and other provisions of the maritime security legislation should assist officials in addressing many of the weaknesses we have identified. For example, comprehensive vulnerability assessments and the proposed standardized security plans could provide a more consistent approach to identifying and mitigating security weaknesses. In providing for port security committees and interagency coordination, the legislation would also provide a framework for organizations at seaports to establish a mechanism to coordinate, analyze, and disseminate threat information at the port level.
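To make the idea of such a coordination mechanism concrete, the sketch below shows agencies submitting reports to a single port-level hub that disseminates them to every registered organization. All names, structures, and report contents are hypothetical illustrations, not a design proposed by GAO or any of the agencies discussed.

```python
# Minimal sketch of a port-level threat information clearinghouse
# (all names and structures hypothetical).

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ThreatReport:
    source: str      # e.g., "Coast Guard", "local police"
    category: str    # traditional or nontraditional: "terrorism", "weather", "cyber"
    detail: str

@dataclass
class PortThreatClearinghouse:
    subscribers: List[Callable[[ThreatReport], None]] = field(default_factory=list)

    def register(self, agency_handler: Callable[[ThreatReport], None]) -> None:
        self.subscribers.append(agency_handler)

    def submit(self, report: ThreatReport) -> None:
        # Disseminate each report to every relevant organization as it arrives,
        # replacing informal personal contacts with a routine channel.
        for notify in self.subscribers:
            notify(report)

hub = PortThreatClearinghouse()
hub.register(lambda r: print(f"[port authority] {r.category}: {r.detail}"))
hub.register(lambda r: print(f"[Port Readiness Committee] {r.category}: {r.detail}"))
hub.submit(ThreatReport("Coast Guard", "nontraditional", "storm surge expected at pier 7"))
```

The essential design choice is that dissemination is routine and port-wide rather than dependent on who happens to know whom, which is the gap the informal arrangements described above leave open.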
There may be challenges, however, to implementing the maritime security legislation, including uncertainty about the amount and sources of funds needed to address security needs at seaports. We recently reported on these and other challenges to implementing the provisions of this legislation and the establishment of a new Department of Homeland Security. In commenting on a draft of this report, Coast Guard officials reported that notwithstanding the status of the proposed legislation, port security committees have already been established at some major ports and that the Coast Guard is preparing a nationwide policy to delineate the purpose and composition of these committees. Coast Guard officials believe that in addition to consideration of vulnerabilities and security planning, the port security committees, as currently envisioned, may provide a more effective mechanism for threat information coordination. During our review, we identified two significant weaknesses in DOD’s force protection process. First, DOD lacks a central authority responsible for overseeing force protection measures of DOD organizations while carrying out the various domestic phases of military deployments to and through U.S. seaports. As a result, potential force protection gaps and weaknesses requiring attention and action might be overlooked. Second, there are instances during some phases of these deployments when DOD transfers custody of its military equipment to nongovernment entities. At these times, the equipment could fall into the hands of individuals or groups whose interests are counter to those of the United States. Deploying units traditionally focus their force protection efforts primarily on their overseas operations. Before they arrive in an overseas region, the units are required to submit force protection plans to the unified combatant commanders, who are responsible for force protection of all military units in their regions, with the exception of DOD personnel assigned to the Department of State. The tactics, techniques, and procedures in the units’ plans must match the guidance developed by the unified commander, who coordinates and approves the individual plans. This allows the commander to ensure that a unit’s plan takes into account all current threats that could affect the mission and to accept or mitigate any security risks that arise. The situation for the domestic phases of overseas deployments is different: there is no designated commander with centralized force protection responsibilities similar to those of the overseas unified combatant commander. During the domestic phases of a deployment, this limits DOD’s ability to coordinate individual force protection plans, identify gaps that may exist, and mitigate the identified risks. The one coordination mechanism that is in place—the Port Readiness Committee—is focused largely on port operations and at this time does not coordinate all phases of a deployment from an installation through the port. Figure 4 illustrates the domestic phases of a deployment and key organizations responsible for force protection. In the deployments we reviewed, service guidance (such as Army major command force protection operations orders) and DOD antiterrorism standards, particularly those that emphasize the elements of risk management, were not always followed in all phases of a deployment from an installation through a port.
For example, the Military Traffic Management Command’s transportation units recognized the vulnerability of seaport operations and prepared security plans for deployment operations at the ports that were based on assessments of threats, vulnerabilities, and critical infrastructure. The transport of military equipment to the port by commercial carrier was not always supported by such detailed plans and assessments. In contrast, we found that when a military unit travels by road to a seaport in its own convoy, it generally follows exhaustive planning and risk management measures. In discussing the absence of a focal point for coordinating and executing force protection measures for the domestic phases of military deployments, DOD officials indicated that the recently established U.S. Northern Command may serve as such a coordinating mechanism. Additionally, in commenting on a draft of this report, DOD officials noted that the principal defense guidance on military transportation issues is in the process of being revised to incorporate force protection guidance. During deployments from domestic installations through commercial seaports, there are three phases in which DOD either transfers custody of its equipment to nongovernment persons (in some cases foreign nationals) or does not have adequate information about who is handling its equipment, as follows: • Private trucking and railroad carriers transport equipment and cargo from military installations to seaports. • Civilian port workers handle and load equipment onto ships. • Private shipping companies with civilian crews sometimes transport DOD equipment overseas. The four deployments we reviewed from three military installations in 2001 involved the use of road and rail contract carriers transporting equipment from the installation to a port of embarkation. Contract carriers are required to provide security for the equipment they transport, including sensitive items. For example, contract carriers are required to provide their own security at railroad switching yards, rest areas, overnight stops, and along the entire route whenever they transport sensitive equipment. Although we did not review the steps taken by DOD to evaluate the contractors’ security measures, the transfer of accountability to these nongovernmental agents creates a gap in DOD’s oversight of its assets between installations and ports. Once equipment arrives at a commercial seaport, it comes under the control of the military units responsible for managing the loading process. However, civilian port workers, stevedores, and longshoremen—who undergo limited screening and background checks by port authorities or terminal operators—handle military equipment and cargo, as well as the loading and unloading of ships used to transport the equipment overseas. This was the case in all the deployments we reviewed. In all cases, the stevedores or longshoremen were in the same labor pool as the one used for commercial port operations. While DOD officials have not identified port workers as a particular threat, they are concerned that the lack of information on the background of individuals handling military equipment increases potential risk. Organizations at some of the ports we visited are now implementing or reviewing efforts to increase screening of port workers, and the maritime security legislation currently before the Congress includes provisions for background checks and access control for port workers. These measures, if approved and properly implemented, may help address this issue.
In commenting on a draft of this report, Transportation Security Administration officials acknowledged the problems posed by the lack of screening for port workers and indicated that they plan to study and eventually issue nationwide standards for credentialing port workers. DOD also transfers custody of its equipment when the equipment is placed aboard a commercial ship for transport overseas. We reviewed four major overseas deployments from three military installations during calendar year 2001 that involved about 6,550 tons of military equipment and supplies. Although these four deployments are not representative of all DOD deployments conducted in 2001, they do illustrate the use of foreign-owned commercial vessels by DOD. In commenting on a draft of this report, DOD officials stated that about 43 percent of cargo shipped overseas in 2001 as part of deployments involving major equipment in support of overseas operations was carried on foreign-flagged ships. As indicated in table 1, most of the ships for the deployments we reviewed were both foreign-owned and foreign-crewed. In addition to transferring custody over its assets to non-DOD personnel, DOD did not generally provide security forces aboard these vessels. Several of the ships used in the deployments we reviewed did have DOD maintenance personnel aboard, but the ship manifests did not indicate that armed DOD personnel were aboard as a security force. The Military Sealift Command reviews charter vessel crew lists to determine whether any crewmembers are known security threats. Some of the materiel transported by these vessels included sensitive and mission-essential items. Table 2 provides examples of equipment carried aboard foreign-owned and foreign-crewed ships for the deployments we reviewed. When DOD relinquishes control over its equipment, it relies on nongovernment third parties to protect its assets. Placing military equipment outside DOD’s control also complicates the steps needed to mitigate the higher risk and could prevent military units from performing their intended missions. An example of the dangers of such loss of control occurred in summer 2000. While in the North Atlantic, the captain of a commercial vessel carrying Canadian military equipment and three Canadian Forces personnel from the Balkans refused to proceed to the ship’s destination port in Canada after a dispute over payment to the vessel’s owner. The vessel, GTS Katie, was owned by a U.S. company but registered in St. Vincent and the Grenadines and crewed by non-U.S. citizens. Alarmed at the loss of control over its equipment, including sensitive items, the Canadian government was compelled to board the Katie with a contingent of Canadian Forces naval personnel from a nearby warship. The vessel was then brought safely into a Canadian port. The Canadian Defense Minister explained that the loss of control over military equipment compromised Canada’s ongoing military operations and the ability to undertake new ones. Similarly, when the third parties to whom DOD relinquishes control of its equipment include foreign nationals, there may be an increased risk of the equipment being tampered with, seized, or destroyed by individuals or groups whose interests run counter to those of the United States, and an increased chance that those weapons or equipment might be used against military or civilian targets. During our review, officials from several military commands expressed concern about placing military equipment aboard ships that are outside DOD control.
DOD officials told us that the reasons for the use of commercial contract carriers include, among others, their economy and efficiency compared with government-owned and -operated vessels and the adequacy and availability of the U.S.-flagged merchant marine. In commenting on a draft of this report, Maritime Administration officials agreed with our concerns about the use of foreign ships and crews to transport sensitive military equipment and reiterated their interest in increasing the number of U.S.-flag vessels appropriate for DOD use. They indicated that the shortage of appropriate U.S.-flagged ships will be exacerbated by Military Sealift Command plans to terminate existing charters for some U.S.-flag vessels.

The events of September 11 highlighted the vulnerability of the U.S. homeland to unconventional attack, and the resulting new security environment warrants that more attention be paid to the domestic phases of military deployments. Since September 11, DOD and the organizations responsible for seaport security have clearly recognized the need for increased vigilance at home during the domestic phases of a military deployment, and this recognition provides an opportunity to improve seaport security in a systematic and effective manner. However, the inadequate assessment of threats and vulnerabilities and the lack of comprehensive security plans prevent organizations at seaports and DOD from thoroughly analyzing the security environment at the ports. This hampers the identification and prioritization of requirements for the protection of critical assets and compounds an already difficult task of protecting deploying DOD forces. If enacted and properly implemented, however, pending maritime security legislation would address most of these issues. We are therefore making no recommendations in this area.

The absence of a mechanism at the strategic seaports for coordinating and disseminating comprehensive threat information increases the risk that threats, both traditional and nontraditional, will not be identified and appropriately communicated to all relevant organizations. If established at the port level, such a mechanism could provide a formal, rather than an informal and ad hoc, process for coordinating information, and it could focus on port-specific threats rather than a regionwide perspective. A central coordination mechanism could also provide a means to analyze threats on a continuous basis.

Without a DOD authority or organization to coordinate force protection planning and execution for the domestic phases of DOD deployments to and through strategic seaports, potential gaps in force protection may go unnoticed, increasing the risk to DOD operations and equipment. Having such an authority would not only reduce those risks but also provide oversight to ensure that risk management and antiterrorism standards are consistently applied through all phases of a deployment from an installation through a port.

When military equipment is entrusted to non-DOD personnel, with limited DOD control over the equipment, there is a greater risk that it could be tampered with, seized, or destroyed. While we recognize that there are times during a deployment when DOD will relinquish direct control of its equipment, the new security environment warrants that DOD re-evaluate its current policies and procedures to ensure that appropriate security measures are applied during these times.
Weaknesses in DOD’s force protection approach along with uncertainties in the security environment at strategic seaports result in increased risks that military operations could be disrupted, successful terrorist attacks might occur, or sophisticated military equipment might be seized by individuals or organizations whose interests run counter to those of the United States. To improve the information available to develop effective seaport security measures, we recommend that the Secretary of Transportation identify and direct the appropriate transportation agency to develop a mechanism at the port level to compile, coordinate, analyze, and disseminate threat information on a real-time basis to all relevant organizations. Such a mechanism might be similar to DOD’s threat working groups but with broader membership or be part of an existing coordinating body (such as the proposed port security committees or the joint terrorism task forces). Whether established as a new entity or as a modification of an existing coordinating body, this mechanism should include representatives from a broad range of federal, state, and local agencies. It should also include in its assessment process nontraditional threats such as natural emergencies and information technology attacks. To improve DOD’s oversight and execution of force protection for deployments to and through domestic strategic seaports, we recommend that the Secretary of Defense designate a single authority (such as the recently established U.S. Northern Command) to coordinate and execute force protection planning for deployments of units from installations in the United States through seaports and until ships enter the destination areas of operation (this responsibility would be similar to that of the overseas unified combatant commands for their respective areas of operation) and direct the single coordinating authority (once established), along with the U.S. Transportation Command, to develop and implement measures to maintain greater security over equipment transported by non-DOD carriers. DOD agreed with the need for a single DOD authority to coordinate and execute force protection planning for deployments from installations in the United States through seaports and until ships enter the destination areas of operation. In commenting on this report, DOD stated that the recently established U.S. Northern Command will work closely with the U.S. Transportation Command to examine security for deployments through domestic seaports. DOD also agreed with the need for measures to maintain greater security over equipment transported by non-DOD carriers. In its comments, however, DOD stated that it has for decades relied on the commercial sector to provide a large portion of the nation’s strategic sealift capabilities in both peacetime and during contingencies and that it is not cost effective to use government-owned sealift vessels for routine cargo movements or force rotations of the type included in GAO’s analysis. Nonetheless, DOD stated that the U.S. Transportation Command and the new U.S. Northern Command will continue to seek ways to improve the security of DOD cargo transported via commercial carrier, including the use of satellite tracking of cargo and vessels and placing security personnel aboard those ships. On those occasions when DOD transfers custody of its equipment to non-DOD carriers, the kinds of additional measures DOD discussed should help improve the overall security of sensitive DOD cargoes. 
DOD’s written comments are included in their entirety in appendix II. In addition, DOD officials suggested a number of technical clarifications and corrections, which we have incorporated into this report where appropriate. In oral comments on a draft of this report, Department of Transportation officials generally agreed with the findings, conclusions, and recommendations. They also provided additional information and suggested a number of technical clarifications and corrections, which we have incorporated into this report where appropriate. Transportation officials discussed several new and ongoing efforts affecting seaport security by the newly established Transportation Security Administration. Among other initiatives, these include measures for seaport security grants, studies on credentialing port workers, and a study on developing a threat assessment center. These initiatives are funded through regular and emergency appropriations for fiscal year 2002. Additionally, proposed appropriations for fiscal year 2003 would provide further funding if enacted into law. If properly implemented, these initiatives should contribute to the goal of improved seaport security. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Secretaries of Defense and Transportation and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no cost on the GAO Web site at http://gao.gov. If you or your staff have any questions regarding this report, or wish to discuss this matter further, please contact me at (202) 512-6020. Key contributors are acknowledged in appendix III. To analyze the security environment at strategic seaports we reviewed security planning and procedures during the conduct of site visits at six selected commercial seaports and two military-owned ammunition ports. These six commercial ports included ports that regularly support DOD deployments as well as those that are used less frequently. We selected ports on the West Coast, East Coast and on the Gulf of Mexico. We visited two of the three dedicated ammunition ports identified by DOD, one on each coast. For security reasons, we do not discuss location-specific information in this report. At these selected ports we reviewed documents, observed security measures, and discussed port operations, security planning, coordination mechanisms, specific vulnerabilities, mitigation plans, and resource issues with government and nongovernment officials. Among the organizations we visited during our seaport visits were the Coast Guard, the U.S. Maritime Administration, the Federal Bureau of Investigation, the U.S. Customs Service, port authorities, and local law enforcement agencies. Although the information we obtained at these locations could not be generalized to describe the environment DOD could expect at all seaports, it provides insight into what DOD could expect to encounter at domestic seaports. We also discussed these issues with officials at Coast Guard headquarters and the U.S. Maritime Administration, both in the Department of Transportation in Washington, D.C. To analyze DOD’s process for securing deployments of military equipment through strategic seaports we examined force protection plans, procedures, and coordination measures for four deployments conducted in 2001. 
We selected these deployments based on information provided by the U.S. Army Forces Command. The command provided a list of deployments involving units moving from within the continental United States to an overseas location during calendar year 2001 that required the use of sealift to transport military equipment. We selected four deployments originating from three installations in calendar year 2001 because they represented about 65 percent of the total tonnage of equipment for all deployments to major DOD contingency operations during that period. An additional factor in our selection was the geographic dispersion of the domestic seaports used for the deployments. Our review of force protection procedures included the guidance and criteria for force protection for deployments, the extent to which these are clearly defined and carried out, and the extent to which DOD works with other federal, state, and local agencies to plan and carry out force protection measures. We also reviewed information from the Military Sealift Command and the Military Traffic Management Command on the ships used to transport equipment for these deployments and the equipment they carried.

We interviewed officials from the following organizations:
- Office of the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict, Washington, D.C.
- U.S. Transportation Command, Scott Air Force Base, Ill.
- Military Traffic Management Command, Fort Eustis, Va.
- Military Sealift Command, Washington, D.C.
- U.S. Central Command, Tampa, Fla.
- U.S. Army Forces Command, Atlanta, Ga.
- Army and Navy force protection offices, Washington, D.C.
- Transportation and force protection officials at the installation and unit levels for Army and Marine Corps units.

To examine DOD force protection efforts, we conducted site visits at the three military installations that were the origins of the four 2001 deployments in our review. During these site visits, we reviewed the DOD force protection plans, policies, and standards used for the equipment involved in the deployments and discussed with unit and installation personnel how DOD addressed security weaknesses identified at the seaports. We also discussed the experience of past and recent deployments with DOD officials at the installations and the ports.

We also reviewed the findings and recommendations of the Interagency Commission on Crime and Security in U.S. Seaports and the provisions of the maritime security legislation now before Congress to determine their potential impact on current and future seaport security efforts. We analyzed the provisions of both the House and Senate versions of the legislation and discussed key provisions with staff members of cognizant congressional committees. We conducted our review from January through August 2002 in accordance with generally accepted government auditing standards.

In addition to those named above, Willie J. Cheely, Jr., Brian G. Hackett, Joseph W. Kirschbaum, Jean M. Orland, Stefano Petrucci, Elizabeth G. Ryan, and Tracy M. Whitaker also made key contributions to this report.
The Federal Property and Administrative Services Act of 1949, as amended (40 U.S.C. 471-486), places responsibility for the disposition of government real and personal property with the General Services Administration. The General Services Administration delegated the disposal of DOD personal property to the Secretary of Defense, who in turn delegated it to the Defense Logistics Agency. The Defense Reutilization and Marketing Service, a component of the Defense Logistics Agency, carries out the disposal function.

DOD's disposal process is complicated by the massive volume of surplus property it handles. In fiscal year 1996, DOD disposed of millions of items with a reported acquisition value (the amount originally paid for the items) of almost $24 billion. Aircraft parts, the focus of this report, represent $2.3 billion of this total.

DOD provides overall guidance for determining whether aircraft parts should be disposed of. The military services and the Defense Logistics Agency determine whether specific parts for which they have management responsibility are excess to their needs. Once the military services or the Defense Logistics Agency declare aircraft parts excess to their needs, the parts enter the disposal process and are sent to one of 170 worldwide Defense Reutilization and Marketing Offices (DRMO), or disposal yards. Upon receipt, DRMO personnel inspect the parts for condition, acquisition value, and special handling requirements such as those for military sensitive items. DRMOs, consistent with legislative requirements, have disposition priorities that make the excess parts available first for reutilization within DOD and then for transfer to other federal agencies. Parts that remain are designated as surplus and can be donated to eligible entities, such as state and local governments, among many others. After these priorities have been served, parts that remain may be sold to the general public. Figure 1 shows the usual process for disposing of aircraft parts.

Surplus aircraft parts can generally be divided into four categories of condition: (1) new; (2) worn, but still working; (3) broken, but repairable; and (4) scrap. In this report, we refer to the first three categories of parts as potentially usable, since they can be repaired or used as is. The fourth category, scrap, refers to those parts that DOD does not intend to reuse and sells for their basic material content value.

Because of concerns about safeguarding military technology and maintaining flight safety, DOD has specific policies and procedures relating to the disposal of aircraft parts. For parts with inherent military technology involving weapons, national security, or military advantages, DOD requires demilitarization so that the technology remains within DOD. Demilitarization makes the parts unfit for their originally intended purpose, either by partial or total destruction, before or as a condition of sale to the public. For parts whose failure during flight could cause an aircraft to crash, DOD components have local policies requiring the destruction of certain used parts with flight safety implications to prevent the parts from reentering the DOD supply system or being made available to the civil aviation industry.
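Taken together, the disposition priorities, condition categories, and demilitarization screens described above amount to a small routing decision for each incoming part. The following Python sketch is a simplified illustration of that flow; the data structure, field names, and category labels are hypothetical and are not drawn from any DOD system.

```python
from dataclasses import dataclass

# Condition categories from the report: new; worn but working; broken but
# repairable; scrap. The first three are "potentially usable."
USABLE_CONDITIONS = {"new", "worn_working", "broken_repairable"}

@dataclass
class SurplusPart:
    stock_number: str
    condition: str                 # one of the four categories above
    requires_demil: bool           # embodies military technology
    dod_wants_it: bool             # a DOD component can reutilize it
    federal_agency_wants_it: bool  # another federal agency claims it
    donee_wants_it: bool           # an eligible donee (e.g., a state) claims it

def disposition(part: SurplusPart) -> str:
    """Route a surplus part through the disposition priorities the report
    describes: DOD reutilization, transfer to other federal agencies,
    donation, and finally sale to the general public."""
    if part.dod_wants_it:
        return "reutilize within DOD"
    if part.federal_agency_wants_it:
        return "transfer to another federal agency"
    if part.donee_wants_it:
        return "donate to an eligible entity"
    if part.condition not in USABLE_CONDITIONS:
        return "sell as scrap for basic material content"
    if part.requires_demil:
        return "demilitarize (destroy the military technology), then sell"
    return "sell intact to the general public"

# Example: a worn-but-working part with no claimants and no military
# technology would be sold intact.
print(disposition(SurplusPart("example-nsn", "worn_working",
                              False, False, False, False)))
```

The ordering of the checks mirrors the legislative disposition priorities: reutilization and transfer come before donation, and sale to the public comes last.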
In our 1994 report, we cited concerns from the Federal Aviation Administration and the Department of Transportation's Inspector General that DOD aircraft parts, sold as scrap, reentered civil aviation as usable. As a result, in July 1995, DOD initiated a departmentwide program to identify and prevent parts with potential flight safety risks from being sold intact through DRMOs. The services and the Defense Logistics Agency began identifying parts with flight safety characteristics so they could destroy the parts before they were sold.

Some usable aircraft parts DOD sells as surplus fit only on military aircraft but have no military technology implications. These parts are called "nonsignificant military unique" parts. Examples include bolts, fuel controls, engine parts, and airframe parts that have been strengthened to withstand rigorous military use. Companies buy military unique parts on the speculation that DOD may need them at a future date.

Other usable aircraft parts DOD sells as surplus have applications to aircraft used in civil aviation or by other government agencies and foreign countries. These parts are called commercial-type parts. For example, the Air Force's KC-135 air refueling tanker has many of the same parts as a commercial Boeing 707 aircraft; the Air Force's C-130 cargo plane has many of the same parts as a Lockheed 382 Hercules aircraft used by 49 foreign countries; and the Army's UH-1 Huey utility helicopter has many of the same parts as a commercial Bell 205 helicopter. Companies buy commercial-type parts on the speculation that they can resell the parts to civil aviation, foreign countries, or DOD.

DOD could have avoided destroying certain usable aircraft parts that were in the disposal process. The parts were destroyed because (1) the military services improperly coded parts without military technology as having military technology implications and (2) policies and practices intended to prevent an inadvertent sale of military technology or flight safety items did not adequately exclude parts without military technology or flight safety implications. Until DOD improves the accuracy of assigned demilitarization codes, adopts better management policies and practices, and moves to use private sector techniques, such as identifying highly marketable parts, some usable parts will be unnecessarily destroyed during the disposal process.

The three DRMOs we visited destroyed usable parts because the demilitarization codes the military services had assigned were inaccurate. For example, we evaluated 71 sample items at the Oklahoma City DRMO. We selected these items because they were commercial-type items but, at the time of selection, the military services had coded the parts as having military technology implications. We found usable quantities for 10 of our sample items that were marked for destruction at the DRMO. Records showed that the DRMO had previously destroyed quantities of the other 61 sample items. We met with Air Force and Navy equipment specialists and policy officials and questioned the demilitarization codes assigned to each of the 71 items. The policy officials told us that they require equipment specialists to periodically review the demilitarization codes for accuracy and that equipment specialists had recently corrected the codes on nine items. The equipment specialists did not agree on the need to change the codes on the remaining 62 items until we pointed out that these were commercial-type parts. The equipment specialists then confirmed that the assigned demilitarization codes, which required the parts to be destroyed because of presumed military technology content, were incorrect for each of the 62 sample items.
The specialists revised each of the demilitarization codes to identify the parts as having no military technology implications. At the San Antonio DRMO, the assigned demilitarization codes were inaccurate for 22 of 27 sample items because the parts had no military technology implications. Similarly, at the Corpus Christi DRMO, the assigned demilitarization codes were inaccurate for 13 of 17 sample items. Each of the military services and the Defense Logistics Agency was responsible for sample items with inaccurate codes. Examples of parts destroyed because the assigned codes were wrong can be found in appendix II.

DOD has had problems with the accuracy of assigned demilitarization codes for many years. In 1987, the Deputy Secretary of Defense directed the military services and the Defense Logistics Agency to review the assignment of demilitarization codes. The Deputy Secretary was concerned because a partial audit of seven weapon systems revealed that 43 percent of the items checked had been coded incorrectly. In 1994, the Defense Logistics Agency found that 28 percent of the assigned demilitarization codes it reviewed were incorrect. DOD officials told us that historically they assigned demilitarization codes to parts the first time the parts were purchased for a new weapon system. They said that for expediency, they often assigned codes showing military technology content to all parts on new weapon systems rather than evaluating individual items.

Recognizing the need for trained personnel to assign proper codes, DOD developed a course on demilitarization. Despite such efforts to correct the erroneous codes, in April 1997 the DOD Inspector General reported that 52 percent of the demilitarization codes assigned to parts for the new weapon systems it reviewed were incorrect. The Inspector General reported that training was not adequate for personnel responsible for assigning and reviewing demilitarization codes and that documentation showing the rationale for their decisions did not exist. According to the Inspector General, DOD's training course provided only a general awareness of the demilitarization program and did not provide the specific details necessary to make decisions on selecting the appropriate demilitarization codes.

Our review shows that DOD could improve the accuracy of assigned demilitarization codes by providing its personnel with guidance on how to make prudent decisions in selecting the appropriate codes. For our sample items at Oklahoma City, the Air Force equipment specialists completed a demilitarization code assignment worksheet. The worksheet is a draft document the Air Force is developing for equipment specialists to follow in identifying the proper code and documenting the rationale used in assigning it. We found that the draft worksheet was a useful tool that provided a step-by-step process for determining the correct demilitarization code. The worksheet also documented how the equipment specialist arrived at the demilitarization code. Moreover, the worksheet proved useful to equipment specialists who had not received recent training. Until DOD provides its personnel with the specific details necessary to make prudent decisions on selecting the appropriate demilitarization codes, inaccurate codes will continue to cause the unnecessary destruction of usable aircraft parts.
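To illustrate what such a worksheet encodes, the sketch below walks through a simplified sequence of questions and records the rationale alongside the resulting code, as the draft Air Force worksheet does. The questions, item fields, and code names are hypothetical placeholders, not actual DOD demilitarization codes or worksheet criteria.

```python
def assign_demil_code(item: dict) -> tuple[str, str]:
    """Return a (code, rationale) pair so that the decision is documented.
    Codes and criteria here are simplified illustrations only."""
    if item["embodies_weapons_or_national_security_technology"]:
        return ("DESTROY", "military technology must not leave DOD")
    if item["flight_safety_critical"]:
        return ("FLIGHT_SAFETY", "destroy used items; sale only if new and unused")
    if item["military_unique"]:
        return ("NO_DEMIL", "military unique but nonsignificant; may be sold")
    return ("NO_DEMIL", "commercial-type part; may be sold intact")

# Example: a commercial-type engine part with no military technology falls
# through to the final branch instead of being coded for total destruction.
code, rationale = assign_demil_code({
    "embodies_weapons_or_national_security_technology": False,
    "flight_safety_critical": False,
    "military_unique": False,
})
print(code, "-", rationale)
```

The point of the worksheet is less the branching itself than the paired rationale, which supplies the documentation the DOD Inspector General found missing.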
Policies and practices intended to prevent an inadvertent sale of military technology or flight safety items did not adequately exclude parts without military technology or flight safety implications. The policies and practices in question dealt with the destruction of usable parts that were (1) categorized as scrap when the parts were usable, (2) treated as sensitive items when the parts were not sensitive, (3) treated as flight safety items when the parts had no flight safety implications, and (4) said to be causing a storage space problem when there was no storage space shortage.

In 1994, the Defense Reutilization and Marketing Service directed the DRMOs to destroy all parts categorized as scrap or downgraded to scrap. Usable parts were destroyed under this direction because of the way DOD categorizes parts as scrap. DOD defines scrap parts as material that has no value except for its basic material content, whereas it defines usable parts as material that has value greater than its basic material content and has the potential to be used for the originally intended purpose. Commercial company officials told us that some parts DOD considers scrap have value beyond their basic material content and are repairable and reusable in the commercial sector. For the most part, this situation occurs because DOD labels as scrap containers of parts that it does not consider economical to repair. On the basis of their experience and independent analyses, commercial companies frequently did not agree with DOD's economic determinations. In such cases, the companies wanted to buy the used parts, repair them, and resell them for a profit.

For example, DOD pays the manufacturer $866 each for first stage turbine vanes used on the T-56 engine. Because DOD's cost to repair a turbine vane is $750, or 87 percent of the cost of a new vane, DOD considers the vane uneconomical to repair and categorizes it as scrap when worn or broken. However, the manufacturer sells the same first stage turbine vane to commercial customers for $2,020 each. Because of the higher commercial acquisition cost, commercial users can justify the repair cost, which is only 37 percent of the commercial acquisition cost. DRMO officials told us that usable parts without military technology were destroyed because of the policy to destroy items categorized as scrap.

After receiving complaints from potential buyers and DRMOs that usable parts were being needlessly destroyed, the Defense Reutilization and Marketing Service revised its policy in June 1996 to state that only those items categorized both as scrap and as sensitive items are to be destroyed. The Service considers aircraft parts to be sensitive items if the assigned stock number corresponds to 1 of 18 federal supply classes or groups that frequently contain military technology; the classes or groups include weapons, rocket engines, and communication equipment. DRMOs destroyed items considered sensitive property when the items were received as scrap or downgraded to scrap, irrespective of whether the assigned demilitarization codes indicated the parts had military technology implications. DOD officials stated that, because of the time and resources required to destroy and document the destruction of material, it is not in DRMOs' best interest to destroy parts that do not contain military technology. However, the officials said destruction was necessary to prevent an inadvertent release of parts with military technology implications.
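The June 1996 policy amounts to a two-part test: destroy an item only if it is categorized as scrap and its stock number falls in a sensitive supply class or group. The Python sketch below illustrates that test; the class prefixes shown are hypothetical examples, since the report does not enumerate the actual 18-entry list. The sketch also makes the weakness visible: the test keys on the stock number's supply class rather than on the assigned demilitarization code.

```python
# Illustrative prefixes only; the actual list covers 18 federal supply
# classes or groups (weapons, rocket engines, communication equipment,
# and the like), which the report does not enumerate.
SENSITIVE_CLASSES_AND_GROUPS = {"1005", "2845", "58"}  # hypothetical entries

def is_sensitive(stock_number: str) -> bool:
    # The first four digits of a national stock number identify its federal
    # supply class; the first two identify its federal supply group.
    return (stock_number[:4] in SENSITIVE_CLASSES_AND_GROUPS
            or stock_number[:2] in SENSITIVE_CLASSES_AND_GROUPS)

def must_destroy(stock_number: str, is_scrap: bool) -> bool:
    """June 1996 policy: destroy only items that are BOTH categorized as
    scrap AND sensitive. The assigned demilitarization code plays no part
    in this test, which is the weakness discussed above."""
    return is_scrap and is_sensitive(stock_number)

# A T-56 nozzle assembly in supply class 2840, which covers engine
# components, fails the sensitivity test even when categorized as scrap,
# so it should not be destroyed under the revised policy.
print(must_destroy("2840010668071RW", is_scrap=True))  # False
```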
We recognize the need for DOD to prevent the inadvertent sale of parts with military technology implications. However, DOD management policies and practices resulted in the destruction of commercial-type parts and nonsignificant military unique parts that did not have technology and safety implications. We previously reported that DOD could increase the proceeds from the sale of surplus aircraft parts not by destroying them but by adopting private sector practices. Specifically, we stated that DOD should use techniques to enhance the marketability of its aircraft parts, including identifying highly marketable commercial-type parts that would yield the greatest benefits at the minimum cost. We pointed out that some commercial airlines identify parts that have a high demand or command a high price and place them on a special listing for marketing purposes. This review shows that DOD has not implemented similar procedures.

DRMO personnel also destroyed parts that were not on the sensitive items list. According to DRMO officials, the personnel did this to increase sales proceeds. They explained that historically DRMOs received scrap value for usable parts. They stated that by destroying usable parts, surplus parts dealers would get what they paid for and nothing more. The officials reasoned that once surplus dealers realized that DRMOs destroyed the parts, they would be willing to buy the usable parts before they were destroyed and would pay higher than scrap value for them. As a result, sales proceeds would increase.

We reviewed 83 sample items at the San Antonio DRMO that were not on the sensitive items list and that the disposal histories showed were categorized as scrap or downgraded to scrap after receipt. Our analysis identified instances in which the DRMO offered usable parts for sale but did not sell them because bids did not exceed scrap value. The DRMO subsequently destroyed the parts and sold them as scrap. Some of the parts were worth more than scrap value and should have been held for another sale as usable parts. An example of parts destroyed because of the DRMO practice of destroying scrap not on the sensitive items list can be found in appendix II.

As a result of our 1994 report, DOD initiated a departmentwide program to identify and prevent parts with potential flight safety risks from being sold intact through DRMOs. The military services and the Defense Logistics Agency began identifying parts with flight safety characteristics so they could destroy the parts before they were sold. However, our review showed that aircraft parts were destroyed as flight safety risks when the parts had no flight safety implications. This destruction occurred because DRMO practices intended to prevent the inadvertent sale of parts with flight safety implications also caused the planned destruction of parts without these implications. For example, in response to a potential buyer's complaint on September 20, 1996, that the San Antonio DRMO was destroying usable blades for the T-56 engine, the San Antonio Air Logistics Center investigated. The Center found 7,018 blades, originally costing $1.06 million, that the Air Force had incorrectly categorized as scrap because of a breakdown in inspection procedures and had sent to the DRMO. San Antonio DRMO officials said the destruction was to prevent an inadvertent sale of flight safety items. However, Center officials said that these parts had been incorrectly sent to the DRMO and did not have to be destroyed for flight safety reasons. DRMO officials said they preferred to err on the side of safety.
We recognize the need for DRMOs to prevent the inadvertent sale of parts with flight safety implications. However, DRMO practices resulted in the planned destruction of commercial-type parts and nonsignificant military unique parts that did not have flight safety implications. An additional example of parts being unnecessarily destroyed as flight safety risks is in appendix II.

An interim Army Aviation and Troop Command instruction to destroy all parts with flight safety implications also resulted in the destruction of some helicopter parts without such implications. For example, according to DRMO and Army records, on February 5, 1997, a potential buyer witnessed the destruction of between 200 and 300 UH-1 helicopter gear shafts and 10 turbine rotors at the Texarkana, Texas, DRMO. The destroyed parts were new, were in the original equipment manufacturer's boxes, had a manufacturer's list price totaling about $1 million, and were categorized as flight safety critical parts. After the buyer complained, the Army agreed that the parts were new and should not have been destroyed. According to DRMO officials, the interim instruction resulted in the destruction of large quantities of new, unused parts that posed no flight safety risks.

After receiving complaints from DRMOs and potential buyers that new parts were being destroyed, the Command revised its instructions and authorized the sale of flight safety critical parts under certain conditions, such as when the parts are new and unused. To determine whether the procedural change was working, we reviewed a sample of 73 items at the Corpus Christi DRMO that the Army had identified as having flight safety implications and that DRMO records indicated were new. Our analyses showed that the DRMO either had offered each sample item for sale or had already sold it. We concluded that no unnecessary destruction of new parts occurred in the transactions we reviewed. Examples of flight safety items properly sold can be found in appendix II.

At the Corpus Christi DRMO, we observed quantities of 157 different usable parts for the AH-1 Cobra helicopter scheduled for destruction (see fig. 2). Specifically, we noted a total of 1,972 usable, mostly new, helicopter parts in a DRMO warehouse. The parts originally cost $6.9 million. According to the DRMO Chief, these parts were to be destroyed beginning May 3, 1997, to free up storage space. We contacted the Defense Reutilization and Marketing Service and advised it of our concern with the scheduled destruction because the assigned demilitarization codes indicated that no military technology was associated with 155 of the 157 different parts and because there was sufficient warehouse storage space for the parts.

Defense Reutilization and Marketing Service officials said that in February 1996 they placed a prohibition against selling Cobra parts at DRMOs because the Army-assigned demilitarization codes were inaccurate. The property disposal specialist responsible for the prohibition said the Army planned to review and validate the demilitarization codes for the Cobra helicopter parts, and he wanted to be sure the codes were accurate before proceeding with a sale or destruction action. He said the Army had not completed its demilitarization code review. The specialist said he also instructed the DRMOs to destroy the parts if they started experiencing a storage impact.
After a meeting with the Chief of the Corpus Christi DRMO, the Defense Reutilization and Marketing Service issued a memorandum directing the DRMOs not to destroy any Cobra parts unless they are in a scrap condition and to hold usable parts in storage until the Army completes the demilitarization code review. Army Aviation and Troop Command officials, who are responsible for reviewing and validating demilitarization codes for Cobra helicopter parts, told us they were waiting to complete the review until after Army headquarters decided whether to sell disarmed, surplus Cobra helicopters to the public for such purposes as fighting forest fires. In our opinion, accurate code assignments are required regardless of whether the helicopters are sold to the public.

The military services' inventory managers did not have adequate information on aircraft parts located in DRMOs. DOD Materiel Management Regulation 4140.1-R requires inventory managers to have information on parts transferred to DRMOs, to recall parts for reutilization to prevent concurrent procurement and disposal, and to prevent the repair of unserviceable items when serviceable items are available. However, we found that the managers did not have the needed information and that DRMOs destroyed quantities of parts DOD components needed. For example, at the Corpus Christi DRMO, we compared the 157 different usable Cobra helicopter parts scheduled to be destroyed by the DRMO with Army budget and procurement records. The records showed that the Army needed quantities for 22 of the 157 parts, totaling $196,500. We discussed our findings with the Defense Reutilization and Marketing Service, which notified the Army Aviation and Troop Command of the need to return the parts to the DOD supply system. The Command had not responded to this notification at the time our fieldwork was completed. Additional examples of parts needed by DOD components can be found in appendix II.

Air Force and Army officials said that, despite the requirements of the DOD regulation, they did not have adequate visibility over parts in DRMOs. They stated that interface problems between military service and DRMO computer systems precluded the services from knowing what parts were in DRMOs. Because the services did not have adequate visibility over parts in DRMOs, the DRMOs were destroying the same parts the services were purchasing or repairing. DOD headquarters officials commented that DOD was working to correct the computer interface problem as part of a Total Asset Visibility program but that it would be several years before the problem was fixed. The officials stated that DOD had neither established milestones for correcting the computer interface problem nor instituted alternative ways to obtain the needed information on a routine basis. For example, aircraft parts available at DRMOs can be identified by telephone calls, the Internet, or physical inspections.
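The comparison we made at Corpus Christi, matching parts awaiting destruction against current buy and repair requirements, is the kind of routine cross-check the regulation envisions. The following Python sketch illustrates it; the record layouts and the example quantities are hypothetical.

```python
def parts_to_recall(drmo_holdings: dict[str, int],
                    requirements: dict[str, int]) -> dict[str, int]:
    """Match parts held at (or awaiting destruction in) a DRMO against a
    service's current buy and repair requirements. Any overlap is a
    candidate for recall to the supply system instead of destruction."""
    recalls = {}
    for stock_number, quantity_at_drmo in drmo_holdings.items():
        needed = requirements.get(stock_number, 0)
        if needed > 0:
            recalls[stock_number] = min(quantity_at_drmo, needed)
    return recalls

# In the Cobra helicopter example, this kind of comparison flagged 22 of
# 157 parts, worth $196,500, for which the Army had current requirements.
print(parts_to_recall({"example-nsn": 6}, {"example-nsn": 4}))
```

Until the Total Asset Visibility interface work is complete, even a periodic batch comparison of this sort, fed by the telephone, Internet, or physical-inspection channels mentioned above, would give inventory managers the visibility the regulation requires.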
The conditions described in this report result in an unnecessary expenditure of resources to destroy parts that do not actually require destruction. In some instances, the government also loses the increased revenue that could be derived from the sale of usable parts to prospective buyers and the opportunity to return usable parts to the DOD supply system to avoid unnecessary procurements or repairs. Accordingly, we recommend that the Secretary of Defense take the following actions to prevent the destruction of usable aircraft parts:
- Provide guidance on selecting appropriate demilitarization codes that includes the specific details necessary to make appropriate decisions. The guidance could take the form of the draft demilitarization code assignment worksheet being used by the Air Force.
- Exclude commercial-type parts and nonsignificant military unique parts that do not have military technology and flight safety implications from policies and practices intended to prevent an inadvertent sale of parts with these implications.
- Work closely with the private sector to identify and list commercial-type aircraft parts and nonsignificant military unique parts the private sector needs, and require the DRMOs to check this list before destroying parts.
- Require the Army to complete its validation of the demilitarization codes assigned to Cobra helicopter parts so that commercial-type parts and nonsignificant military unique parts can be sold.
- Establish milestones for correcting the computer interface problems that preclude the military services from having visibility of parts located in DRMOs and from following regulations that require parts to be returned to the supply system when needed to prevent unnecessary procurements or repairs. In the interim, institute alternative ways to obtain this information on a routine basis; for example, aircraft parts available at DRMOs can be identified by telephone calls, the Internet, or physical inspection.

DOD generally agreed with the report and stated that the concepts presented appear to be beneficial to the disposal of aircraft parts (see app. III). Concerning our first recommendation, DOD agreed that a code assignment worksheet may be useful in assigning demilitarization codes and stated that it would work with the military services and the Defense Logistics Agency to determine the feasibility of departmentwide use of the Air Force worksheet or a similar one. In response to our second recommendation, DOD agreed that, when properly coded by item managers, usable parts that do not have military technology and flight safety implications do not have to be destroyed. DOD noted that challenge programs are available if parts are miscoded. With regard to our recommendation that the Army complete its validation of the demilitarization codes assigned to Cobra helicopter parts, DOD stated that it is monitoring the Army's validation process. The validation, which will determine which parts are commercially available and can be sold, is expected in November 1997.

DOD partially agreed with our recommendation that it work closely with the private sector to identify and list parts the private sector needs and require the DRMOs to check this list before destroying parts. DOD stated that it previously attempted to obtain private sector input but that the response was minimal. DOD also stated that the identification of commercial-type aircraft parts should be incorporated into an existing database rather than kept on a separate list. DOD added that, although it is DOD policy that DRMOs destroy parts only when demilitarization is required or the parts are identified as having flight safety implications, inaccurate information does occur, and the DRMOs should use all available data to reduce unnecessary destruction. We continue to believe that DOD should work closely with the private sector because DOD's previous inquiries were limited to the original equipment manufacturers.
Officials from the companies we contacted, including the National Association of Aircraft and Communication Suppliers, told us that, although they buy large quantities of aircraft parts at DRMO sales, DOD had not asked them for input to identify commercial-type aircraft parts. Our report documents examples in which DRMOs destroyed usable parts that did not have military technology or safety implications. Because the current system for identifying commercial-type and nonsignificant military unique parts the private sector needs is not working, we also continue to believe that DOD needs to list these parts separately.

DOD also partially agreed with our recommendation that it establish milestones for correcting the computer interface problems that preclude the military services from having visibility of parts located in DRMOs and, in the interim, institute alternative ways to obtain this information on a routine basis. DOD stated that the interface problems are addressed as they arise and that a joint Total Asset Visibility office is working with the military services to finalize a functional description for automated visibility of disposal assets to prevent unnecessary buys and repairs. Once the description is finalized, milestones for implementation will be developed based on the complexity of the information system changes required. DOD stated that the earliest projected date for development of milestones is the first quarter of fiscal year 1998. DOD also stated that, in the interim, many other sources are available to the military services that provide visibility of parts at the DRMOs, including the Internet, an Interrogation Requirements Information System, and formal and informal contacts between DRMOs and item managers. While we agree that the long-term solution rests with implementation of the Total Asset Visibility program, we continue to be concerned that routine interim procedures do not exist. Although DOD acknowledges that many other sources are available to the military services that provide visibility of parts in DRMOs, our report shows that DOD guidance is needed because the military services are not routinely checking these sources.

We are sending copies of this report to the appropriate congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Director, Defense Logistics Agency; and the Director, Office of Management and Budget. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. The major contributors to this report are listed in appendix IV.

We reviewed policies, procedures, disposal histories, transaction histories, and related records obtained from the Defense Reutilization and Marketing Offices (DRMO) and item managers, and we documented disposal practices. We interviewed policy officials, disposal office personnel, item managers, and equipment specialists. To determine the Department of Defense's (DOD) policies and practices for destroying aircraft parts during the disposal process, we held discussions and performed work at the Office of the Deputy Under Secretary of Defense (Logistics), Washington, D.C.; Army, Navy, and Air Force headquarters, Washington, D.C.; the Defense Logistics Agency, Fort Belvoir, Virginia; and the DOD Inspector General, Washington, D.C., and Columbus, Ohio.
To obtain information on how surplus parts are received and processed for sale, we documented procedures and practices at three DRMOs, located in Oklahoma City, Oklahoma; San Antonio, Texas; and Corpus Christi, Texas. According to DOD officials, the Oklahoma City and San Antonio DRMOs handle the largest volumes of surplus aircraft parts. Because these DRMOs handle surplus parts used mostly on Air Force and Navy aircraft, we also selected the Corpus Christi DRMO, which handles large quantities of surplus parts used mostly on Army aircraft. We also collected budget, procurement, inventory, weapon system application, and disposal information from item managers, equipment specialists, and policy officials at the Oklahoma City Air Logistics Center, Tinker Air Force Base, Oklahoma; the San Antonio Air Logistics Center, Kelly Air Force Base, Texas; the Corpus Christi Army Depot, Corpus Christi, Texas; the Army's Aviation and Troop Command, St. Louis, Missouri; and the Naval Inventory Control Point, Philadelphia, Pennsylvania. In addition, we visited and collected data from members of the National Association of Aircraft and Communication Suppliers, Inc.; Alamo Aircraft Supply, Inc., and Dixie Air Parts Supply, Inc., San Antonio, Texas; Jet Reclamation, Inc., Bulverde, Texas; and Rick's Mfg. and Supply, Choctaw, Oklahoma, to identify specific problems they were having with DOD's disposal practices.

We judgmentally selected 271 surplus items for review to determine the adequacy of DOD's policies and procedures for ensuring that aircraft parts without military technology and flight safety implications are not unnecessarily destroyed. We selected 83 items at the San Antonio DRMO involving the disposal of usable parts as scrap material and 27 items involving the accuracy of assigned demilitarization codes; 73 items at the Corpus Christi DRMO involving flight safety and 17 items involving the accuracy of assigned demilitarization codes; and 71 items at the Oklahoma City DRMO involving the accuracy of assigned demilitarization codes. We selected these items because they were commercial-type parts or nonsignificant military unique parts that were either coded for destruction due to military technology content or alleged by the Association to have been unnecessarily destroyed. We also reviewed the results of prior DOD internal studies.

To determine whether parts being destroyed at the three DRMOs were needed by the military services, we compared selected sample items with the services' budget stratification databases and requirements computations. We checked to see if there were current or future buy and repair requirements for the items. We informed the military services of any sample items that had current or planned requirements so the parts could be recalled from the DRMOs. We performed our review between January 1997 and June 1997 in accordance with generally accepted government auditing standards.

The Air Force decided that 184 TF-33 engine combustion chambers (Stock No. 2840008285214RV) (see fig. II.1) used on the KC-135 aircraft were surplus and sent them to the Oklahoma City DRMO. The parts originally cost $452,352. On April 15, 1997, the DRMO destroyed the 184 parts, although the parts were repairable. The DRMO destroyed the parts because the Air Force had assigned a demilitarization code requiring total destruction to protect military technology. The DRMO estimated that it spent $211 to destroy the parts, which it then sold as scrap for $3,450.
After we pointed out that this was a commercial-type item, the Air Force equipment specialist said the assigned demilitarization code was incorrect because the parts contained no military technology. As a result, the DRMO destroyed parts that the private sector could have used. The equipment specialist corrected the demilitarization code.

On April 9, 1997, we observed the destruction with a cutting torch of 20 nozzle rings (Stock No. 2840011611133RV) (see fig. II.2) used on the KC-135 aircraft engine. These parts originally cost $94,400. The Oklahoma City DRMO destroyed the parts because the Air Force had assigned a demilitarization code that required total destruction to protect military technology. According to the equipment specialist, the Air Force had replaced the parts with a newer version. He said that the parts sent to the DRMO, although usable, were no longer needed by the Air Force. After we pointed out that this was a commercial-type item, the equipment specialist said the assigned demilitarization code was incorrect because the part contained no military technology. He also said the destroyed parts were usable on commercial Boeing 707 aircraft in the private sector. As a result, the DRMO destroyed parts that the private sector could have purchased. The equipment specialist corrected the demilitarization code.

On April 14, 1997, the Corpus Christi DRMO destroyed 53 circuit card assemblies (Stock No. 5998013370963) used on the UH-60 helicopter. The parts originally cost $54,392. The DRMO destroyed the parts because the Army had assigned a demilitarization code requiring total destruction to protect military technology. After we questioned whether military technology was involved with this part, the Army equipment specialist said the assigned demilitarization code was incorrect because the part, although military unique, was nonsignificant and contained no military technology that needed to be protected. As a result, the DRMO destroyed parts that the private sector could have purchased. The equipment specialist corrected the demilitarization code.

During fiscal year 1996, the San Antonio Air Logistics Center sent six usable support assemblies (Stock No. 2840011932157RW) used on the C-130 aircraft engine to the DRMO because the parts were no longer needed. The parts originally cost $19,660. The San Antonio DRMO destroyed the six parts because the Navy had assigned a demilitarization code requiring total destruction to protect military technology. After we pointed out that this was a commercial-type item, the Navy equipment specialist said that the assigned demilitarization code was incorrect because the part contained no military technology. He said the destroyed parts were usable on commercial aircraft in the private sector. As a result, the DRMO destroyed parts that the private sector could have purchased. The equipment specialist corrected the demilitarization code.

On February 26, 1996, the San Antonio DRMO downgraded to scrap 13 nozzle assemblies (Stock No. 2840010668071RW) used on the T-56 engine and destroyed them. The parts were destroyed to prevent surplus dealers from buying usable parts at scrap prices. The parts originally cost $15,953. These parts did not appear on the Defense Logistics Agency's sensitive item list and had no military technology or safety implications. The destroyed parts sold for about $2 each. By contrast, on August 20, 1996, the DRMO sold 24 usable nozzle assemblies intact for $1,183, or over $49 each.
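The nozzle assembly prices above, together with the T-56 turbine vane example earlier in this report, show how the same repair arithmetic can point in opposite directions for DOD and for commercial buyers. The short Python sketch below reproduces the vane calculation using the figures the report cites; the 75 percent economic-repair cutoff is a hypothetical illustration, since the report does not state the threshold DOD actually uses.

```python
def repair_share(repair_cost: float, acquisition_cost: float) -> float:
    """Repair cost as a share of what the owner pays for a new part."""
    return repair_cost / acquisition_cost

REPAIR_COST = 750.0        # reported cost to repair a T-56 turbine vane
DOD_PRICE = 866.0          # what DOD pays the manufacturer for a new vane
COMMERCIAL_PRICE = 2020.0  # what commercial customers pay for the same vane

# Hypothetical cutoff for "economical to repair"; the report does not state
# the threshold DOD actually uses.
ECONOMICAL_THRESHOLD = 0.75

for owner, price in (("DOD", DOD_PRICE), ("commercial", COMMERCIAL_PRICE)):
    share = repair_share(REPAIR_COST, price)
    decision = "repair" if share <= ECONOMICAL_THRESHOLD else "scrap"
    print(f"{owner}: repair is {share:.0%} of a new part -> {decision}")
# DOD: repair is 87% of a new part -> scrap
# commercial: repair is 37% of a new part -> repair
```

Because the decision turns on the owner's acquisition cost rather than any property of the part itself, a part DOD rationally scraps can still be worth repairing commercially, which is why buyers objected to its destruction.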
The San Antonio Air Logistics Center considered 72 turbine vanes (Stock No. 2840004262571RW) for the T-56 engine not usable because they were worn and cracked and sent them to the DRMO for disposal. These parts originally cost $200,000. The San Antonio DRMO Chief said that he decided to destroy these parts at his own management discretion, strictly for flight safety reasons. He said that he would not want parts in such poor condition to be refurbished and installed on an aircraft carrying him or anyone else. However, Center officials said these parts had no safety implications. After reviewing this matter, the Center's Commander told the DRMO to sell the parts intact.

On October 7, 1996, the Corpus Christi DRMO received 1,101 turbine rotor blades (Stock No. 2840001523806) (see fig. II.3) used on the CH-47 helicopter for disposal. Since the Army had assigned demilitarization code F to the parts, indicating that they had flight safety implications, the DRMO requested disposition instructions from the Army Aviation and Troop Command. The Command instructed the DRMO to destroy the parts unless they were (1) unused, (2) in serviceable condition, (3) physically marked with the manufacturer's code, and (4) in the manufacturer's original packaging. The DRMO decided that the parts met this exception and offered them for sale. DRMO records showed that the turbine rotor blades were sold in a lot with another turbine rotor blade for $13,796.

On September 29, 1996, the Corpus Christi DRMO received notice that six transmission cartridge assemblies (Stock No. 1615011167083) used on the UH-1 helicopter were no longer needed by the Army. These parts originally cost $36,774. Since the Army had assigned demilitarization code F to the parts, the DRMO requested disposition instructions from the Army Aviation and Troop Command. On December 12, 1996, the Command instructed the DRMO to destroy the parts unless they were (1) unused, (2) in serviceable condition, (3) physically marked with the manufacturer's code, and (4) in the manufacturer's original packaging. The DRMO determined that these parts met this exception and, on May 29, 1997, prepared a notice for the assemblies to be listed for sale in the International Sales Office catalog. At the completion of our fieldwork, the sales office had not set the date of sale.

At the Oklahoma City DRMO, we observed two nozzle rings (Stock No. 2840009911048RV) for the TF-33 engine, originally costing $7,000, being destroyed with a cutting torch at the discretion of a DRMO employee. We obtained documents showing that these parts were in usable condition and that the Air Force needed them, having recently placed orders to buy 107 new nozzle rings. After we pointed this situation out to the Oklahoma City Air Logistics Center, the Center implemented new procedures to prevent usable engine nozzle rings and other needed parts from being destroyed. The procedures require equipment specialists to periodically inspect parts sent to the DRMO. Within a month, the Center identified and prevented the destruction of 200 additional usable parts that were at the DRMO.

In response to a potential buyer's complaint on September 20, 1996, that the San Antonio DRMO was destroying usable blades (Stock No. 2840011123776RW) for the T-56 engine, the San Antonio Air Logistics Center investigated.
The Center found 7,018 blades, originally costing $1.06 million, that the Air Force had incorrectly categorized as scrap because of a breakdown in inspection procedures and sent to the DRMO. San Antonio DRMO officials said the destruction was to prevent an inadvertent sale of flight safety items. However, Center officials said that these parts did not have to be destroyed for flight safety reasons and were needed to satisfy depot maintenance requirements. The DRMO returned the blades to the Air Force.

Roger Tomlinson, Jackie Kreithe, Bonnie Carter, and Frederick Lyles

Pursuant to a congressional request, GAO reviewed selected aspects of the Department of Defense's (DOD) disposal process, focusing on whether: (1) DOD destroyed, during the disposal process, usable aircraft parts that did not have military technology and flight safety implications; and (2) the military services recalled aircraft parts from the disposal process to preclude unnecessary purchases or repairs.
GAO noted that: (1) management of the aircraft parts disposal process can be improved; (2) DOD destroyed some usable aircraft parts and sold them as scrap; (3) these parts were in new or repairable condition and did not have military technology or flight safety implications; (4) the parts could possibly have been sold intact at higher than scrap prices; (5) this situation occurred for several reasons; (6) for example, disposal offices destroyed parts because the demilitarization codes the military services had assigned to the parts were inaccurate; (7) the codes indicated the parts contained military technology when they did not; (8) GAO work showed that the Oklahoma City disposal office destroyed 62 of 71 sample items, even though they did not have technology implications, because the assigned codes required their destruction; (9) personnel responsible for assigning and reviewing the codes had not been sufficiently trained, and guidance was not adequate; (10) in addition, policies and practices designed to prevent the inadvertent or unauthorized release of parts with military technology and flight safety implications did not distinguish between parts with or without such implications; (11) parts without military technology and flight safety concerns were destroyed along with parts that had these characteristics; (12) GAO work also showed that DOD could have purchased or repaired fewer aircraft parts had it recalled the needed parts from the disposal process; (13) for example, the Army could have reduced current and planned purchases by about $200,000 by using Cobra helicopter parts scheduled for destruction; (14) DOD regulations require the military services to know which parts they have placed in the disposal process; (15) however, interface problems between service and disposal office computer systems precluded the services from knowing what parts were at the disposal offices; (16) the military services had not instituted alternative ways to obtain this information on a routine basis; (17) problems with the disposal process are likely not unique to the three disposal yards GAO visited because DOD, military service, and Defense Logistics Agency policies and procedures generally apply to activities being performed at all locations; and (18) GAO's past reviews and DOD internal studies have identified similar problems at these and other locations over the past 10 years and earlier.
We previously reported that states originally funded most FPS services themselves or with nonfederal funds. As the demand for services increased and available resources became more constrained, states sought additional funding from federal sources, such as Title IV-B Child Welfare Services, Title XX Social Services Block Grant, and Title IV-A Emergency Assistance. However, funding levels were still insufficient to keep pace with service needs. By the early 1990s, over half the programs we surveyed reported that they were not able to serve all families who needed services, primarily due to a lack of funds and staff.

OBRA 1993 created the Family Preservation and Support Services program under Title IV-B, Subpart 2, of the Social Security Act and authorized $930 million for it over a 5-year period; the program is administered by HHS' Administration for Children and Families (ACF). Through fiscal year 1997, Congress appropriated $623 million for grants to states to conduct planning activities and fund FPS services for the first time. The grants are based on each state's percentage of children receiving Food Stamps, a federal food subsidy program for low-income households. State child welfare agencies are responsible for administering the FPS program in each of the 50 states and the District of Columbia. OBRA 1993 allowed states to use up to $1 million of their grant amount for planning purposes during the first year, with no required state match. Funds used for FPS services and other allowable activities, such as additional planning or evaluation, require a 25-percent state match. The law also requires states to spend a significant portion of service dollars for each type of service, which HHS has defined as at least 25 percent each for family preservation services and for family support services. Further, state administrative costs are limited to 10 percent.

To receive FPS funds, states submitted grant applications to HHS by June 1994 and comprehensive plans a year later. These plans were based on a needs assessment; developed with community groups; and coordinated with health, education, and other agencies that serve children and families. As required, the plans described goals that states expect to achieve by 1999 and methods that they will use to measure their progress. Federal guidance also encourages states to continue their collaborative planning activities, improve service delivery, and leverage additional funding from other sources for FPS services.

Family preservation programs generally serve families where child abuse or neglect has occurred or where children have been identified as representing a danger to themselves or others. These families risk having their children temporarily or permanently placed outside the home in foster care, juvenile detention, or mental health facilities. Most family preservation programs provide specific services tailored to the family's needs to help ameliorate the underlying causes of dysfunction. These services may include, for example, family counseling and training in parenting skills. The intensity, duration, and packaging of services differentiate these programs from traditional child welfare services, which share the goals of placement prevention and family reunification. Even among family preservation programs, however, service delivery varies.
In the widely used Homebuilders intensive crisis intervention model, caseworkers typically carry small caseloads of two families at a time and are available to families on a 24-hour basis for 4 to 6 weeks. In other program models, caseworkers may carry caseloads of up to 20 families, with one or two personal contacts per week for a period of 7 or more months. (See app. II for a description of various family preservation program models.)

Family support programs include a broad spectrum of community-based activities that promote the safety and well-being of children and families. In general, the purpose is to reach families before child abuse or neglect occurs. Often provided in a community center or a school, family support programs may include services outside the traditional scope of the child welfare agency, such as health care, education, and employment. Some family support programs offer a comprehensive array of services to an entire community, including parenting classes, health clinics, and counseling. Other programs are more narrow in scope and may focus only on family literacy or provide information and referral services. Compared with family preservation programs, family support programs may define eligible participants more broadly or more narrowly; for example, all families in a community or only teenage mothers in a community. In practice, the distinction between family preservation and family support services may be blurred.

The federal FPS services legislation provides states with the flexibility to meet the needs of children and families through family preservation and community-based family support services. Exercising this flexibility, states have reported choosing to fund an array of services and, in many cases, strategies for improving the ways in which services are delivered. Almost all states appear to be introducing new family preservation and family support services. Our analysis shows that states are allocating somewhat more funds to family support services.

Forty-four states reported that they used federal funds to create new family preservation programs, family support programs, or both. For example, Oregon has had one model of a family preservation program, the Intensive Family Services Program, since 1980; however, concerns about the high numbers of African-Americans in foster care in one community prompted the state to initiate a new family preservation program, based on the Homebuilders service-delivery model but refined to better meet the cultural needs of this population.

Although almost all the states reported starting new programs, the size and service levels of these programs vary across states and programs. Some are quite small. For example, in a low-income neighborhood in Maryland, a new family preservation component was added to a community-based substance abuse treatment program. There, $40,000 pays for one caseworker to provide family preservation services for as many as five families at a time to prevent the need to remove children from their homes while the parents are treated for substance abuse. Another new family support program in this same community provides information and coordinates communitywide activities to ensure families have knowledge of and access to all available community resources. About $75,000 is being spent on this program, which plans to serve 200 persons by telephone or in person and make 600 contacts by mail each month.
By contrast, another community in Texas implemented a larger-scale program, spending $971,000 in federal funds over the last 2 years to create a family resource center in each of three school districts. This new family support program offers an array of services at each school-based center, including parent education, counseling, adult education, childcare, some health care, and family support workers for families in need of more intensive services. As of August 1996, over 3,000 households containing 8,600 individuals had registered since the program's inception.

In addition to introducing new programs, almost every state used federal funds to fill gaps in existing FPS services. Forty-seven states reported expanding existing family preservation services, family support services, or both by making them available to more clients within existing service areas, by adding more program sites, or by enhancing programs through increasing the intensity of existing services or adding new services, as illustrated by the following cases:

Texas expanded its intensive placement prevention program to additional locations to reach new clients as well as more clients within existing service areas. This family preservation program is designed to prevent the need for placing abused and neglected children in foster care. The existing 18 service-delivery units are being expanded to 38 units, and about 115 new workers are being hired to serve an additional 520 families per year.

Arkansas expanded its Intensive Family Services program from 10 to 20 counties. This family preservation program was also enhanced by adding emergency cash assistance for participating families. This new service will enable families in crisis to address some of their immediate needs, such as covering back rent to avoid becoming homeless.

One Maryland community enhanced a neighborhood recreation program by adding new activities and increasing its hours of operation. Community members voiced concern that the lack of recreational activities was a factor in the adolescent crime rate. This family support program is designed to give young people a safe place to congregate and recreate, especially in the late afternoon and evening hours, to keep them off the streets and away from the influence of illegal and drug-related activities.

The likelihood of states creating new or expanding existing programs appeared unaffected by whether states had previously provided family preservation or family support services, how long states have had service dollars available, or whether service decisions were made at the state or local level. Our analysis of state survey responses showed no clear patterns regarding the circumstances that might result in states funding certain types of services.

As the law requires, most states are spending a significant portion of their federal funds for family preservation services and family support services. Of the federal funds used for services in fiscal year 1996, states allocated an average of 56 percent to family support services and 44 percent to family preservation. In 1996, over half the states allocated a majority of their service dollars for family support services, as shown in table 1. Four of these states are using all their service dollars for family support activities, which is allowable as long as the state justifies this distribution. While every state is initiating or expanding family support services, family preservation services, or both, slightly more states are using federal funds for family support services.
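The funding rules described earlier (a 25-percent state match on service funds, at least 25 percent of service dollars for each service type, and administrative costs capped at 10 percent) lend themselves to a quick arithmetic check. The Python sketch below is purely illustrative: the function name and all dollar figures are hypothetical, and it reads the 25-percent match as 25 percent of combined federal-state spending, that is, one state dollar for every three federal dollars, which is an assumption rather than a statement of HHS policy.

    def check_fps_plan(federal_service_funds, preservation_share, support_share, admin_share):
        """Check a hypothetical Title IV-B, Subpart 2, spending plan.

        federal_service_funds: federal dollars used for services (hypothetical)
        preservation_share, support_share: percentages of service dollars by type
        admin_share: administrative costs as a percentage of the grant
        """
        # A 25-percent state match, read here as 25 percent of combined
        # federal-state spending: one state dollar per three federal dollars.
        required_state_match = federal_service_funds / 3

        findings = []
        if preservation_share < 25:
            findings.append("family preservation below the 25-percent minimum")
        if support_share < 25:
            findings.append("family support below the 25-percent minimum")
        if admin_share > 10:
            findings.append("administrative costs above the 10-percent limit")
        return required_state_match, findings

    # Hypothetical state: $10 million in federal service funds, split 44/56
    # between preservation and support (the fiscal year 1996 averages),
    # with administrative costs at 8 percent of the grant.
    match, findings = check_fps_plan(10_000_000, 44, 56, 8)
    print(f"Required state match: ${match:,.0f}")  # $3,333,333
    print(findings or "no issues flagged")

Note that a state devoting all of its service dollars to family support, as four states have done, would be flagged by this check; as noted above, such a distribution is allowable when the state justifies it, so the 25-percent minimums operate as floors subject to justified exceptions rather than absolute limits.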
Forty-one states reported introducing new family support programs, while 34 states initiated new family preservation programs. Forty-three states expanded existing family support services, compared to 38 states for family preservation. Several reasons may explain why states have placed somewhat more emphasis on family support services. According to federal and state officials, some states had already spent considerable state or other federal funds for family preservation services and decided, either at the outset or based on planning results, to place greater emphasis on family support services. Further, many states delegated to counties or communities the responsibility for conducting localized planning and making service decisions. Localities were apt to be more familiar with support services and to play a larger role in program decisions than the child welfare agencies familiar with family preservation.

States have had 1 to 2 years to initiate or expand family preservation and family support services, depending on how they used their first year's funds. All states spent at least a year doing collaborative community-based planning to develop their 5-year plans, in accordance with HHS guidance. Nineteen states elected to implement services while simultaneously conducting planning activities; as a result, these states have had about 2 years to implement services. The majority of the states, however, began to implement federally funded services a year later, after they had completed their 5-year plans.

For those states that have had a year to initiate or expand services, most reported that implementation of family preservation and family support services has been slower than they expected. In total, 25 states indicated being behind schedule, primarily due to the magnitude and complexity of the implementation effort. Most of these states said that they experienced delays in designing or developing FPS services. Moreover, the competitive process to select communities or programs that would receive federal funds also caused delays in several states. Officials had not anticipated the time required to solicit and review proposals, respond to challenges, and award contracts. Many states also reported that an extended period of time was required to change their service-delivery system to facilitate implementation, such as training staff on procedural changes and collaborating with other service providers. In addition to these procedural factors, many states cited receiving federal funds later than expected as a reason for being behind schedule.

While states appear to be using most of their federal FPS funds to initiate or expand family preservation or family support services, many states are also undertaking a variety of activities to enable the service-delivery system to serve vulnerable children and families more effectively and efficiently. According to estimates provided by each state at the time of our study, an average of 83 percent of federal funds had been spent on direct services, such as the new and expanded family preservation and family support services already described. The remaining federal dollars were used for other allowable activities, including additional planning, administration, and capacity-building such as training and technical assistance. Five states dedicated all their FPS funds to the provision of direct services, while 46 states used a portion of their funds for other activities as well as direct services.
Thirty-eight of these states conducted activities designed to enhance the capacity of state and local agencies to provide family preservation and support services. These activities included staff training in cultural awareness or procedural changes, technical assistance to service providers, research or evaluation activities, and management information system development and improvement. For example, Arkansas held two conferences, which were attended by over 1,000 individuals, to educate the public on prevention issues, encourage collaboration among providers, and provide technical assistance and training to staff of community-based organizations. Several states contracted with universities or private research firms to conduct outcome evaluations. In Idaho, local panels were established to review closed child protection cases to identify service gaps and improvements to the service-delivery system responsible for investigating allegations of child abuse and neglect.

In addition, 17 states reported that planning activities will continue beyond 1995. For example, Maryland is taking more time to allow its 19 local management boards, representing the state's 24 counties, to develop their own community-based plans. At the time of our study, the state had provided federal funds to 11 boards for localized planning. Eventually, every local management board in the state will have the opportunity to develop its own plan.

Midway through this 5-year program, it is too early to identify what impact the federally funded family preservation and family support services have had on the lives of vulnerable children and their families. Several efforts, however, are underway to monitor results and assess impact. By law, states must track results and report on their progress in achieving the goals set in their 5-year plans. Some states will also conduct formal evaluations to examine outcomes and processes. Determining the impact of federally funded services, however, requires rigorous evaluation. Eleven states plan to conduct such evaluations. In addition, federal efforts are underway to assess the effectiveness of family preservation and family support services.

States plan to track the results of federally funded services by using a variety of measures. At a minimum, all states report that they will track the number of children and families served, and most will measure the extent to which their needs are being met. Specifically, 45 states will look for evidence of changes in parent-child relationships, family functioning, or participants' satisfaction with services delivered. Many states will also assess the well-being of children by using appropriate measures, such as the number of infants discharged from community care who receive follow-up care within 48 hours. More than half the states told us that they expect to determine the program's cost effectiveness, the efficacy of certain services for particular client groups, or both. Finally, at least 45 states plan to monitor traditional indicators of child welfare, such as the number of child abuse and neglect reports, and changes over time in one or more aspects of the service-delivery system. For example, one state plans to examine the extent to which consumers are participating in service planning groups and services are provided in conjunction with community and neighborhood organizations. Having set goals and measurable objectives in their 5-year plans, states are expected to report annually on outcomes and progress toward achieving these goals.
At the time of this report, HHS and its contractor responsible for evaluating state implementation were reviewing states' first progress reports and expected to complete their initial analyses in December 1996. In addition, some states will conduct formal evaluations that examine processes and outcomes, in many cases in conjunction with schools of social work at state universities. For example, Kentucky has contracted with the University of Kentucky to develop an evaluation program to assess the extent to which the state's FPS services program reaches the target population, monitor the frequency of service delivery and client participation, tabulate the cost of implementing the program, and assess the extent to which program goals are achieved. In Arizona, the state's evaluation will track multiple child, family, and community outcome measures over time and compare results to baseline indicators. Data sources include family questionnaires, agency reports, and worker assessments.

State plans for monitoring and evaluating FPS programs should yield useful information on the size, nature, and outcomes of funded activities as well as changes in the well-being of communities, families, and children. Because these efforts will not necessarily confirm that the programs caused improved outcomes, 11 states plan to conduct their own rigorous evaluations that will yield more conclusive results, even though such evaluations are not required. For example, a research contractor will conduct a 3-year randomized clinical trial of a home visitation program in San Diego County, California, that is based on Hawaii's Healthy Start model. Researchers will randomly assign 500 families to one of two groups: about half the families will receive program services and the other families will not. The researchers will then evaluate the effectiveness of the program model as it is implemented in San Diego. Primary study objectives include testing whether implementation of this model results in improved outcomes and determining what cost benefits are derived.

Two federal evaluations are underway to rigorously assess the impact of FPS programs on children and families: one for family preservation and the other for family support services. Each evaluation comprises multiple studies of mature programs, initiated before the federal FPS services law, that span a range of program models and methods for targeting services. At the time that we prepared this report, the research contractors were expected to begin data collection in the fall of 1996 and issue interim reports a year later.

The family preservation evaluation is reviewing four programs that aim to prevent out-of-home placement and one program that tries to reunite foster children with their families. Two of these programs use the Homebuilders crisis intervention model, while the other three use less intensive service models. Each program evaluation is designed to assign families to treatment and control groups. Families in the treatment group receive services from the family preservation program. Families in the control group receive services that they would have received if the program were not available. Outcomes to be measured include changes in foster care placement rates, recidivism, and duration of stay, as well as family functioning and subsequent child abuse and neglect.
The family support evaluation consists of multiple studies of eight different programs, including several comprehensive community family support programs as well as those that focus on economic self-sufficiency, family literacy, or preventing substance abuse. Five programs are being evaluated using treatment and control groups. The remaining three programs will compare families that receive program services with families in other programs or similar settings. For example, families that participate in Florida's Full Service Schools program will be compared with families in comparable schools where the program is not offered. Outcomes to be measured include family functioning, child and family well-being, instances of child abuse and neglect, and satisfaction with services delivered.

Although it is too early to identify service impacts on children and families, 10 states reported that program results were available on federally funded FPS services. Most of these states collected data on the number of children and families served; changes in child abuse and childhood mortality rates; and changes in their approaches to delivering services. For example, Louisiana reported on the results of federally funded projects after the first year of implementation. To contribute to future planning efforts related to the configuration of family preservation and family support services in Louisiana, the evaluation described the services and population characteristics in three programs and assessed the relationship between services and short-term outcomes. In particular, the Intensive Home Based Services program, which is a family preservation program, met its goal of preventing child removal and continued maltreatment. A family support program designed to prevent child abuse and neglect resulted in few reports of child maltreatment, even though a majority of cases had had one or two child abuse or neglect reports before receiving program services. In a third example, a family support program operating at a child development center that primarily serves teenage parents and their children summarized individual service needs from a new needs assessment form completed by participants and workers.

While not much is known yet about the impact of federally funded services, the legislation appears to have affected the ways in which states and localities develop and administer services for children and their families. According to federal and state officials, the primary impact to date has been to forge links between state agencies and the communities they serve. The process of developing states' 5-year plans resulted in public agencies, organizations, service providers, and consumers working together for the well-being of children and families. Many states departed from their traditional method of administering child welfare services at the state level. In particular, 27 states reported distributing federal funds to counties and other local entities, such as community groups and local coalitions, to develop their own plans and make service decisions. Several states took additional steps to better identify the service needs of children and families. For example, Michigan is investing an additional $10 million in state funds to supplement federal funds and enable each of its 83 counties to participate in the process of improving services to better meet local needs.
State officials credit this process with ensuring that at-risk families now have greater access to needed services and with contributing significantly to the broader goal of positive system reform.

Before the enactment of OBRA 1993, FPS programs throughout the country were unable to meet the demand for services to strengthen and support families. Since then, states have begun to both initiate and expand family preservation and support services to achieve the purpose of the FPS legislation. Early results indicate that these services are being offered to families and children who might otherwise have fallen through the cracks and that some programs supported with federal funds have met their goals of strengthening families and reducing child abuse and neglect. Information being gathered by states, universities, and research firms should increase our understanding of the outcomes of funded activities as well as changes over time in the well-being of communities, families, and children. Moreover, the community-based collaborative planning process appears to be having beneficial effects on the service-delivery system. While there has been service innovation and services have been expanded, it is still too early to tell what the ultimate impact of these programs on children and families will be.

In commenting on a draft of this report, HHS agreed with our findings that implementation has been slower than expected but has achieved several positive outcomes. In particular, HHS emphasized the availability of new and expanded programs for both family preservation and support services, the focus on family support as a balance to family preservation, and the extension of services to families otherwise overlooked. Further, HHS noted that the use of FPS funds has encouraged collaboration among programs and levels of government and has attracted additional funds to meet community needs.

We are providing copies of this report to the Secretary of Health and Human Services, state child welfare directors, and state FPS coordinators. We will also make copies available to other interested parties upon request. Should you or your staff have any questions or wish to discuss the information provided, please call me at (202) 512-7125. Other GAO contacts and staff acknowledgments are listed in appendix IV.

We had previously assessed federal and state efforts to implement the FPS provisions during the first 18 months after OBRA 1993 was enacted and highlighted areas in which those efforts could be enhanced. To update this information, we interviewed officials from HHS' ACF, which is responsible for overseeing this program, and reviewed related federal guidelines. Recognizing that it might be too early to identify service impacts on children and families, we also reviewed several states' 5-year plans and first annual progress reports to determine the availability of information related to our objectives and to document states' plans for assessing impact. To obtain information about the status of federal evaluation efforts, we interviewed officials from HHS' Office of the Assistant Secretary for Planning and Evaluation (ASPE) and ACF who are responsible for overseeing the three national evaluation contracts that will collectively assess state implementation and the effectiveness of FPS programs. We designed a survey instrument to obtain information about states' use of federal funds for FPS services, plans for assessing impact, and impacts identified to date.
We discussed development of the instrument with HHS headquarters staff and several state child welfare agency officials. We pretested the instrument by telephone with the Title IV-B agency's FPS coordinator in two states, Indiana and New Jersey. We chose these states for our pretest because they had distributed their federal funds in different ways: one to counties to do their own planning and make service decisions and the other to programs directly, based on state-level decisionmaking. We revised the instrument based on the results of the pretest. In late June and early July 1996, we sent a copy of the instrument to the appropriate official of the child welfare agency in each of the 50 states and the District of Columbia. We offered the officials the option of completing the instrument in writing and returning it to us within 2 weeks. We interviewed by telephone those officials who did not return a completed instrument. We did not verify the information obtained through the survey instrument. However, we conducted in-depth interviews in nine states to supplement information collected in the survey. In particular, we obtained additional information about (1) the programs that these states initiated or expanded with federal funds, (2) how federal funds were distributed within the state, and (3) plans for rigorous evaluation, if any. We conducted seven interviews by telephone and two in person: one in Anne Arundel County, Maryland, and one in Sacramento County, California. In each state, we interviewed the same state-level individual(s) who responded to our survey. In five of these states (California, Iowa, Maryland, Texas, and Wisconsin) we also interviewed knowledgeable staff from a locality that had received federal funds. We selected these nine states because of their different size, location, and method for distributing federal funds. We conducted our work between May and September 1996, in accordance with generally accepted government auditing standards.

This appendix presents our survey of state child welfare agencies regarding their use of Title IV-B, Subpart 2, funds for services. Each question includes the summary statistics and the actual number of respondents that answered the question. In each case, we use the format that we believe best represents the data, including frequencies, means, and ranges.

Interview of State Officials on OBRA 1993 and Family Preservation and Support Services

BACKGROUND INFORMATION (TO BE FILLED OUT BEFORE INTERVIEW AS MUCH AS POSSIBLE): respondent's telephone and fax numbers and interview date.

Hello, Mr./Ms. ________, my name is ________. I am with the U.S. General Accounting Office (GAO), an agency of the Congress. The Congress has asked us to study the nature and extent of new family preservation and support services since the enactment of OBRA 1993, and how states are assessing the impact of these services on families, children, and communities. As part of the study, we are interviewing officials from all 50 states and the District of Columbia. During the interview we will ask about the ways in which your state has used its Title IV-B Subpart 2 funds and about recent family preservation and support initiatives in your state. We are also interested in how your state plans to measure, or is measuring, the impact of family preservation and support services on families and children.

C. This interview should take about 45 minutes. Do you have the time to talk with me now?
1. Yes (IF "YES," GO TO E.)
2. No
D. When would be a good time to call you back?

E. O.K., let's begin.

USE OF OBRA FUNDS

First we'd like to discuss how OBRA funds are being used in your state. Let me clarify that, when we say "OBRA funds," we mean the federal family preservation and support funds provided by OBRA 1993, that is, Title IV-B Subpart 2 funds. Also, our focus is on only those OBRA funds that your state received for services, that is, those funds requiring a state match.

1. On what date were OBRA funds first made available to your state to use for services? (ENTER DATE.) n=51; responses ranged from 8/1/94 to 6/1/96.

2. Did your state allocate its OBRA funds to family preservation and support programs directly, or did your state distribute funds to counties or other local governments for them to allocate to family preservation and support programs? (CHECK ONE.) n=51
1. Family preservation and support programs directly (GO TO QUESTION 7.)
2. Counties or other local governments
3.

3. As of now, how many counties have received OBRA funds to use for family preservation and support services? (ENTER NUMBER.) n=27; range, 3-111; mean, 27.6.

4. Did these counties do localized planning to decide what family preservation and support services to fund? (CHECK ONE.) n=27
1. Yes
2. No

5. Did your state retain any of its OBRA funds at the state level before distributing funds to counties? (CHECK ONE.) n=27
1. Yes => About what percentage was retained at the state level? (ENTER PERCENTAGE.) Range, 2-77 percent; mean, 20.3 percent.
2. No

6. How many counties are there in your state? n=27; range, 3-256; mean, 65.4.

The next few questions ask about the use of OBRA funds that were available to your state for services. Again, we mean those Title IV-B Subpart 2 funds requiring a state match.

7. Now I'd like to ask you about the use of OBRA funds in your state for activities that do not involve directly initiating or expanding family preservation or family support services. The question is, in your state, have any OBRA funds been used to... (CHECK YES OR NO FOR EACH; n=51 for each item.)
1. Pay for broad-based planning activities that were not covered by the FY 1994 funds available for developing your state's 5-year plan, that is, those first-year funds requiring no state match?
2. Pay for efforts to increase your state's capacity to provide family preservation and support services? Examples of this include training staff in cultural awareness or process changes; providing technical assistance to individuals, groups, and organizations that deliver family preservation and support services; conducting research or evaluation activities; and developing or improving management information systems.
3. Fund the reporting of your state's progress toward achieving the goals set out in the 5-year plan?
4. Pay for family preservation and family support administrative costs? These include costs for procurement, payroll processing, management, data processing and computer services, as well as other indirect costs.
5. Pay for or fund any other activity that does not involve directly initiating or expanding family preservation or family support services? (IF "YES," ASK RESPONDENT TO PLEASE SPECIFY.)
(IF ALL ITEMS IN QUESTION 7 ARE CHECKED "NO," GO TO QUESTION 9.)

8. Of the OBRA funds your state has received for services so far, about what percentage has been used for the activities you just mentioned?
And, about what percentage has been used to directly initiate or expand family preservation and support services? (ENTER PERCENTAGE FOR EACH.) n=46 for the percentage used for activities that do not directly involve initiating or expanding services, as mentioned in question 7, and for the percentage used for directly initiating or expanding services.

FAMILY PRESERVATION AND SUPPORT ACTIVITIES

Now we'd like to ask some questions about family preservation and support services that have been initiated or expanded with OBRA funds. We will begin by asking a series of questions about family preservation services.

9. Before October 1, 1993, were any family preservation services provided in your state? (CHECK ONE.) n=51
1. Yes
2. No

10. Since your state first received OBRA funds for services, have any of these funds been used to implement any family preservation services in your state? (CHECK ONE.) n=51
1. Yes
2. No (GO TO QUESTION 16.)

The next few questions ask about the number of family preservation programs that have been funded with OBRA dollars. Let me clarify that, when we say "program," we mean a type of program or model within which specific services are provided. Examples of family preservation programs could include the Homebuilders crisis intervention model or a less intensive family reunification program. A particular program may be available at multiple sites, or funds may be distributed to multiple service providers to implement a particular program. (FOR A MORE DETAILED DESCRIPTION OF DIFFERENT TYPES OF FAMILY PRESERVATION PROGRAMS OR MODELS, SEE THE PROGRAM DESCRIPTIONS AT THE BACK OF THIS SURVEY.)

11. How many types of family preservation programs or models in your state have been funded with OBRA dollars? (ENTER NUMBER OR CHECK "DON'T KNOW.") n=45
1. Number of types of programs or models: n=36; range, 1-15; mean, 4.1
2. Don't know at state level

12. We are interested in learning more about these family preservation services. First, we'd like to know if OBRA funds have been used to introduce brand new programs, or were OBRA funds used to expand or enhance existing programs. Second, we'd like to know how many programs were brand new, expanded, or enhanced. The first question is this: Anywhere in your state, have OBRA funds been used to... (CHECK YES OR NO FOR EACH ITEM IN (A); IF "YES" IN (A), ENTER IN (B) THE NUMBER OF TYPES OF FAMILY PRESERVATION PROGRAMS IN THIS CATEGORY OR CHECK "DON'T KNOW." n=45 for each item.)
1. Introduce new family preservation programs that were not used before?
2. Expand existing family preservation programs to new locations?
3. Expand existing family preservation programs to reach more clients within the same service areas?
4. Enhance existing family preservation programs by providing more of an existing service or introducing new services to the same number of clients within the same service areas?
5. Do anything else regarding family preservation services? (PLEASE SPECIFY.)

13. Since your state first received OBRA funds for services, have any clients been served who would not have been served without the provision of OBRA funding for these family preservation programs? (CHECK ONE.) n=45
1. Yes
2. No

14. Consider your state's schedule for implementing OBRA-funded family preservation services. In general, would you say that the implementation of family preservation services in your state, as of now, is very much ahead of schedule, slightly ahead of schedule, on schedule, slightly behind schedule, or very much behind schedule? (CHECK ONE.) n=45
1. Very much ahead of schedule (GO TO QUESTION 16.)
2. Slightly ahead of schedule (GO TO QUESTION 16.)
3. On schedule (GO TO QUESTION 16.)
4. Slightly behind schedule
5. Very much behind schedule

15. Now, I'm going to mention some reasons why a state's implementation of family preservation services might be behind schedule. Please indicate if any of these reasons apply to your state. (CHECK YES OR NO FOR EACH; n=22 for each item.) Is implementation behind schedule because your state...
1. Received its OBRA funds later than it expected to receive them?
2. Decided to delay action on family preservation services until federal welfare reform was complete?
3. Delayed action on family preservation services until receiving federal guidance related to the implementation of OBRA 1993?
4. Experienced delays in developing or producing the 5-year plan?
5. Experienced delays in designing or developing the "new" family preservation services?
6. Required an extended period of time to make changes to the existing service-delivery system before family preservation services could be implemented? These changes might include training staff on cultural awareness or process changes, collaborating with other related service providers, reorganizing departments, or changing service-delivery processes.
7. Experienced delays in any other pre-implementation activities? (IF "YES," PLEASE SPECIFY.)

Now we'd like to ask a series of questions about family support services.

16. Before October 1, 1993, were any family support services provided in your state? (CHECK ONE.) n=51
1. Yes
2. No

17. Since your state first received OBRA funds for services, have any of these funds been used to implement any family support services in your state? (CHECK ONE.) n=51
1. Yes
2. No (GO TO QUESTION 23.)

The next few questions ask about the number of family support programs that have been funded with OBRA dollars. Let me clarify that, when we say "program," we mean a type of program or model within which specific services are provided. Examples of some family support programs that have been replicated around the country include comprehensive/community family support programs like the Parents Services Project that originated in the San Francisco Bay Area; child abuse and neglect prevention programs like Hawaii's Healthy Start model; and school readiness programs like HIPPY and Parents as Teachers (PAT). A particular program may be available at multiple sites, or funds may be distributed to multiple service providers to implement a particular program. (FOR A MORE DETAILED DESCRIPTION OF DIFFERENT TYPES OF FAMILY SUPPORT PROGRAMS OR MODELS, SEE THE PROGRAM DESCRIPTIONS AT THE BACK OF THIS SURVEY.)

18. How many types of family support programs or models in your state have been funded with OBRA dollars? (ENTER NUMBER OR CHECK "DON'T KNOW.") n=50
1. Number of types of programs or models: n=37; range, 1-35; mean, 7.4
2. Don't know at state level

19. We are interested in learning more about these family support services. First, we'd like to know if OBRA funds have been used to introduce brand new programs, or were OBRA funds used to expand or enhance existing programs. Second, we'd like to know how many programs were brand new, expanded, or enhanced. The first question is this: Anywhere in your state, have OBRA funds been used to... (CHECK YES OR NO FOR EACH ITEM IN (A); IF "YES" IN (A), ENTER IN (B) THE NUMBER OF TYPES OF FAMILY SUPPORT PROGRAMS IN THIS CATEGORY OR CHECK "DON'T KNOW." n=50 for each item.)
1. Introduce new family support programs that were not used before?
2. Expand existing family support programs to new locations?
3. Expand existing family support programs to reach more clients within the same service areas?
4. Enhance existing family support programs by providing more of an existing service or introducing new services to the same number of clients within the same service areas?
5. Do anything else regarding family support services? (PLEASE SPECIFY.)

20. Since your state first received OBRA funds for services, have any clients been served who would not have been served without the provision of OBRA funding for these family support programs? (CHECK ONE.) n=50
1. Yes
2. No

21. Consider your state's schedule for implementing OBRA-funded family support services. In general, would you say that the implementation of family support services in your state, as of now, is very much ahead of schedule, slightly ahead of schedule, on schedule, slightly behind schedule, or very much behind schedule? (CHECK ONE.) n=50
1. Very much ahead of schedule (GO TO QUESTION 23.)
2. Slightly ahead of schedule (GO TO QUESTION 23.)
3. On schedule (GO TO QUESTION 23.)
4. Slightly behind schedule
5. Very much behind schedule

22. Now, I'm going to mention some reasons why a state's implementation of family support services might be behind schedule. Please indicate if any of these reasons apply to your state. (CHECK YES OR NO FOR EACH; n=22 for each item.) Is implementation behind schedule because your state...
1. Received its OBRA funds later than it expected to receive them?
2. Decided to delay action on family support services until federal welfare reform was complete?
3. Delayed action on family support services until receiving federal guidance related to the implementation of OBRA 1993?
4. Experienced delays in developing or producing the 5-year plan?
5. Experienced delays in designing or developing the "new" family support services?
6. Required an extended period of time to make changes to the existing service-delivery system before family support services could be implemented? These changes might include training staff on cultural awareness or process changes, collaborating with other related service providers, reorganizing departments, or changing service-delivery processes.
7. Experienced delays in any other pre-implementation activities? (IF "YES," PLEASE SPECIFY.)

IMPACT OF FAMILY PRESERVATION AND FAMILY SUPPORT SERVICES

Now, I'd like to ask you about any results achieved by OBRA-funded family preservation and support services. Again, when we say "OBRA funds," we mean Title IV-B Subpart 2 funds.

23. We realize it may be too early to have done this, but has your state gathered any information on the results achieved so far by OBRA-funded... (CHECK YES OR NO FOR EACH.) n=51
1. Family preservation services?
2. Family support services?
(IF "NO" TO BOTH family preservation AND family support SERVICES, THEN GO TO QUESTION 28.)

24. Now, I'm going to mention some measures that might be used to assess the impact of family preservation or family support services. First, we'd like to know if the measure was used. If so, we'd like to know if it was used to assess the impact of family preservation services, family support services, or both. The first question is this: For OBRA-funded family preservation or family support services, did anyone in your state measure... (CHECK YES OR NO FOR EACH ITEM IN (A); IF "YES" IN (A), ENTER "FP," "FS," OR "BOTH" IN (B). n=10 for each item.)
1. The number of children, families, or clients served?
2. The extent to which the needs of vulnerable or at-risk children and families were met?
3. The number of foster care placements prevented or number of family reunifications?
4. Changes in the well-being of children, including each child's development, school performance, or readiness?
5. Changes in parent-child relationships, family satisfaction, or family functioning?
6. Changes in the community, such as in the number of child abuse/neglect reports, in poverty rates, in birth rates, or in childhood mortality rates?
7. Changes to the service-delivery system, such as in caseloads or expenditures?
8. Other changes to the service-delivery system, such as changes in the extent of collaboration, coordination, and inclusiveness?
9. Still other changes to the service-delivery system, such as in staffing levels, staff training, number of cases per worker, or timeliness of services?
10. Cost effectiveness?
11. Which types of services work best for certain groups of clients?
12. Anything else? (IF "YES," PLEASE SPECIFY.)

25. We would like any information you might have on specific results of OBRA-funded family preservation or family support services in your state. Could you mail or fax to us any documentation? n=10
1. Yes => I will tell you where to mail or fax this information at the conclusion of this interview.
2. No

26. I am going to now mention some ways in which the impact of OBRA-funded family preservation or family support services might be assessed. To your knowledge, in your state, has anyone assessed the impact of OBRA-funded services by... (CHECK YES OR NO FOR EACH; n=10 for each item.)
1.
2. Preparing periodic progress reports?
3. Reviewing individual case records?
4.
5. Reviewing specific family preservation or family support programs?
6. Reviewing portions or all of your state's child and family service-delivery system?
7. Doing anything else? (IF "YES," PLEASE SPECIFY.)

27. To your knowledge, has anyone in your state conducted a formal evaluation, that is, an evaluation that utilized an experimental design, to assess the effectiveness of OBRA-funded... (CHECK YES OR NO FOR EACH.) n=10
1. Family preservation services?
2. Family support services?

We are interested in your state's plans for assessing the impact of family preservation and family support services on children, families, and communities.

28. Now, I'm going to mention some measures that might be used to assess the impact of family preservation or family support services. First, we'd like to know if the measure will be used in your state. If so, we'd like to know if it will be used to assess the impact of family preservation services, family support services, or both. The first question is this: For OBRA-funded family preservation or family support services, does anyone in your state plan to measure... (CHECK YES OR NO FOR EACH ITEM IN (A); IF "YES" IN (A), ENTER "FP," "FS," OR "BOTH" IN (B). n=51 for each item.)
1. The number of children, families, or clients served?
2. The extent to which the needs of vulnerable or at-risk children and families were met?
3. The number of foster care placements prevented or number of family reunifications?
4. Changes in the well-being of children, including each child's development, school performance, or readiness?
5. Changes in parent-child relationships, family satisfaction, or family functioning?
6. Changes in the community, such as in the number of child abuse/neglect reports, in poverty rates, in birth rates, or in childhood mortality rates?
7. Changes to the service-delivery system, such as in caseloads or expenditures?
8. Other changes to the service-delivery system, such as changes in the extent of collaboration, coordination, and inclusiveness?
9. Still other changes to the service-delivery system, such as in staffing levels, staff training, number of cases per worker, or timeliness of services?
10. Cost effectiveness?
11. Which types of services work best for certain groups of clients?
12. Anything else? (IF "YES," PLEASE SPECIFY.)
(IF ALL ITEMS IN QUESTION 28 ARE CHECKED "NO," GO TO QUESTION 30.)

29. We are interested in examples of specific measures that will be used to assess the impact of OBRA-funded family preservation or family support services. Does your state's 5-year plan describe any of the measures that you just mentioned? (CHECK ONE.) n=51
1. Yes => Please mail or fax us the relevant pages from your state plan. I will tell you where to send this information at the conclusion of this interview.
2. No

30. I am going to now mention some ways in which the impact of OBRA-funded family preservation or family support services might be assessed. To your knowledge, in your state, will anyone assess the impact of OBRA-funded services by... (CHECK YES OR NO FOR EACH; n=51 for each item.)
1.
2. Preparing periodic progress reports?
3. Reviewing individual case records?
4.
5. Reviewing specific family preservation or family support programs?
6. Reviewing portions or all of your state's child and family service-delivery system?
7. Doing anything else? (IF "YES," PLEASE SPECIFY.)

31. To your knowledge, will anyone in your state conduct a formal evaluation, that is, an evaluation that will utilize an experimental design, to assess the effectiveness of OBRA-funded... (CHECK YES OR NO FOR EACH.) n=51
1. Family preservation services?
2. Family support services?

32. For background purposes, we are interested in other data that may be included in your state's 5-year plan. Does your state's plan include any data that portrays, either graphically, in tables, or in narrative, any aspect of child welfare at or before the time the 5-year plan was developed? n=51
1. Yes => Please mail or fax us the relevant pages from your state plan.
2. No

33. (IF "YES" TO QUESTIONS 25, 29, OR 32.) You can mail or fax us (1) documentation related to results of family preservation/family support services in your state, (2) those sections of your state's 5-year plan related to measures that will be used to assess the impacts of these services, or (3) those sections of the state plan related to child welfare data to:

U.S. General Accounting Office
Attn: Ms. Karen Lyons
Federal Office Building, 2800 Cottage Way, Room W-2326
Sacramento, CA 95825
Fax: 916-974-1202

If you have any questions, you can call me at 916-974-3341 (California time). That concludes this interview. Thank you very much for your time and cooperation.
We define family preservation services and family support services as they appear in the Omnibus Budget Reconciliation Act of 1993: Family preservation services are typically designed to help families at risk or in crisis. Services may be designed to (1) prevent foster care placement, (2) reunify families, (3) place children in other permanent living arrangements, such as adoption or legal guardianship, (4) provide followup care to reunified families, (5) provide respite care for parents and other caregivers, and/or (6) improve parenting skills. Family support services are primarily community-based preventive activities designed to promote the well-being of children and families. Services are designed to (1) increase the strength and stability of families, (2) increase parents' confidence and competence in their parenting abilities, (3) afford children a stable and supportive family environment, and (4) otherwise enhance child development. The terms family preservation program and family support program refer to the type of program or model within which specific services are provided. A particular program may be available at multiple sites, or funds may be distributed to multiple service providers to implement a particular program. Family preservation programs are often distinguished by one of the following theoretical approaches or models: Crisis intervention technique forms the basis for the Behavioral Science Institute's Homebuilders model. Key program characteristics include: contact with the family within 24 hours of the crisis; caseload sizes of one or two families per worker; service duration of 4 to 6 weeks; provision of both concrete services and counseling; staff availability to families on a 24-hour basis; and an average of 20 hours of service per family per week. Family systems technique is a model typified by the FAMILIES program, which originated in Iowa. The model focuses on the way family members interact with one another and seeks to correct dysfunction by working on the family's interaction with the community. Teams of workers carry a caseload of 10 to 12 families; families are seen in their own homes for an average of four and one-half months; and both concrete and therapeutic services are provided. Therapeutic family treatment is a model that relies less on the provision of concrete and supportive services and more on family therapy. One of the first such programs was the Intensive Family Services Program, which began in Oregon. Treatment is less intensive than the Homebuilders model and can be delivered in either an office or home setting. Workers carry a caseload of about 11 families, and service duration is 90 days with weekly followup services provided for an average of 3 to 5 1/2 months. Some family preservation programs use slight variations of these existing models or hybrids of several models. Family support programs can be categorized by their type, which is closely aligned with their mission. Common program types are as follows (with nationally recognized programs and models in parentheses): Comprehensive/community family support programs offer a wide array of services and typically serve multiple populations, such as teen parents, juvenile offenders, and jobless adults. Programs tend to be community-based and open to the entire community.
Program components may include some of the more narrowly focused family support programs listed below. (Parent Services Project) Child abuse and neglect prevention programs serve at-risk populations and focus on prevention of abuse and neglect by working to eliminate social isolation. Programs link families to one another and to services, including home visiting, parenting education classes, peer support groups, and child-related services. (The Nurturing Program; Hawaii's Healthy Start (replicated through Healthy Families America)) Economic self-sufficiency programs serve unemployed and/or underemployed parents by offering extensive job preparation, skills development workshops, training sessions, and job placement services. Most programs also provide comprehensive services for families, including referral to other community agencies, mental health services, and tax/legal assistance. (Comprehensive Child Development Program (federal program)) Family literacy programs focus on generating literacy competency in parents and children. Programs are often linked with community-based organizations, including libraries and family learning centers. (Parent and Child Education (PACE); Even Start (federal program); FAMILY MATH; National Center for Family Literacy; SERS Family Learning Centers) Infant and child health and development programs serve families from prebirth until the child reaches the age of 3. Programs are often home-based and incorporate a strong emphasis on health and nutrition. Many programs are linked to healthcare facilities, including hospitals, clinics, and community health facilities. (Maternal Infant Health Outreach Worker (MIHOW) Project) School readiness/achievement programs primarily aim at preparing children for school success. In addition to cognitive skills, many programs stress the development of children's competencies in social, emotional, and physical domains. (Home Instruction Program for Preschool Youngsters (HIPPY); Parents as Teachers (PAT); Teachers Involve Parents in Schoolwork (TIPS)) Situation-specific programs are designed to meet the unique needs of families in specific situations, including homeless families, rural families, refugee families, military families, families with incarcerated members, and single-parent families. (Single Parent Resource Center) Special needs programs primarily serve families whose children have special developmental needs or disabilities. Most programs focus on providing parents with information to enable them to cope with the additional stresses of nurturing special needs children. (Family, Infant, and Preschool Program (FIPP)) Substance abuse prevention programs are sometimes designed for all children and families and are preventive in orientation; in other cases, programs target children and youth known to be at risk or living in substance-abusing family situations. Programs aim at strengthening self-esteem and promoting healthy lifestyles. (Families and Schools Together (FAST)) Wellness programs serve families who are dealing with the normal stresses of parenting. Programs offer a wide range of support to families in the area of parenting education. These programs tend to be co-located--at YWCAs, health councils, and religious service organizations--often functioning as a supplementary service for adults. (Child Rearing Program; Effective Parenting Information for Children (EPIC); The Mothers' Center; Parents Place) In addition to those named above, the following individuals made important contributions to this report: Patricia L.
Elston conducted both the nationwide survey and in-depth interviews for a portion of the states and coauthored the report; Deborah A. Moberly performed these same tasks and conducted computerized analyses of the survey data; and Joel I. Grossman assisted in developing, pretesting, and finalizing the survey instrument.

Pursuant to a congressional request, GAO reviewed the status of states' use of funds for family preservation and support (FPS) services, focusing on: (1) the nature and extent of states' use of federal funds for new and expanded FPS services; and (2) states' plans to assess the impact of these services on children and their families and impacts identified to date. GAO found that: (1) all states reported that they are using federal funds to increase the availability of family preservation and family support services either by creating new programs or expanding existing programs; (2) 44 states said that they introduced new programs; (3) 47 states reported enhancing their existing programs or expanding them to serve more clients; (4) as required by the law, GAO's analysis shows that states appear to be allocating a significant portion of their federal funds to both family preservation and family support services; (5) in the last 2 years, states budgeted 56 percent of their service dollars to family support and 44 percent to family preservation; (6) the somewhat greater emphasis on family support services reflects priorities established through state and community planning efforts; (7) moreover, many states already had family preservation programs in place and decided to bolster family support services; (8) to determine whether this infusion of federal funds improves services for children and families, GAO identified a number of efforts that are underway or planned to assess programs providing FPS services; (9) states plan to track the results of their federally funded services, for example, by measuring the number of clients served and the extent to which their needs are met, improvements in parent-child relationships, the degree that services are coordinated, and indicators of community well-being, such as child abuse rates; (10) although not required to do so, at least 11 states are also planning formal evaluations to determine whether the services actually improve outcomes for families; (11) two federally sponsored evaluations are underway to assess the effectiveness of family preservation and family support services; (12) early results from 10 states indicate some successes, such as preventing child removal and continued maltreatment; and (13) while
it is too early to determine the impact of these programs, federal and state officials report that the extensive community and interagency collaboration required by the law has resulted in improved identification of service needs, setting of priorities, and receipt of services by at-risk families otherwise overlooked.
The secretaries of the Army, Navy, and Air Force have a key role in making decisions on where to locate their services' forces when they are not otherwise employed or deployed by order of the Secretary of Defense or assigned to a combatant command. The service secretaries are authorized, subject to the authority, direction, and control of the Secretary of Defense, to conduct all affairs of their departments—including functions such as organizing, equipping, training, and maintaining force structure. The secretaries also have the authority to construct, maintain, and repair buildings, structures, and utilities, and to acquire the real property or interests in real property necessary to carry out their responsibilities. In addition, the secretaries may assign forces under their jurisdiction to carry out these functions, unless the Secretary of Defense directs otherwise or the forces are assigned to a combatant command. The Secretary of Defense has authority, direction, and control over DOD, including the military services, and may perform any of his functions through organizations of the department as he may designate, unless prohibited by law. Furthermore, the Office of the Secretary of Defense (OSD) was established in part to assist the Secretary of Defense in carrying out his duties and responsibilities and to carry out such other duties as may be prescribed by law. Senior officials within OSD develop policy and guidance for their unique areas of responsibility. For example, among the duties of the Under Secretary of Defense for Acquisition, Technology and Logistics is establishing policies for logistics, maintenance, and sustainment support for all elements of DOD. DOD periodically monitors, as part of its oversight role, its significant investments in military force structure and resources through its Quadrennial Defense Review, which is generally conducted every 4 years. Under law, the Secretary of Defense is to conduct a comprehensive examination of the national defense strategy, force structure, force modernization plans, infrastructure, budget plan, and other elements of the country's defense program and policies with a view toward determining and expressing the nation's defense strategy and establishing a defense program for the next 20 years. The four military services each use different terminology and definitions when describing their basing decision processes. For example, the Army describes its basing decision process as "stationing," the Marine Corps generally uses the term "force laydown," and the Air Force uses the term "beddown." The Navy describes its basing decision process using the terms "strategic laydown" and "strategic dispersal"; the strategic laydown process provides the Navy with a methodology to align, organize, and position naval forces between the Atlantic and Pacific Fleets. The strategic dispersal process is used to determine the distribution of ships by homeport in regard to infrastructure, operational availability, proximity to ranges and support, port loading, quality of service and quality of life, and antiterrorism and force protection factors. For the purposes of this report, we use "basing" to refer to the services' processes to make decisions about where to establish locations for their force structure within the United States (the 50 states and the District of Columbia) that are not made under base realignment and closure (BRAC) legislation.
Our analysis showed that, generally, each of the services has established a basing decision process that uses similar criteria, scope, and methodologies to determine where to locate its force structure within the United States and globally. The basing process begins with the service identifying the goals for the planned change in the location of military force structure. The service then conducts a series of analyses, such as capability and capacity analyses, to determine the specific requirements for meeting those goals. Based on the results of these analyses, potential installations are identified. Further analyses are conducted using cost estimates and environmental considerations to develop a list of candidate basing locations. The candidate locations are presented to the service's leadership, and after further review, a final basing decision is reached. Throughout their processes, the services conduct multiple risk assessments; coordinate with internal and external stakeholders, including combatant commanders; and use military judgment to support their decisions. The services have guidance documents that are used to implement the processes for making basing decisions within the United States that are not made under the BRAC legislation. This guidance and its implementation are part of the services' management control, which provides oversight of the basing processes. In addition, service officials stated that the same guidance and processes are used to make overseas or global basing decisions. The Army, Marine Corps, and Air Force use a comprehensive regulation, order, and instruction, respectively, for their processes. According to Navy officials, the Navy currently uses five guidance documents to implement its basing decision process: (1) the Chief of Naval Operations Instruction: Navy Organization Change Manual, (2) the Strategic Laydown Flow Chart, (3) the Strategic Dispersal Flow Chart, (4) the Chief of Naval Operations Instruction: Environmental Readiness Program Manual, and (5) the Secretary of the Navy Instruction: Environmental Planning for Department of the Navy Actions. As an aspect of management control—to continually seek ways to better achieve an agency's mission and program results—each of the services is taking steps to strengthen its basing process. The Army and Air Force have made revisions to their regulation and instruction, respectively, to incorporate changes made in how their processes are conducted. For example, Army officials stated that the Army's basing regulation will incorporate an analysis of military value, which was identified as a priority criterion to be used by the Secretary of Defense during the BRAC process. Army officials said that the addition of this analysis in its process will provide more data to its leaders for making future basing decisions. Air Force officials told us that the Air Force recently changed from a decentralized to a centralized process to better clarify roles and responsibilities in the process and to ensure that the Air Force performs an objective review of all operational and training options. The Marine Corps' most recent revisions to its basing process clearly emphasize the integration of strategic guidance (top-down direction) and commander-generated recommendations (bottom-up requests); mandate a detailed, integrated examination of doctrine, organization, training, materiel, leadership, personnel, and facilities; and explicitly define leadership roles and responsibilities.
Navy officials stated that while the Navy has used its strategic laydown process to make basing decisions for the past 20 years, it recently refined the process and added a strategic dispersal process, which was designed to align with the transformation described in the 2006 Quadrennial Defense Review and the Navy's Maritime Strategy. To assist in evaluating the military services' basing decision processes, we developed an assessment tool that included the key elements, factors within the elements, and management control standards that are part of a comprehensive process and that, when incorporated in the process, increase its transparency, repeatability, and defendability. Our tool includes four key elements—strategic and force structure planning, infrastructure analysis, implementation considerations, and authority for making the basing decision—together with various factors that make up each element (see table 1). Within each of the four key elements are a series of factors that represent supporting analyses and activities that are important for completing the element. The strategic and force structure planning element, for example, includes factors such as national strategies, DOD and service planning and guidance documents, the results of risk assessments, and military judgment. Risk assessment is also considered as a factor in the infrastructure analysis and implementation considerations elements and as a standard for management control. In commenting on our assessment tool, OSD and service officials agreed that our tool was reasonable and complete. Management control underpins the entire basing process, and the Standards for Internal Control in the Federal Government provides a foundation that can help government program managers achieve desired results through effective stewardship of public resources. Management control comprises the plans, methods, and procedures used to meet the organization's missions, goals, and objectives and consists of five standards—control environment, risk assessment, control activities, information and communications, and monitoring. For example, the standards recommend that an organization issue a governing instruction that specifies who is responsible for each step of a process, including oversight and review of decisions made at critical steps by an official or group other than those who made the original decision, and that directs those responsible to document the steps of a key decision process, such as the basing decision process. The Army, Marine Corps, and Air Force basing decision processes include all of the key elements, associated factors, and management control standards that we identified as necessary in a comprehensive process and that, when incorporated in the process, increase its transparency, repeatability, and defendability. However, the Navy needs additional guidance for its infrastructure analysis—a key element—and for related management control standards for its basing process to be complete. We found, for example, that one of the Navy's guidance documents—the Strategic Dispersal Flow Chart—did not provide details about how and by whom specific actions will be done during the process.
In addition, management control underpins all aspects of a basing decision process, and the Standards for Internal Control in the Federal Government recommends policies and procedures that enforce management's directives, specify who is responsible for each step of the process (including oversight and review of decisions made), and direct those responsible to maintain appropriate documentation. Specifically, we found that some of the Navy's guidance documents do not provide detailed information about how certain types of analyses will be completed and who is responsible for completing them. Additionally, Navy officials acknowledged that the Navy has not clearly described the linkage between all five guidance documents it uses to implement its basing decision process. Without comprehensive and clear guidance for its overall basing decision process, the Navy may lack assurance that its basing decisions can withstand external stakeholders' examination and scrutiny or that its basing process will be implemented effectively. Our assessment found that the Army, Marine Corps, and Air Force basing processes incorporated all of the key elements, associated factors, and management control standards that we identified as necessary for a process to be comprehensive and its decisions to be transparent, repeatable, and defendable. However, the Navy has not provided complete guidance for its infrastructure analysis—a key element—and for some of its related management control standards in its basing process. Figure 1 summarizes our assessment and the rating we assigned to the key elements and management control for each of the services' basing decision processes. No service's process was rated in the lowest category, "incorporates to little or no extent." During our assessment, we found that the Army, Marine Corps, and Air Force incorporate the key elements and management control to a large extent. The following are examples of how each of these services incorporated one of the key elements and the management control standards during its basing process:

Strategic and force structure planning element: According to Army planning officials, they would ask about the strategic risk of performing a mission or not performing a mission and would complete tactical and strategic risk analyses using the Army's force structure.

Infrastructure analysis element: In implementing its guidance, the Marine Corps required that a list of location alternatives and associated implications be submitted to the Marine Requirements Oversight Council for approval.

Implementation considerations element: According to officials, the Air Force would rank the potential locations and determine which locations could best meet the Air Force's basing needs.

Management control standards: The Army, Marine Corps, and Air Force guidance documents clearly defined which office is responsible for each step of the process and who had the authority to make decisions at various steps, allowed for oversight and review of decisions made at critical steps, and developed records associated with various steps that provided evidence that the process was being followed.

We also found that the Navy incorporated to a large extent three of the four key elements in its basing process.
For example, in the implementation considerations element, as part of the Navy's basing process, the Navy uses its Environmental Readiness Program Manual, which considers regional or installation infrastructure plans, detailed cost estimates, environmental impacts, socioeconomic impacts, coordination with and input from other stakeholders, risk assessment, and military judgment during the process of assessing environmental impact. In addition, the Navy has coordinated with senior leadership within the Office of the Secretary of the Navy and Naval Facilities Engineering Command and with other applicable agencies, such as the U.S. Fish and Wildlife Service, the National Marine Fisheries Service, the U.S. Army Corps of Engineers, and the Environmental Protection Agency. Furthermore, the Navy has performed risk assessments for such events as hurricanes, man-made disasters, and other military and port threats. However, for its infrastructure analysis key element and for related management control standards, the Navy needs additional guidance for its process to be complete. Our assessment found that some of the guidance the Navy uses to implement its basing process is incomplete. The Army, Marine Corps, and Air Force have a regulation, order, and instruction, respectively, that describe the organizational roles and responsibilities; the links between other necessary strategic and environmental guidance documents; and the service basing analyses, factors, and criteria that should be used when making basing decisions. However, some of the Navy's current guidance documents, primarily those used for the infrastructure analysis key element and management control, do not contain detailed information about the specific actions that are taken during its basing process or clearly define who is responsible for completing certain types of analyses. In addition, according to Navy officials, the Navy uses the following five guidance documents to implement its overall basing decision process: (1) the Chief of Naval Operations Instruction: Navy Organization Change Manual, (2) the Strategic Laydown Flow Chart, (3) the Strategic Dispersal Flow Chart, (4) the Secretary of the Navy Instruction: Environmental Planning for Department of the Navy Actions, and (5) the Chief of Naval Operations Instruction: Environmental Readiness Program Manual. However, Navy guidance does not provide a clear explanation of how all of these guidance documents are linked together in the process. In reviewing the infrastructure analysis element of the process, we found that the Navy's Strategic Dispersal Flow Chart neither includes sufficient detail about the specific actions nor provides clearly defined responsibilities in the organization for completing and coordinating them. For example, the flow chart shows that capability and capacity analyses of potential homeport locations are conducted: the capability analyses take into consideration access to training areas, sailor quality of life, family quality of life, and the collocation of ships and support units, while the capacity analyses take into consideration planned military construction projects, port capacity and loading, pier space, and ship size. However, the Strategic Dispersal Flow Chart does not describe in any detail how these analyses are to be conducted or who is to conduct them.
Furthermore, while Navy officials stated that there are working groups with appropriate stakeholders throughout the Navy's basing process, we found that the Navy's Strategic Dispersal Flow Chart does not describe in detail the type of coordination with other stakeholders that should occur. For management control, our assessment showed that some of the Navy's five guidance documents only partially describe the standards for management control—control environment, risk assessment, control activities, information and communications, and monitoring. Specifically, some of the Navy's basing process guidance documents do not describe how risk is evaluated and who conducts this analysis; provide detail to show how information flows down, across, and up the organization or identify the means of communication with external stakeholders; clearly define key areas of authority and responsibility and establish appropriate lines of reporting; properly document policies and procedures, such as approvals and the creation and maintenance of related records, which would provide evidence that these activities have been executed; show how regular management and supervisory activities and other actions are performed during the normal course of the basing decision process; or clearly link all five guidance documents to enforce management's directives. Two of the Navy's guidance documents lack specific key management controls. First, the Navy's Strategic Laydown Flow Chart does not describe how risk should be evaluated. Second, the Navy's Strategic Dispersal Flow Chart does not show how risk assessments are conducted and evaluated or who is responsible for them; how information is disseminated within the organization and how it is exchanged with external stakeholders; the key areas of authority and responsibility and the appropriate lines of reporting; the proper documentation to be created and maintained in executing the process; how regular management and supervisory activities are performed during the normal course of Navy officials' duties; or the organizational roles and responsibilities for completing and coordinating this process. While each of the Navy's five guidance documents for its basing process provides support for one or more key elements or for management control, Navy officials could not identify to us any guidance or related documents that clearly describe how these guidance documents are linked together in the process. For example, Navy officials told us that the flow charts describing its strategic laydown and strategic dispersal processes were the primary documentation used to support the Navy's basing methodology. However, these flow charts do not describe the Navy's entire basing decision process. Specifically, the flow charts do not provide references to show that the Navy's organization change manual and the two environmental planning guidance documents are also a part of the overall basing process. In addition, Navy officials acknowledged that without the linkage of these five documents, the Navy's basing process may not be transparent to outside stakeholders.
Since the five guidance documents are not all clearly linked, Navy management and staff may not have a clear and complete understanding of the roles, responsibilities, and relationships between the various organizations in the process; the range of actions, analyses, and supporting documentation required; and the interrelationship of all the elements, factors, and management control standards needed to implement the process. The Secretary of Defense has not set a policy or assigned an office a clear role for providing management control of the services' basing decision processes within the United States that are not made under the BRAC legislation and, as a consequence, may lack reasonable assurance that certain DOD-wide initiatives will be fully supported in service basing decisions. Specifically, in its 2007 Defense Installations Strategic Plan, DOD indicated it would attempt to reshape the overall structure of its installations in the United States to better support all DOD components and joint warfighting needs. In addition, DOD is continuing its efforts to reduce the number of troops permanently stationed overseas and consolidate overseas bases. Moreover, the 2007 Defense Installations Strategic Plan's "Right Management Practices" goal suggests that DOD intends to embrace best business practices and modern asset management techniques to improve its installation planning and operations. Standards for Internal Control in the Federal Government recommends that management control be built into an organization to help managers run it and achieve their aims on an ongoing basis. OSD officials told us that OSD provides management control over basing issues through its annual reviews of the services' budgets and other program reviews, such as the Quadrennial Defense Review. According to OSD officials, even though OSD is developing policy and plans to prepare guidance for its overseas basing process, which DOD refers to as global basing, OSD has no current plans to develop a policy for the services' basing processes within the United States. As a result, these officials acknowledged that there is no departmentwide policy that provides direction to the military services on how departmentwide issues, such as the potential sharing of DOD facilities by the services and global basing and operations, should be considered in evaluating domestic basing alternatives. Furthermore, the Secretary of Defense has not sufficiently delegated to an office within OSD a clear line of authority and responsibility for providing the guidance and oversight of the services' domestic basing processes. Nonetheless, officials from the offices of the Under Secretary of Defense for Policy and the Deputy Under Secretary of Defense for Installations and Environment told us that it is important for the military services to consider any potential impacts that the services' basing decisions could have on joint sharing of DOD facilities and on global basing and operations. However, these officials also stated that it is unclear to what extent the services' basing processes include risk assessment questions that take into consideration a cross-service perspective of base planning to share DOD facilities jointly and any impacts that the services' basing decisions within the United States may have on global basing and operations. OSD officials stated that DOD has recently taken steps toward establishing an integrated process to assess and adjust global basing.
DOD established the Global Posture Executive Council, which is responsible for facilitating global posture decisions and overseeing the assessment and implementation of global posture plans. Despite these positive steps, in a July 2009 report we identified a weakness in DOD's approach. Specifically, as of July 2009, when we issued our report, DOD had not yet reported on global posture matters in a comprehensive manner. In that report, DOD concurred with our recommendations to (1) issue guidance establishing a definition and common terms of reference for global defense posture; (2) develop guidance requiring the geographic combatant commands to establish an approach to monitor initiative implementation, assess progress, and report on results; and (3) establish criteria and a process for selecting and assigning lead service responsibilities for future locations. OSD officials told us that since the services use the same processes for making basing decisions both within the United States and globally, OSD could similarly exercise management control of the services' basing processes through its global defense posture policy to oversee basing decisions within the United States, but it had generally not done so to date. In addition, these officials stated that a draft of the global defense posture policy was expected in spring 2010; however, officials did not know when it would be formally issued. Without implementing a DOD-wide policy that includes guidance and oversight of the military services' basing processes and assigns an OSD office the authority and responsibility for providing this oversight, the Secretary of Defense lacks reasonable assurance that DOD plans for sharing facilities among the services, possible impacts on global basing and operations, or other departmentwide issues are adequately considered by the services in their basing decision making. While the Army, Marine Corps, and Air Force each have established comprehensive basing processes for determining where to base their force structure in the United States, the lack of completeness in two of the Navy's five guidance documents and the lack of clear linkage among its multiple guidance documents may limit the understanding of the Navy's process both internally and externally and the Navy's ability to implement its process consistently. Without comprehensive basing processes with detailed guidance and instructions, DOD may not have assurance that the services' basing decisions are transparent, repeatable, and defendable. Additionally, in light of the substantial costs that can result from the services' basing decisions and their potential strategic and socioeconomic impacts on DOD operations and on the interests of the communities surrounding the bases, it is important to include DOD-wide considerations, such as joint use of facilities by the services and global basing and operations, in the services' basing processes. While DOD does exercise management control through its budget and program reviews, the department may not have sufficient guidance and oversight of the services' basing processes to ensure that departmentwide priorities are fully considered in the services' basing decisions. To improve the Navy's ability to make well-informed basing decisions that are transparent, repeatable, and defendable, we recommend that the Secretary of Defense direct the Secretary of the Navy to take the following three actions to strengthen the Navy's guidance and associated documentation for its basing decision process:
1. In its Strategic Laydown Flow Chart, clearly describe how risk is evaluated.

2. In its Strategic Dispersal Flow Chart, clearly describe how risk is evaluated and who conducts this analysis; how information flows within the organization; the means of communication with internal and external stakeholders; the areas of authority and responsibility and appropriate lines of reporting; how documents and related records are to be properly maintained to provide evidence that these activities were executed; how regular management and supervisory activities and other related actions are performed during the normal course of this process; and the organizational responsibilities for completing and coordinating the dispersal process actions.

3. Describe the link between the Navy's five guidance documents—the Chief of Naval Operations Instruction: Navy Organization Change Manual; the Strategic Laydown Flow Chart; the Strategic Dispersal Flow Chart; the Secretary of the Navy Instruction: Environmental Planning for Department of the Navy Actions; and the Chief of Naval Operations Instruction: Environmental Readiness Program Manual—used to implement the Navy's overall basing decision process.

We further recommend that the Secretary of Defense take the following two actions:

Identify a lead office within OSD best suited to have the authority and responsibility for providing oversight of the services' domestic basing decision processes.

Establish guidance for the services to ensure that they fully consider joint use of DOD facilities, impacts on global operations, and other departmentwide initiatives during the course of their basing processes.

Officials from the offices of the Under Secretary of Defense for Policy, the Deputy Under Secretary of Defense for Installations and Environment, the Office of the Secretary of the Navy (Installations and Environment), and the Office of the Chief of Naval Operations (Information, Plans, and Strategy) provided oral comments on a draft of this report. In the comments, DOD concurred with two, partially concurred with two, and nonconcurred with one of our recommended actions. DOD also provided an opinion on text contained in appendix II, which summarized the Navy's decision to homeport a nuclear-powered aircraft carrier at Mayport, Florida. Specifically, DOD concurred with our recommendation that the Secretary of Defense direct the Secretary of the Navy to clearly describe how risk is evaluated in the Navy's Strategic Laydown Flow Chart. DOD stated that our report identified a seam between existing Secretary of the Navy instructions, which generally deal with how to conduct homeport analysis, such as environmental impact studies and National Environmental Policy Act compliance, and existing Office of the Chief of Naval Operations guidance. However, DOD does not identify any actions it plans to take to implement what we recommended. DOD partially concurred with our recommendation that the Secretary of Defense direct the Secretary of the Navy to clearly describe in the Navy's Strategic Dispersal Flow Chart several areas of consideration, such as how risk is evaluated and who conducts this analysis, how information flows within the organization, and the means of communication with internal and external stakeholders. DOD stated that factors involved in homeport decisions are codified and implemented by the Navy Organization Change Manual.
However, the Navy Organization Change Manual currently addresses none of the elements of our recommendation with regard to the Strategic Dispersal Flow Chart process and instead provides guidance only for the strategic laydown process. Regarding the strategic and force structure planning assessment, DOD also acknowledged that providing specific guidance on and reference to the above-recommended considerations in a Secretary of the Navy or Chief of Naval Operations instruction would likely improve the overall clarity of homeporting decisions. Nonetheless, DOD does not identify any actions that the Navy plans to take to implement our recommendation. DOD concurred with our recommendation that the Secretary of Defense direct the Secretary of the Navy to describe the link between its five guidance documents—the Chief of Naval Operations Instruction: Navy Organization Change Manual; the Strategic Laydown Flow Chart; the Strategic Dispersal Flow Chart; the Secretary of the Navy's environmental planning document; and the Chief of Naval Operations' environmental planning document—used to implement the Navy's overall basing decision process. DOD agreed that a linkage between the Chief of Naval Operations and Secretary of the Navy guidance documents is necessary in order to better streamline and designate responsibilities for strategic homeporting decisions. However, DOD's comment addresses only three of the relevant documents and omits discussing linkages with the other two. We continue to believe that an explicit connection between all five guidance documents is needed to ensure that stakeholders have a complete understanding of the process used to make basing decisions. Furthermore, the Navy did not indicate what actions it plans to take to implement our recommendation or the time frame for doing so. DOD nonconcurred with our recommendation that the Secretary of Defense identify a lead office within OSD best suited to have the authority and responsibility for providing oversight of the services' domestic basing decision processes. DOD asserted that the Secretary of Defense has adequate oversight of the services' domestic basing decision processes through the budget review and the Global Posture Executive Council. However, if DOD relies on the budget process, OSD may lack reasonable assurance that it can effectively influence domestic basing decisions because OSD may not have been a stakeholder in the services' basing decisions during the planning and budgeting phases. Moreover, as our report clearly states, OSD told us that it has not used the Global Posture Executive Council for conducting oversight, raising questions about how a process not used for OSD oversight will assist OSD in actually exercising oversight. Our recommendation was intended to fortify OSD management oversight of the services' basing decision processes, and we continue to believe that a lead office should be designated within OSD that could provide the necessary proactive management oversight of and guidance over service basing processes and decisions. DOD partially concurred with our recommendation that the Secretary of Defense establish guidance for the services to ensure that they fully consider joint use of DOD facilities, impacts on global operations, and other departmentwide initiatives during the course of their basing decision processes. DOD stated that the Secretary of Defense provides guidance on joint use of DOD facilities through several means, including the Quadrennial Defense Review and the program review.
In addition, DOD stated that the department will periodically review and revise this guidance as appropriate to ensure that consideration and application of joint-use principles and cross-service impacts are institutionalized. Even though OSD may issue guidance on joint use of DOD facilities through these means, the Quadrennial Defense Review is intended to occur only every 4 years, which may not provide timely direction on departmentwide initiatives because such initiatives do not necessarily arise at 4-year intervals. Moreover, DOD did not explain how the program review is useful in influencing service basing decisions. While DOD did state that it would periodically review and revise guidance, DOD did not identify the guidance to be reviewed and revised. DOD additionally provided a comment on the text related to the Navy's decision to homeport a nuclear-powered aircraft carrier at Mayport, Florida, which is summarized in appendix II. In regard to our statement in the report that "the Department of the Navy made its recent decision to homeport a nuclear-powered aircraft carrier at Naval Station Mayport using its strategic laydown and strategic dispersal processes and its environmental planning guidance documents," DOD stated that while many of the principles for strategic laydown were used in making the Mayport decision, the decision preceded the 2007 Navy Organization Change Manual, which describes the current laydown goals. DOD stated that prior to 2007 the Navy conducted a strategic laydown that determined the East Coast-West Coast split of forces by platform type, but not the dispersal of specific ships to specific locations. However, a senior Navy official within the Office of the Chief of Naval Operations (Information, Plans, and Strategy) clarified to us that the decision did go through the strategic laydown process existing at the time and through the strategic dispersal process, as the current concept was being developed when the Navy made its decision. Consequently, we revised our appendix to clarify that the Navy used the strategic laydown process existing at the time the Mayport decision was being made. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To determine the extent to which the services have comprehensive basing decision processes in place that are designed to result in well-informed basing decisions within the United States (the 50 states and the District of Columbia) that are not made under the base realignment and closure (BRAC) legislation, we identified and examined the military service guidance, policies, instructions, regulations, and orders relevant to making basing decisions. We also identified other appropriate Department of Defense (DOD) documents, such as the 2001, 2006, and 2010 Quadrennial Defense Reviews, DOD's 2008 and 2009 Strategic Management Plans, and the 2007 Defense Installations Strategic Plan.
In addition, to identify their participation in the services’ basing decision processes, we interviewed officials from the offices of the Under Secretary of Defense for Policy and Deputy Under Secretary of Defense for Installations and Environment; the Joint Staff; U.S. Joint Forces Command; U.S. Northern Command; U.S. Southern Command; U.S. Army Pacific Command; the offices of the Chief of Staff of the Army, Chief of Naval Operations, Commandant of the Marine Corps, and Chief of Staff of the Air Force; U.S. Fleet Forces Command; and Air Combat Command. We documented each process and then discussed each respective service’s process with officials from the offices of the Chief of Staff of the Army, Chief of Naval Operations, Commandant of the Marine Corps, and Chief of Staff of the Air Force to confirm our understanding of the service’s basing process. We used the services’ guidance documents and other pertinent documents, interviews with the service officials, and officials’ comments regarding our analyses of the services’ processes to determine the extent to which the services have comprehensive basing decision processes in place that are designed to result in well-informed basing decisions within the United States that are not made under BRAC legislation. To establish criteria to use in assessing each service’s current basing process, we developed an assessment tool to identify the key elements, factors, and management control standards of a basing decision process that would be comprehensive and ensure that the basing decisions are transparent, repeatable, and defendable. In developing this assessment tool, we conducted a literature search to identify relevant standards for criteria and planning processes in prior GAO reports on relevant subject areas, including results-oriented government, resource decisions, internal control, military force structure issues, defense management challenges, and BRAC legislation. Furthermore, as part of our review, we considered the factors included in the House Committee on Armed Services’ report on H.R. 2647—on changes to military force structure, strategic imperative and risk assessment, cost, input from combatant commanders, and environmental and socioeconomic impacts. Based on our research, we identified four key elements for the assessment tool: (1) strategic and force structure planning, (2) infrastructure analysis, (3) implementation considerations, and (4) authority for making the basing decision. In addition, we identified management control as part of our evaluation tool. We also determined factors within each key element and the standards within management control that were necessary evaluation criteria in our assessment tool. To determine the completeness and reasonableness of our assessment tool, we developed and distributed a structured data collection instrument to officials within the offices of the Under Secretary of Defense for Policy and the Deputy Under Secretary of Defense for Installations and Environment and to service officials in the Army, Navy, Marine Corps, and Air Force headquarters to obtain their comments. We held discussions with these officials to reach agreement on the key elements, factors within each element, and management control standards that were in our assessment tool. Based on the results of the data collection instrument and our follow-on discussions with DOD and service officials, we finalized our assessment tool. 
Our analyst team was assigned to assess and evaluate the four services' basing decision processes, one service per team analyst. Using the assessment tool, we reviewed and assessed each of the processes used by the services to make basing decisions within the United States that were not made under the BRAC legislation. Each team analyst examined the collective evidence concerning his or her service's basing decision process, which was found in a service regulation, instruction, or order or in other documents. Using the service's regulation, instruction, or order; other pertinent documents; and discussions with service officials, each team analyst applied professional judgment to determine if the service's process included a step (or multiple steps) that satisfied the defined factors within each of the key elements. We assigned a rating to each process based on the extent to which the service incorporated the factors and standards within the key elements and management control, respectively, that our tool identified as necessary for a process to be comprehensive and its decisions to be transparent, repeatable, and defendable. Based on the extent to which these factors and standards were incorporated in the service's process, we assigned one of three possible ratings to each element: (1) incorporates to a large extent, (2) incorporates to some extent, or (3) incorporates to little or no extent. According to our methodology, we assigned a rating of "incorporates to a large extent" when a factor showed sufficient, specific, and detailed support, as noted in the service's basing guidance document(s) or during discussions with agency officials, on whether the factor was carried out during the basing process. If the process addressed some of the factors within the key elements to some degree, but not completely, we assigned a rating of "incorporates to some extent," and if the evidence showed that the factors were not included, we assigned a rating of "incorporates to little or no extent." We used the same rating system for the presence of management control standards throughout a service's basing process. If a team analyst could not clearly determine the extent to which a service's process satisfied the criteria for a factor, the factor was rated as "unclear." This same methodology was also applied to the five standards for management control. After each team analyst completed the evaluation and assessment of his or her service's basing decision process, the evaluation was validated by discussion with the whole team in a group setting. Because we developed the key elements, factors within the elements, and management control standards, as noted in our assessment tool, with input and guidance from the Office of the Secretary of Defense (OSD) and the services, we also provided the services an opportunity to review and comment on our analysis of their respective processes against our assessment tool. After receiving comments from each service through a structured data collection instrument, including clarifying information to resolve any ratings of "unclear," the team updated the ratings as necessary. In addition, to determine whether the ratings were accurate, the team analysts performed in-depth reviews of each other's evaluations of the services' basing decision processes. After rating each factor within each key element and the management control standards, each team analyst then analyzed and determined the summary rating for each key element and for management control.
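To illustrate the rating roll-up just described, the following minimal Python sketch shows how equally weighted, three-level factor ratings (plus an "unclear" flag) might be combined into an element-level summary. It is illustrative only, not a tool we used; the function name, the sample data, and the all-or-none roll-up rule are assumptions made for the example.

# Illustrative roll-up of factor ratings into a summary rating for one key
# element. The three-level scale mirrors the report; the roll-up rule below
# (all "large" -> large; all "little" -> little; otherwise "some") is an
# assumed simplification, not GAO's documented method.
LARGE = "incorporates to a large extent"
SOME = "incorporates to some extent"
LITTLE = "incorporates to little or no extent"
UNCLEAR = "unclear"

def summarize_element(factor_ratings):
    """Summarize equally weighted factor ratings for one key element."""
    if UNCLEAR in factor_ratings:
        return UNCLEAR  # unresolved factors are clarified with the service first
    if all(r == LARGE for r in factor_ratings):
        return LARGE  # every factor showed sufficient, specific, detailed support
    if all(r == LITTLE for r in factor_ratings):
        return LITTLE  # no factor was supported by the evidence
    return SOME  # factors addressed to some degree, but not completely

# Hypothetical factor ratings for one service's infrastructure analysis element.
print(summarize_element([LARGE, LARGE, SOME, LARGE]))  # incorporates to some extent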
Because each individual factor and the management control activities were considered to be necessary for a process to be transparent, repeatable, and defendable, the factors and the management control standards were weighted equally. The summary rating, presented in figure 1 of the report, describes the extent to which each service's process incorporates the key elements and management control standards. To determine the extent to which the Secretary of Defense exercises management control, such as providing DOD-wide guidance and oversight of the services' basing decision processes, we reviewed DOD and military service guidance, policies, instructions, regulations, and orders and relevant law to identify whether an office within OSD has been clearly assigned a role and responsibilities over the services' basing processes. We reviewed the 2007 Defense Installations Strategic Plan, which was developed by the office of the Deputy Under Secretary of Defense for Installations and Environment, to determine DOD's strategic goals for its installations. We also reviewed our prior report on global defense posture and the recommendations made in that report to improve the global defense posture policy. We also interviewed officials from the offices of the Under Secretary of Defense for Policy and the Deputy Under Secretary of Defense for Installations and Environment to obtain their perspectives on how DOD exercises management control, such as oversight to coordinate and facilitate basing decisions across the services. In addition, we interviewed military service officials regarding OSD guidance provided to them during the services' basing decision processes. To address the request for information about the approach used by the Navy in making its decision to establish a homeport for a nuclear-powered aircraft carrier at Mayport, Florida, we reviewed key Navy and DOD strategy and planning documents, including reports of the Quadrennial Defense Reviews of 2001, 2006, and 2010; the Navy's 2007 A Cooperative Strategy for 21st Century Seapower; and relevant Navy instructions and documents. In addition, we reviewed relevant law and legislative history concerning homeporting a nuclear-powered aircraft carrier at Mayport and examined a 1992 Navy report to Congress and a March 1997 Final Programmatic Environmental Impact Statement discussing the facility upgrades required to homeport a nuclear-powered aircraft carrier at Mayport. Furthermore, we reviewed the November 2008 Final Environmental Impact Statement for the Proposed Homeporting of Additional Surface Ships at Naval Station Mayport, Florida, and the January 2009 Navy Record of Decision for Homeporting of Additional Surface Ships at Naval Station Mayport, Florida. To identify and obtain an understanding of the decision process followed by the Navy, we interviewed officials from the offices of the Under Secretary of Defense for Policy, the Deputy Under Secretary of Defense for Installations and Environment, the Assistant Secretary of the Navy (Installations and Environment), and the Chief of Naval Operations; the Office of Cost Assessment and Program Evaluation; U.S. Fleet Forces Command; Naval Facilities Engineering Command Southeast; and Naval Station Mayport. We visited facilities and interviewed officials at Naval Station Mayport, Florida, to understand the extent of the potential upgrades required to support homeporting a nuclear-powered aircraft carrier.
We also visited Naval Air Station North Island, California, to observe and discuss with Navy officials the infrastructure upgrades made to increase its capabilities and capacities to berth and homeport nuclear-powered aircraft carriers on the West Coast and to increase our understanding of the potential scope of upgrades that would be needed at Naval Station Mayport. In addition, we interviewed OSD officials involved in the 2010 Quadrennial Defense Review to assess the Navy’s decision to homeport a nuclear-powered aircraft carrier in the broad context of future threats, future Navy force structure, and likely cost-effectiveness. (App. II provides a summary of the Navy’s decision to homeport a nuclear-powered aircraft carrier at Naval Station Mayport, Florida, and information on DOD’s Quadrennial Defense Review of the Navy’s decision.) We conducted our performance audit from July 2009 through May 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The possibility of homeporting a nuclear-powered aircraft carrier at Naval Station Mayport was considered by Congress as early as 1990 in the National Defense Authorization Act for Fiscal Year 1991, which required the Secretary of Defense to submit to Congress a plan to upgrade Naval Station Mayport’s capability to enable the station to service nuclear-powered aircraft carriers and otherwise to serve as a homeport for these carriers. Since that time, provisions of other National Defense Authorization Acts have required, among other things, that the Secretary of the Navy (1) submit to the congressional defense committees a report on the Navy’s plan for developing a second East Coast homeport for nuclear-powered aircraft carriers and (2) begin design activities for such military construction projects as may be necessary to make Mayport capable of serving as a homeport for a nuclear-powered aircraft carrier. In addition, the National Defense Authorization Act for Fiscal Year 1993 included a congressional finding that Naval Station Mayport ought to be the second East Coast homeport for nuclear-powered aircraft carriers when an additional homeport was needed. The Navy has been reporting to Congress since the late 1990s on the development of plans for making Naval Station Mayport a potential homeport for nuclear-powered aircraft carriers. In addition, in March 1997, the Navy released a programmatic environmental impact statement. In 2001, the Quadrennial Defense Review called for the Navy to provide more warfighting assets more quickly to multiple locations. To meet this new demand, the Navy made its preliminary decision to homeport additional fleet surface ships at Naval Station Mayport. As a result, the Navy prepared an environmental impact statement to evaluate a broad range of strategic homeport and dispersal options for Atlantic Fleet surface ships at this location and issued its final environmental impact statement in November 2008. On January 14, 2009, the Navy issued its record of decision to homeport a nuclear-powered aircraft carrier at Naval Station Mayport, Florida.
According to Navy officials, the Department of the Navy made its recent decision to homeport a nuclear-powered aircraft carrier at Naval Station Mayport using its strategic laydown and strategic dispersal processes and its environmental planning guidance documents. In addition, the Navy stated in its record of decision that the most critical considerations in making the decision were the environmental impacts, recurring and nonrecurring costs associated with changes in surface ship homeporting options, and strategic dispersal considerations. However, according to its record of decision, the need to develop a hedge against the potentially crippling results of a catastrophic event was ultimately the determining factor in the Navy’s decision to establish a second nuclear-powered aircraft carrier homeport on the East Coast of the United States at Mayport. The Navy has historically had multiple aircraft carrier homeports on each coast. Currently, the Navy has three nuclear-powered aircraft carrier homeports on the West Coast—Bremerton and Everett, Washington, and San Diego, California—and one East Coast carrier homeport in the Hampton Roads area, which includes Norfolk and Newport News, Virginia. According to Navy officials, the Navy used elements of the strategic laydown process in place at the time the Mayport decision was being made to apportion the fleet between the Pacific (West) Coast and the Atlantic (East) Coast based on its force structure analysis. According to officials, the process relies on several documents, including conventional campaign plans; homeland defense requirements; A Cooperative Strategy for 21st Century Seapower; the Navy 2030 Ashore Vision; the 2001 and 2006 Quadrennial Defense Reviews; and the Global Maritime Posture. Based on these strategic laydown analyses, the Navy developed a baseline for the total Navy force structure to try to optimize the sourcing of forces based on the speed of response, the maritime strategy, and the Quadrennial Defense Review direction. Navy officials said that they then used the output from the strategic laydown process to perform the strategic dispersal process, which allowed the Navy to further assess and determine the distribution of the fleet by homeport based on strategic requirements and the ability to balance operational, fiscal, and infrastructure factors. Based on its analysis, the Navy decided to establish a second East Coast homeport for a nuclear-powered aircraft carrier. Navy officials said that the Navy worked on the assumption that it would not establish a new carrier homeport but would instead upgrade an existing carrier homeport to support nuclear-powered aircraft carriers. Navy officials said that Naval Station Mayport was the best option because it was an existing conventional carrier homeport with facilities that had been underutilized since the USS John F. Kennedy was retired in 2007. According to Navy officials, the Navy used its strategic dispersal process to evaluate key operational factors, such as response time to combatant commands, transit times to deployment areas and training, geographic location of air wings, historic aircraft carrier loading, physical pier capacity, transit times from pier side to open ocean, antiterrorism and force protection, and mitigation of natural and man-made risks for both the Hampton Roads area and Naval Station Mayport.
For example, the Navy believes the following constitute risk factors associated with the nuclear-powered aircraft carrier consolidation in Hampton Roads: (1) a singular homeport, maintenance, and support location; (2) all of the Atlantic Fleet’s nuclear-powered aircraft carrier trained crews, associated community support infrastructure, and nuclear carrier support facilities within a 15 nautical mile radius; (3) a single 32 nautical mile access channel with two major choke points (bridges); (4) an approximately 3-hour transit time from carrier piers to open ocean; and (5) the planned significant increase in commercial shipping volume because of the planned Craney Island upgrades. Furthermore, the Navy used the U.S. Coast Guard’s Port Threat Assessments for the Coast Guard Sectors of Hampton Roads and Mayport, which determined that the overall threat level for Hampton Roads is moderate, while the overall threat level for Mayport is low. According to the threat assessments, a moderate threat level indicates that a potential threat exists against the port and that one or more groups have either the intention or capability to employ large casualty-producing attacks or cause denial of commercial, military, and passenger vessel access to the port, while a low threat level indicates that little or no information exists on one or more groups with a capability or intention to damage the port. Navy officials also identified the following benefits associated with homeporting a nuclear-powered aircraft carrier at Naval Station Mayport: the shortest access to the Atlantic Ocean of any current Navy homeport, an additional dispersed controlled industrial facility and nuclear support facilities, physical separation of East Coast nuclear-powered aircraft carriers, physical separation between piers and shipping lanes, smaller commercial shipping traffic volume, and strategic and operational flexibility. Using the Navy’s environmental planning guidance documents, officials from the Navy’s Fleet Forces Command completed a final environmental impact statement in November 2008, in accordance with the National Environmental Policy Act, to evaluate a broad range of strategic homeport and dispersal options for Atlantic Fleet surface ships at Naval Station Mayport. Several analyses were conducted of geology and soils, wetlands and floodplains, water resources, air quality, noise, biological resources, cultural resources, hazardous and toxic substances and waste, and environmental health and safety. These analyses also included a summary of the environmental impacts and mitigation measures. As part of the environmental impact statement, cost estimates were also developed. The Navy’s environmental analysis included consultations with regulatory agencies, such as the U.S. Fish and Wildlife Service and the National Marine Fisheries Service, regarding impacts to endangered and threatened species, and the U.S. Army Corps of Engineers and the Environmental Protection Agency regarding dredging operations and the in-water disposal of dredged materials. In addition, public awareness and participation were integral components of the environmental impact statement process. The Navy took steps to provide members of the public, state agencies, and federal agencies with the opportunity to help define the scope of the Navy’s analysis as well as examine and consider the studies undertaken by the Navy.
Fleet Forces Command prepared the National Environmental Policy Act documentation and supporting studies that defined the proposed action and range of alternatives and identified the potential mitigation options. The Navy’s final environmental impact statement for Mayport assessed the impacts of 13 alternatives, including the no action alternative:

Alternative 1: Cruiser homeport, destroyer homeport, or both.

Alternative 2: Amphibious Assault Ship homeport.

Alternative 3: Nuclear-powered aircraft carrier capable.

Alternative 4: Nuclear-powered aircraft carrier homeport.

Alternative 5: Amphibious Ready Group homeport.

Alternatives 6-12: Seven different combinations of the first four alternatives.

Alternative 13: No action. No additional fleet surface ships would be homeported at Naval Station Mayport, and Mayport would retain the ability to berth a nuclear-powered aircraft carrier in a limited fashion.

The 13 alternatives evaluated a broad range of options for homeporting surface ships at Naval Station Mayport, such as permanent assignment of various types of surface ships and personnel. In addition, Alternatives 3 and 4 differ because a nuclear-powered aircraft carrier capable alternative provides for port services (loading and unloading cargo and sailors) and access without restrictions for visits of up to 63 days per year. The nuclear-powered aircraft carrier homeport alternative would permanently assign a carrier and its personnel to Naval Station Mayport, which would provide facilities to perform depot-level maintenance at that location. In the final environmental impact statement, the Navy identified alternative 4 as the preferred alternative, which involves homeporting one nuclear-powered aircraft carrier at Naval Station Mayport and includes dredging, infrastructure and wharf improvements, on-station road and parking improvements, and construction of nuclear-powered aircraft carrier propulsion plant maintenance facilities. Other factors that influenced the selection of alternative 4 as the preferred alternative included impact analyses in the environmental impact statement and estimated costs of implementation, including military construction costs and other operation and sustainment costs. For example, the Navy’s analysis showed that there are no environmental impacts associated with homeporting a nuclear-powered aircraft carrier at Naval Station Mayport that cannot be appropriately addressed or mitigated, including impacts to endangered species, such as the Florida manatee and sea turtles. In addition, the Navy reported that the projected recurring and nonrecurring costs for the preferred alternative are less than 10 percent of the cost of a single nuclear-powered aircraft carrier and less than 1 percent of the cost of the Department of the Navy’s nuclear-powered aircraft carrier assets. The Navy believes that homeporting a nuclear-powered aircraft carrier at Naval Station Mayport is a way to provide additional security for the carrier and enhance deployment capability. In November 2008, the Navy made its final environmental impact statement available, and the Assistant Secretary of the Navy (Installations and Environment) signed the Navy’s formal record of decision on January 14, 2009, to homeport a nuclear-powered aircraft carrier at Mayport. After the Navy decided to homeport a nuclear-powered aircraft carrier at Naval Station Mayport, Florida, the Secretary of Defense announced that he would review the Navy’s decision as part of DOD’s 2010 Quadrennial Defense Review.
The Secretary of Defense directed the Quadrennial Defense Review working group to assess the Navy’s Mayport decision. According to OSD officials, the Navy provided supporting documentation regarding its decision to the working group, which used this information in conducting its analysis. In conducting its review, the Quadrennial Defense Review working group assessed the Navy’s decision against nine implementation criteria: (1) execution of current or planned operations, (2) operational flexibility, (3) operational management of the force, (4) institutional provisions of the force, (5) organizational friction, (6) execution of future missions successfully against an array of future challenges, (7) consideration of the whole of government programs and initiatives, (8) international relations, and (9) environmental concerns. In addition, OSD officials stated that the working group assessed transit times for a nuclear-powered aircraft carrier to leave both the Norfolk and Mayport ports and arrive in the Atlantic Ocean. As a part of the working group’s review, officials in DOD’s Office of Cost Assessment and Program Evaluation stated that they evaluated the reasonableness of the Navy’s cost estimate to establish a homeport for a nuclear-powered aircraft carrier at Mayport. Specifically, the officials said that they reviewed and assessed the military personnel, operations and maintenance, and military construction costs associated with the Navy’s decision and found that the Navy’s cost estimates were reasonable. For example, OSD officials stated that the working group was provided the following dollar amounts: a onetime cost of $565 million to build the necessary infrastructure at Mayport and $25 million as the recurring cost for operations and maintenance for homeporting a nuclear-powered aircraft carrier at Mayport. In addition, the officials said that the working group used these analyses and cost estimates to brief the Secretary of Defense on its results. The February 2010 Quadrennial Defense Review report reiterated the Navy’s conclusion that homeporting an East Coast carrier in Mayport would contribute to mitigating the risk of a terrorist attack, accident, or natural disaster. In addition to the contact named above, Mark J. Wielgoszynski, Assistant Director; Clarine S. Allen; Pat L. Bohan; John H. Edwards; Ron La Due Lake; Joanne Landesman; Christopher R. Miller; Stephanie Moriarty; John Van Schaik; Michael C. Shaughnessy; and Michael D. Silver made major contributions to this report.

Decisions by the military services on where to base their force structure can have significant strategic, socioeconomic, and cost implications for the Department of Defense (DOD) and the communities surrounding the bases. Each service uses its own process to make basing decisions. The House Committee on Armed Services directed GAO to review the services' basing decision processes. GAO examined the extent to which (1) the services have comprehensive processes in place that are designed to result in well-informed basing decisions and (2) DOD exercises management control of these processes. GAO reviewed and analyzed DOD and service guidance, studies, and relevant documents on implementation and oversight of the services' basing processes.
The Army, Marine Corps, and Air Force basing decision processes fully incorporate the key elements, associated factors, and management control standards that GAO identified as necessary in a comprehensive process; however, the Navy needs additional guidance for its process to be complete. GAO found that while the Army, Marine Corps, and Air Force each have issued comprehensive guidance for their basing processes that describes the organizational roles and responsibilities within the service, establishes links among all of the service's strategic and environmental guidance documents, and identifies the service's basing criteria, some of the Navy's guidance documents lacked detailed information about specific actions taken during the process and did not define responsibility for completing certain types of analyses. For example, the Navy's Strategic Dispersal Flow Chart--one of the five guidance documents used to implement the Navy's process--shows that some types of analyses are conducted to review a range of considerations, such as access to training areas, sailor and family quality of life, and ship size, for a particular basing decision. But the document does not describe in any detail how and by whom these analyses will be conducted. Additionally, Navy guidance does not provide a clear explanation of how its five guidance documents are linked together in implementing the Navy's overall basing process. Without comprehensive and clear guidance on all aspects of the Navy's overall basing decision process, the Navy may lack the completeness and management control needed to ensure that Navy basing decisions can facilitate external stakeholders' examination and scrutiny or to ensure effective implementation of the Navy's basing process. The Secretary of Defense has not set a policy or assigned an office a clear role for providing management control of the services' basing decision processes within the United States and, as a consequence, may lack reasonable assurance that certain departmentwide initiatives will be fully supported in the services' basing decisions. Office of the Secretary of Defense (OSD) officials said that OSD is promoting joint sharing of DOD facilities and seeking to ensure that domestic basing decisions support global operations. However, OSD has not fully promoted service consideration of the joint sharing, global operations, and potentially other initiatives because the Secretary of Defense has neither provided a comprehensive policy for, nor clearly assigned an office within OSD to oversee, domestic service basing processes. Without OSD guidance and an office to provide effective oversight of military service basing decision processes, the Secretary of Defense lacks reasonable assurance that departmentwide initiatives are adequately considered by the services in their domestic basing decision making.
Estimates of the size of the alien population subject to removal vary. A report from the Pew Research Center estimated the population of unauthorized aliens in the United States to be approximately 12 million as of March 2006. According to DHS, the population of aliens subject to removal from the United States has grown in recent years. DHS’s Office of Immigration Statistics estimated that the population of aliens subject to removal increased by half a million from January 2005 to January 2006. Additionally, DHS has estimated that the removable alien population grew by 24 percent, from 8.5 million in January 2000 to 10.5 million in January 2005. Aliens who are in violation of immigration laws are subject to removal from the United States. Over 100 violations of immigration law can serve as the basis for removal from the United States, including, among other things, criminal activity, health reasons (such as having a communicable disease), previous removal from the United States, and lack of proper documentation. ICE investigations resulted in 102,034 apprehensions, or about 8 percent of the approximately 1.3 million DHS apprehensions in 2005. Four main categories constituted the basis for aliens removed by DHS in 2005: (1) aliens entering without inspection by, for example, illegally crossing the border where there is no formal U.S. port of entry; (2) aliens attempting to enter the United States without proper documents or through fraud at U.S. ports of entry; (3) aliens with criminal convictions or believed to have engaged in certain criminal activities, such as terrorist activities or drug trafficking; and (4) aliens who are in violation of their terms of entry (e.g., an expired visa). Our review of ICE policies and procedures, along with interviews at ICE field offices, showed that officers exercise discretion throughout various phases of the alien apprehension and removal process, but the initial phases of the process—initiating removals, apprehending aliens, issuing removal documents, and detaining aliens—involve the most discretion. Officers in OI and DRO field offices told us that they exercise discretion for aliens with humanitarian issues and aliens who are not investigation targets on a case-by-case basis with guidance and approval from supervisors. Officers told us they typically encounter (1) aliens who are the target of an investigation and (2) aliens who are not the target of an investigation but who are encountered through the course of an operation and are subject to removal. While officers told us that discretion with regard to aliens who are fugitives, criminals, and other investigation targets is limited by clearly prescribed policies and procedures, they told us that they have more latitude to exercise discretion when they encounter aliens who are not fugitives or criminals and are not targets of ICE investigations, particularly when encountering aliens with humanitarian issues. The alien apprehension and removal process encompasses six phases: (1) initial encounter, (2) apprehension, (3) charging, (4) detention, (5) removal proceedings, and (6) final removal. Our review of federal regulations, ICE policies, and guidance, along with interviews, showed that the parts of the removal process from the time officers encounter aliens as part of an operation to the time they determine whether to detain an alien involve the most discretion.
During removal proceedings and final removal, ICE attorneys and DRO officers can exercise discretion only in clearly delineated situations prescribed by ICE policies and statutory and regulatory requirements. Officers told us that during the initial phases of the apprehension and removal process, they encounter situations that require them to pursue alternate ways to initiate removals in lieu of apprehending aliens. During encounters with aliens, officers told us that they decide how to exercise discretion for aliens on a case-by-case basis with input from supervisors or experienced officers. Specifically, officers told us that they exercise discretion when they encounter aliens who (1) present humanitarian concerns, such as medical issues or being the sole caregiver for minor children, or (2) are not the primary target of their investigations. DRO and OI officers told us that their primary goal is to initiate removal proceedings for any alien they encounter who is subject to removal. However, officers told us that in some instances, they might decide not to pursue any action against an alien who they suspect to be removable. Officers at two OI and one DRO field office told us that, in some instances, they are unable to initiate removal action against every alien they encounter during the course of an operation. Officers noted that several factors—such as the availability of detention space, travel time to an alien’s location, and competing enforcement priorities—affect their decisions to initiate removal action against an alien. Officers at one of the seven OI field offices we visited also told us that because of limited resources they have to make trade-offs between dedicating resources to aliens who pose a threat to public safety and those who do not—that is, noncriminal aliens—which in some instances results in decisions not to initiate removal action against noncriminal aliens. Our review of DHS and ICE guidance showed that officers’ ability to exercise discretion is limited for aliens who are investigation targets, such as criminal aliens and fugitive aliens who have ignored a final removal order. Discretion for apprehending these aliens is limited by clearly prescribed policies and procedures—such as requirements under the INA to detain terrorists or certain criminals—governing the handling of these aliens. By contrast, officers at all seven DRO and seven OI field offices we visited told us that they have discretion to process and apprehend aliens who are not investigation targets or aliens who present humanitarian circumstances. In such circumstances, officers told us that they can exercise discretion by deciding to (1) apprehend an alien and transport the alien to an ICE facility for processing, (2) issue the alien an NTA by mail, or (3) schedule an appointment for the alien to be processed at an ICE facility at a later date. For example, in looking for a criminal alien who is the target of an investigation, a fugitive operations team may encounter a friend or relative of the targeted alien who is also removable but not the primary target of an ICE investigation. If the friend or relative has a humanitarian circumstance, such as being the primary caregiver for small children, the officers can decide not to apprehend the friend or relative and opt for processing at a later time after reviewing the circumstances of the case and determining that no other child care option is available at the time.
In such instances, ICE headquarters officials told us that officers are to confirm child welfare claims made by an alien and determine whether other child care arrangements can be made. Headquarters officials also told us that aliens do not always divulge that they are the sole caretakers of children but explained that if ICE agents become aware of an alien’s child welfare responsibility, agents must take steps to ensure that the child or children are not left unattended. In addition, officers at two OI offices and one DRO office told us that in some instances, such as when aliens are sole caretakers for minor children or are ill, they will schedule appointments for aliens who are not investigation targets to process them at a later date. Officers at five of the seven OI field offices and two of the seven DRO offices we visited also told us that they will mail an NTA—as an alternative to apprehension—to aliens who present humanitarian issues such as medical conditions or child welfare issues. At another OI field office, officers told us that when determining whether to apprehend aliens or use an alternative to apprehension—for aliens who are not investigation targets—they also consider manpower availability. Our review of ICE guidance and procedures showed that most of an officer’s discretion in the charging phase relates to the decision to grant voluntary departure. Officers told us that when not statutorily prohibited from granting voluntary departure, they have some discretion in determining whether to issue an NTA and thus initiate formal removal proceedings, which typically result in a hearing before an immigration judge, or to grant voluntary departure in lieu of initiating formal removal proceedings. Officers told us that they may consider factors like humanitarian concerns and ICE priorities when exercising discretion to grant voluntary departure. On the basis of our review of ICE data, we noted significant variation in the use of voluntary departure across field offices. Our review of OI apprehension data also showed that three OI field offices near the U.S. southwestern border initiated a relatively higher number of voluntary departures (equal to or greater than the number of NTAs issued). ICE headquarters officials noted that officers at field offices near the U.S. southwestern border generally employ voluntary departure because of their proximity to the U.S.-Mexico border, which enables them to easily transport Mexican nationals to Mexico. Figure 1 illustrates the number of NTAs and voluntary departures issued by OI field offices. Our review of procedures also showed that if detention is not mandated by the INA, officers have discretion to determine whether an alien will be detained or released pending the alien’s immigration court hearing. When making this determination, ICE guidance instructs officers to consider a number of factors, such as humanitarian issues, flight risk, availability of detention space, and whether the alien is a threat to the community. Officers at two DRO field offices we visited told us that they exercise discretion to release aliens from custody if appropriate facilities are not available or if detention space is needed for aliens who pose a greater threat to public safety. At one OI field office, officers provided an example of an operation where they released two women and two children on their own recognizance because of the lack of appropriate detention space to house women and children.
Officers at another DRO field office also noted that detaining women and juveniles can be challenging because of limited space to accommodate them. Detention determinations made by officers can be reexamined by immigration judges upon an alien’s request. Our review of ICE policy and DRO’s field operational manual showed that ICE attorneys—who generally enter the process once proceedings have begun—and officers have less discretion in the later phases of the apprehension and removal process. Once an alien’s case arrives at the removal proceedings phase and is being reviewed by ICE attorneys, we found that the use of discretion at this stage is limited by clear policy and guidelines. Our review of ICE policy and interviews with attorneys at five of the seven Chief Counsel Offices showed that most aliens have few alternatives to appearing before immigration court after entering the removal proceedings phase. Circumstances in which ICE might not pursue proceedings include a legally insufficient NTA; an alien’s eligibility for an immigration benefit, such as lawful permanent residency; and an alien’s serving as a witness in a criminal investigation or prosecution. In these specific cases, ICE attorneys can exercise discretion not to pursue proceedings by asking the immigration court to terminate removal proceedings if the NTA has been filed with the court. ICE OPLA guidance also permits ICE attorneys to take steps to resolve a case in immigration court for purposes of judicial economy, efficiency of process, or to promote justice. Examples in the guidance include cases involving sympathetic humanitarian circumstances, like an alien with a U.S. citizen child with a serious medical condition or disability, or an alien or close family member who is undergoing treatment for a potentially life-threatening disease. ICE policy states that DRO may exercise discretion and grant some form of relief to the alien, such as a stay of removal or deferred action, at the final phase of the process. A stay of removal is specifically authorized by statute and constitutes a decision that removal of an alien should not immediately proceed. Deferred action gives a case a lower removal priority but is not an entitlement for the alien to remain in the United States. While some aliens could be granted a stay or deferred action by DRO field office managers, DRO officers told us that DRO seeks to execute removal orders in the vast majority of cases. DRO officers in field offices told us that they could recall only a handful of cases in which DRO officers did not execute a removal order after it was issued by an immigration judge. Supervisors in one DRO field office recalled a case in which a stay was granted to an aggravated felon who had a serious medical condition. Officers at DRO and OI field offices who are responsible for apprehending, charging, and detaining removable aliens are to rely on formal and on-the-job training, guidance provided by supervisors, and guidance provided in field operational manuals to inform their decision making regarding alien apprehensions and removals. Consistent with internal control standards, which call for training to be aimed at developing and retaining employee skill levels to achieve changing organizational needs, ICE has updated some of the training it offers to officers responsible for making alien apprehension and removal decisions.
The updated training includes, among other things, worksite enforcement training, supervisory training for OI supervisors, and Spanish language training for newly hired DRO officers. These updates have the potential to provide critical information to officers and supervisors to better support their decision making. However, ICE guidance, including ICE’s field operational manuals and ICE memorandums, on the exercise of discretion during the alien apprehension and removal process does not fully support officer decision making in cases involving humanitarian issues and aliens who are not primary targets of ICE investigations. For example, ICE has not completed efforts to provide officers with complete and up-to-date guidance that reflects expanded worksite and fugitive operations enforcement efforts. ICE headquarters officials told us that they do not have a time frame for completing efforts to update available guidance in field operational manuals. In addition, although Chief Counsel Offices provide information regarding legal developments to DRO and OI officers to guide their decision making, ICE does not have a mechanism to ensure that such information is disseminated consistently to officers across field offices. The lack of comprehensive guidance and of a mechanism to help ensure that officers receive consistent information regarding legal developments puts ICE officers at risk of taking actions that are not appropriate exercises of discretion and do not support the agency’s operational objectives. Internal control standards state that training should be aimed at developing and retaining employee skill levels to meet changing organizational needs. Officers at DRO and OI field offices who are responsible for apprehending, charging, and detaining removable aliens rely on formal and on-the-job training and guidance provided by supervisors to inform their decision making regarding alien apprehensions and removals. ICE has recently begun undertaking reviews and revisions of training that are consistent with these internal controls by updating and revising existing training curricula and implementing new training curricula for OI and DRO officers to provide critical information to officers and supervisors to better inform their decision making. These actions are important steps for providing officers with relevant information to inform their decision making. In early 2007, OI instituted a 2-week worksite enforcement training course geared toward informing ICE officers about criminal investigation techniques and procedures, which also provides information on the exercise of discretion regarding aliens who present humanitarian issues. OI headquarters officials identified worksite enforcement as a training need, since these operations are expanding, and an OI headquarters official told us that most OI officers had not participated in major worksite enforcement operations since 1998 and that many of the officers who participate are temporarily assigned to the operation from other duties or locations. Because of expanded worksite enforcement operations, officials told us that OI instituted worksite enforcement training, which will be offered to 100 OI officers per year. Headquarters officials told us that resource constraints preclude ICE from offering worksite enforcement training to all officers.
In addition to worksite enforcement training, OI officials told us that they are also in the process of instituting additional changes to training curricula that could better support officer decision making:

OI officials told us that they developed a 3-week training course for first-line supervisors, with 1 week of the course designed to provide information on legal issues pertaining to removal dispositions, such as instances when to issue an NTA or grant voluntary departure.

An OI official told us that OI is developing a 3-week refresher training course for experienced OI officers to reinforce these officers’ knowledge of alien apprehension and removal operations. According to OI’s chief of training, this course should be implemented by the second quarter of fiscal year 2008.

OI officials have revised an “On the Job” training manual that tracks tasks that new officers must complete in their first 18 months on the job. According to an OI training official, by completing the tasks outlined in the manual, officers should have a full understanding of the requirements for processing aliens, which include exercising discretion throughout the apprehension and removal process (e.g., whether to immediately apprehend the alien or to mail an NTA).

Like OI officials, DRO officials have also taken steps to strengthen training for DRO officers. In April 2007, DRO added a Spanish language course to its basic training curriculum. According to DRO headquarters training officials, this training will better equip officers to communicate with aliens and thus help ensure that officers make appropriate decisions about how to exercise discretion for aliens. In addition, DRO is developing a 3-week refresher training course for experienced DRO officers designed to provide officers with skills, tactics, and legal updates pertaining to alien apprehension and removal operations and plans to implement this course in October 2008. DRO headquarters officials also told us that they will institute a 2-year “On the Job” training program in September 2007. According to officials, this program is to provide newly appointed officers with additional training on immigration laws, competencies, and tasks related to their jobs. While the recent changes to the OI and DRO training curricula are positive steps in better aligning ICE training with operations, it is too soon for us to assess the effectiveness of these efforts. According to internal control standards, management is responsible for developing and documenting the detailed policies, procedures, and practices to ensure that they are an integral part of operations. DRO and OI officers generally rely on (1) OI and DRO field operational manuals; (2) DHS and ICE memorandums; and (3) an OI-developed worksite enforcement operational guidebook for guidance and policies to perform their duties, including making decisions regarding alien apprehensions and removals. However, ICE guidance to instruct officer decision making in cases involving humanitarian issues and aliens who are not primary targets of ICE investigations during the alien apprehension and removal process is not comprehensive and has not been updated by headquarters officials to reflect ICE’s expanded worksite and fugitive operations. In addition, although officers exercise discretion when deciding whether or not to take action to initiate the removal process, ICE does not have guidance on officers’ exercise of discretion on whom to stop, question, and arrest when initiating the removal process.
Without comprehensive policies, procedures, and practices, ICE lacks assurance that management directives will be carried out as intended and that ICE officers have the appropriate tools to fully inform their exercise of discretion. ICE’s OI and DRO field operational manuals were created by ICE’s legacy agency, the Immigration and Naturalization Service (INS), which was reorganized under the newly formed Department of Homeland Security in March 2003. Both of these manuals, which are largely unchanged from the guidance developed and employed by INS, are currently undergoing revisions. Our review of these manuals shows that they do not offer comprehensive and updated guidance to instruct officers on the exercise of discretion in cases involving aliens with humanitarian issues and aliens who are not targets of ICE investigations. For example, OI’s field operational manual offers some guidance on options for addressing aliens with caregiver issues who are encountered during worksite operations, such as ensuring that an alien’s dependents receive timely and appropriate care. However, the guidance does not include, for example, provisions for aliens with medical conditions. OI headquarters officials told us that they are in the process of revising OI’s field operational manual but have not yet updated the sections corresponding to alien apprehensions and removals. With respect to DRO’s field operational manual, some guidance is available to help officers decide whether to detain aliens pending their immigration hearings, but it does not clarify how officers should exercise discretion in detention determinations for nonmandatory detention cases, especially for aliens with humanitarian issues or aliens who are not targets of ICE investigations. DRO headquarters officials told us that they are revising a chapter in the manual on fugitive operations, but the revisions are not yet available to DRO officers in the field. For both the OI manual and the fugitive operations chapter in the DRO manual, headquarters officials told us that they did not yet know whether the revisions would include guidance on the use of discretion for aliens with humanitarian issues or aliens who are not the targets of ICE investigations. Moreover, OI and DRO officials could not provide a time frame for when the revisions will be completed. The various ICE organizational units with removal responsibilities have issued some guidance to help guide their own officers’ and attorneys’ exercise of discretion for aliens with humanitarian issues, but the guidance either is not comprehensive with regard to the various circumstances the officers and attorneys may encounter or does not apply to officers who have the authority to initiate removal proceedings. A memo issued in 2006 by DRO to its field offices outlines severe medical illnesses as a basis for exercising discretion when deciding whether to detain aliens who are not subject to mandatory detention. While this memo provides important guidance for exercising discretion during the detention phase for aliens with medical issues, it does not address child welfare and primary caretaker issues. In addition, a 2005 memo issued by OPLA permits ICE attorneys to take steps not to pursue proceedings by asking the immigration court to terminate removal proceedings if the NTA has been filed with the court. Examples in the guidance include cases involving sympathetic humanitarian circumstances like an alien with a U.S.
citizen child with a serious medical condition or disability, or an alien or close family member who is undergoing treatment for a potentially life-threatening disease. However, this memo is directed at Chief Counsel attorneys, who do not have the authority to initiate removal proceedings. Instead, only supervisory DRO and OI officers can initiate removals, and as a result the memo is not clearly applicable to those officers. In addition, DHS, OI, DRO, and OPLA have also issued their own separate memorandums that guide officers’ actions at different points of the apprehension and removal process. Each memorandum is generally directed to officers and attorneys under the respective ICE unit that issues it, resulting in a number of memos distributed via a number of different mechanisms within each ICE unit. These memorandums do not offer comprehensive guidance on exercising discretion for aliens with humanitarian circumstances or aliens who are not the primary targets of ICE investigations. For example, OI issued a memo in May 2006 that instructs officers to schedule appointments, as a last resort, for juvenile aliens, elderly aliens, or aliens with health conditions to be processed at a later date, rather than apprehend these aliens at the time of the encounter or mail them an NTA. This guidance addresses important humanitarian issues, but it is directed only to ICE officers who are responding to calls from local law enforcement agencies. Furthermore, it does not define or fully delineate circumstances that might constitute a “last resort.” Another memo, issued by DHS in October 2004, provides officers and supervisors with flexibility on detaining aliens (who are not subject to mandatory detention) depending on the circumstances of the case, such as available bed space. However, this memo does not offer specific guidance on determining detention for aliens with humanitarian circumstances or aliens who are not primary targets of ICE investigations. In addition to ICE field operational manuals and various memorandums, an OI headquarters official told us that ICE has recently instituted a worksite enforcement operational guidebook to assist in the proper planning, execution, and reporting of worksite enforcement operations. Our review of this guidebook showed that it discusses, among other things, operational planning and coordination, including instructions on reporting requirements at the arrest site and working with other ICE units, like DRO. However, although ICE plans to regularly update its worksite enforcement operational guidebook based on lessons learned from past worksite operations, the current guidebook that ICE provided us in August 2007 does not include any guidance about how officers should factor humanitarian issues into their decision making during the apprehension and removal process. Finally, in our review of the worksite enforcement operational guidebook, we did not find guidance to inform officers’ exercise of discretion on whom to stop, question, and arrest when initiating the removal process—guidance that was also lacking in the various operational manuals and memorandums. In our review of documents from 26 OI field offices, we also noted that only 3 of these field offices have developed local guidance to guide officers’ discretion in the initial phases of the apprehension and removal process.
However, the local guidance we reviewed is not comprehensive because the 3 offices do not have guidance that covers the use of discretion throughout all of the phases of the alien apprehension and removal process in which officers can exercise discretion. For example, 1 of the 3 offices has guidance on scheduling appointments for future processing for aliens with humanitarian concerns. Another office has guidance that covers factors to consider when exercising discretion for cases involving humanitarian issues as well as guidance on deciding whether to detain aliens who are not investigation targets. ICE has recently expanded its worksite enforcement and fugitive operations, increasing the probability that officers in the field will have to exercise discretion in their encounters with aliens who present humanitarian issues or aliens who are not the targets of their investigations—particularly noncriminal aliens. With these expanded operations, the need for up-to-date and comprehensive guidance to reduce the risk of improper decision making becomes increasingly important. According to ICE data, in fiscal year 2006, ICE made, through its worksite enforcement operations, 716 criminal arrests, which include aliens subject to removal who are charged with criminal violations, and 3,667 administrative arrests, which refer to alien workers who are unlawfully present in the United States but have not been charged with criminal violations. These data show a sharp increase from fiscal year 2005, as noted in figure 2. Through July of fiscal year 2007, ICE made 742 criminal arrests and 3,651 administrative arrests in its worksite operations; these arrests surpassed the combined arrests for worksite enforcement operations from fiscal year 2002 through fiscal year 2005. According to a senior ICE headquarters official, from fiscal year 2003 through the third quarter of fiscal year 2007, ICE also experienced over a sixfold increase in the number of new officers dedicated to worksite enforcement operations, many of whom are temporarily assigned to worksite operations. ICE reported that it has also expanded fugitive operations and plans to increase the number of fugitive operations teams from 18 in 2006 to 75 by the end of fiscal year 2007. Annual performance goals for each of these teams call for 1,000 apprehensions per team. As of April 27, 2007, ICE officers had arrested 17,321 aliens through fugitive operations teams in fiscal year 2007, a 118 percent increase in arrests since fiscal year 2005. ICE’s expanding worksite enforcement and fugitive operations both present officers with circumstances that could require the use of discretion, specifically cases that involve aliens with humanitarian issues or aliens who are not ICE targets. Expanded fugitive operations may increase the number of encounters that officers have with removable aliens who are not the primary targets or priorities of ICE investigations. For cases involving these aliens, additional guidance could provide ICE with better assurance that its officers are equipped to exercise discretion and prioritize enforcement activities appropriately. In large-scale worksite enforcement operations, officers have encountered numerous aliens who have presented humanitarian issues.
For this type of case, comprehensive guidance on how to weigh relevant aspects of aliens’ circumstances or humanitarian factors would provide ICE with enhanced assurance that officers are best equipped to appropriately determine whether aliens should be apprehended, how they should be charged, and whether they should be detained. A recent large-scale worksite enforcement operation in Massachusetts highlights the importance of having comprehensive and up-to-date guidance to help inform officers’ decision making when they encounter aliens with humanitarian issues. In this operation, ICE officers encountered aliens who had humanitarian issues, including aliens who were primary caretakers of children, and had to assess the totality of the circumstances in numerous cases, in real time, to decide how to handle each case in coordination with other entities, such as social service agencies, state government, and local law enforcement. ICE issued a fact sheet about this operation on its external Web site that discussed difficulties in coordinating and communicating with these entities on issues of operational plans, detention space, access to detainees, and information about arrestees. The fact sheet noted that ICE arrested 362 removable aliens and transported over 200 of these aliens to detention facilities in Texas due to a lack of bed space in Massachusetts. In addition, 60 aliens were initially released during administrative processing at the time of the operation for child welfare or family health reasons, and additional aliens were released later for these reasons. According to ICE officials, another concern ICE officers face as they attempt to exercise discretion is that they encounter aliens who sometimes do not divulge their status as sole caregivers for children. Complex environments like the one described here demonstrate the need for up-to-date and comprehensive guidance that supports ICE’s operational objectives and uses government resources in the most effective and efficient manner. Internal control standards state that effective communications should occur in a broad sense, with information flowing down, across, and up the organization. This includes communicating information in a form and within a time frame that enables officials to carry out their duties. In carrying out their duties, ICE officers require information on relevant legal developments—such as court decisions modifying existing interpretations of immigration laws—to help inform their decision making regarding removal dispositions (e.g., NTA or voluntary departure). However, ICE has not instituted a mechanism to ensure that legal developments are consistently disseminated to ICE officers across all field offices. For example, officers at only two DRO field offices and one OI field office we visited received current information on legal developments from their Chief Counsel Office, which is responsible for disseminating this information, while others did not receive such information at all or did not receive it when they needed it for case processing. In addition, officers at two of the seven OI field offices we visited expressed a need for more information regarding legal developments to better inform their decision making regarding removal dispositions. Officers at one OI field office told us that there are occasions when they do not receive the necessary legal guidance until they have already processed a case.
Chief Counsel offices independently decide when and what information to disseminate regarding legal developments. Officers at seven DRO and six OI field offices we visited told us that they can consult Chief Counsel attorneys to seek guidance on legal issues. Although relying on Chief Counsel field offices to disseminate information and advise officers on legal issues can help officers when making decisions, without a formalized mechanism to consistently disseminate information that officers can use when they process cases, officers might not receive information necessary to make sound removal decisions that comply with the most recent legal developments. ICE has two control mechanisms in place to monitor its removal operations—established supervisory review practices and procedures and an inspection program. However, ICE does not have a mechanism to allow it to analyze information specific to the exercise of discretion. Internal control standards advise agencies to design internal controls to ensure that ongoing monitoring occurs in the course of normal operations. This monitoring includes regular management and supervisory activities, comparisons, reconciliations, and other actions people take in performing their duties. ICE relies primarily on the judgment of experienced field officers and supervisory reviews to provide assurance that officers’ decision making complies with established policies and procedures. In addition to supervisory reviews, ICE has recently taken steps to institute an inspection program designed to oversee field offices’ compliance with established policies and procedures. However, neither supervisory reviews nor ICE’s newly instituted inspection program offers a mechanism for management to collect and analyze information specific to officers’ exercise of discretion in alien apprehension and removal decisions across all field offices. The ability to collect and analyze data about the exercise of discretion across field offices could provide ICE with additional assurance that it can identify and respond to areas that may require some type of corrective action. Moreover, without these data and analyses, ICE is not positioned to compile and communicate lessons learned to help support officers’ decision making capacity. One way for agencies to help ensure that ongoing monitoring occurs in the course of normal operations is to design appropriate supervision to help provide oversight of internal controls. Consistent with this activity, ICE policy requires supervisory review of officer decisions on a case-by-case basis to ensure that officers’ decisions comply with established policies and procedures for alien apprehension and removal decisions. ICE officers are to document the specific immigration charges lodged against an alien, as well as the custody decision made by officers, on a standardized form. Throughout the alien apprehension and removal process, supervisors are responsible for reviewing and authorizing decisions made by officers. For example, when officers are determining whether to detain or release an alien from custody, ICE memorandums state that supervisors must approve an officer’s decision. In addition, according to ICE headquarters officials, supervisors at both DRO and OI field offices are to review officers’ apprehension and removal decisions to ensure that officers use the most appropriate removal disposition and to ensure that officers’ decisions comply with legal requirements, policies, and procedures. 
Headquarters officials also told us that supervisors are responsible for approving and signing off on officers' decisions to grant voluntary departure, issue NTAs, or use other removal dispositions. Officials at all seven DRO and seven OI field offices we visited also told us that supervisors are responsible for reviewing instances when officers have exercised discretion, such as when encountering aliens with humanitarian issues. Officers at field offices we visited also noted that they consult with experienced officers or supervisors when making these decisions and that operations are typically conducted by teams where officers' collective knowledge is used to make discretionary decisions. Table 1 outlines the types of reviews conducted by experienced officers, supervisors, and managers at DRO and OI field offices. ICE's Office of Professional Responsibility instituted an inspection program for OI field offices in July 2007, consistent with internal control standards that call for monitoring operations by designing mechanisms for identifying and communicating deficiencies to managers. According to the headquarters official responsible for overseeing the inspection program, ICE plans to implement a similar inspection program for DRO field offices in the fall of 2007. According to this official, the inspection program is designed to determine whether field offices are complying with the established policies and procedures selected for review. The inspection program consists of two areas: (1) an annual self-inspection process under which all field offices must respond to a Web-based questionnaire covering operational activities and (2) a field inspection program under which all OI and DRO field offices are to be inspected by headquarters officials at least once during a 4-year cycle. In instances where field offices are not compliant, field officials must develop a plan of action to address discrepancies that are identified. For OI offices, examples of areas that are to be reviewed include procedures for processing aliens, as well as methods for ensuring that operational plans are prepared and approved before arrests are conducted. For DRO field offices, areas that are to be reviewed include, among other things, compliance with procedures to ensure that aliens are served with a copy of an NTA, as well as procedures for completing and obtaining approval for operational plans in advance of fugitive operations. Our review of the self-inspection questionnaires and our discussion with the program manager showed that the inspection program is not designed to analyze information on officer decision making regarding alien apprehensions and removals. An important purpose of internal control monitoring is to allow agencies to assess the quality of performance over time. Specifically, internal control standards recommend that managers compare trends in actual performance to expected results throughout the organization in order to identify any areas that may require corrective action to help ensure operations support operational objectives.
Although ICE has some controls in place to monitor operations related to alien apprehensions and removals, neither supervisory review nor its inspection program offers managers information to specifically analyze officer decision making for trends across the 75 OI, DRO, and Chief Counsel field offices that might indicate the need for a corrective action, such as additional training or clarification of procedures, or that might reveal best practices for achieving desired outcomes. ICE does not have a mechanism for collecting and analyzing data on officers' exercise of discretion in determining which removal processing option to employ, such as officers' basis for scheduling an appointment to process at a later date an alien who presents humanitarian circumstances, or the frequency of such actions. Additionally, ICE does not collect and analyze data on the actions taken by officers (e.g., scheduling an appointment or mailing an NTA) in addressing aliens presenting humanitarian issues. Such information could be used by managers to identify trends in actions taken by officers to address aliens with humanitarian issues that could in turn be used to make any necessary modifications to guidance, policies, or training. ICE policy outlines a mechanism to capture and analyze information regarding officers' discretionary decisions made as part of worksite enforcement operations, but this mechanism has not been used consistently. ICE officials told us that, as part of worksite enforcement operations, its officers make decisions in the field on a case-by-case basis in a time-constrained environment. In recent worksite operations, officers have apprehended thousands of aliens in operations conducted in various cities across the nation. Our review of ICE's worksite enforcement training curriculum and OI's field operational manual showed that ICE policy outlines a key internal control—after-action reports—which are to capture, among other things, information on significant or unusual incidents or circumstances that may have occurred during an operation; the number of aliens arrested; reasons for the release of detained or arrested aliens; and any allegations of civil rights violations or other complaints. However, a senior headquarters official responsible for overseeing OI's worksite enforcement division told us that although after-action reports are still outlined as requirements in OI's training curriculum (dated April 2007) and in the OI field operational manual, ICE has eliminated this requirement. According to OI headquarters officials, prior to the reporting requirement change, after-action reports had only been prepared for one worksite enforcement operation, which was conducted in 2006, since ICE was created. The senior headquarters official told us that, in lieu of after-action reports, OI intends to collect information on lessons learned as part of its worksite enforcement guidebook. Our review of the guidebook provided to us by ICE showed that the guidebook did not yet reflect lessons learned.
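As a purely illustrative sketch, the record structure below mirrors the elements that OI's field operational manual lists for after-action reports: significant incidents, arrest counts, reasons for release, and complaints. The field names and types are our assumptions, not ICE's actual report format; only the arrest and release figures are drawn from the Massachusetts operation described earlier.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AfterActionReport:
    """Hypothetical record mirroring the elements OI's manual lists for
    after-action reports; field names are illustrative, not ICE's format."""
    operation_name: str
    aliens_arrested: int
    release_reasons: List[str] = field(default_factory=list)       # e.g., child welfare
    significant_incidents: List[str] = field(default_factory=list)
    civil_rights_complaints: List[str] = field(default_factory=list)

# Figures drawn from the Massachusetts operation described earlier.
report = AfterActionReport(
    operation_name="Massachusetts worksite operation (illustrative)",
    aliens_arrested=362,
    release_reasons=["child welfare", "family health"],
)
print(report.operation_name, report.aliens_arrested)
```

Records of this kind, had the requirement been retained, are what would allow the lessons-learned analysis the guidebook was intended to provide.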
The scale and complexity of recent ICE worksite operations, such as an operation in Massachusetts involving difficulties coordinating and communicating with social service agencies, state government, and local law enforcement on issues of operational plans, detention space, access to detainees, and information about aliens who were apprehended, highlight the need for ICE to be able to learn from past experiences, thereby providing ICE officers with a richer knowledge base to inform their decision making under difficult circumstances. Moreover, since ICE has experienced a more than six-fold increase (between fiscal year 2003 and the third quarter of fiscal year 2007) in the number of new officers participating in worksite enforcement operations, more officers are making decisions and exercising discretion in these complex environments. Having a mechanism that provides ICE with information regarding its enforcement operations across all field offices would help identify areas needing corrective action regarding officer decision making. For example, having comprehensive information on factors considered by officers and actions taken by them (e.g., scheduling an appointment for later processing or mailing an NTA) to address aliens with humanitarian issues could lead to revised policies and procedures. In addition, such a mechanism could help ICE protect its credibility and integrity against allegations of alien mistreatment by having readily available information to ensure that officer decision making complies with established policies and procedures. Without a mechanism to catalog and collect information—agencywide—on the exercise of discretion, ICE managers cannot analyze trends to provide additional assurance that officer decision making complies with established ICE policies and operational objectives, nor is ICE positioned to refine operational approaches based on a review of best practices across field offices. ICE relies on two databases to document officers' decisions regarding alien apprehensions: (1) the Enforcement Case Tracking System (ENFORCE), which is primarily used to collect alien biographical information and the removal option employed, such as voluntary departure or an NTA, and (2) the Deportable Alien Control System (DACS), which is used to track the location of detained aliens, as well as the status of aliens' immigration court hearings. However, headquarters officials told us that the details of discretionary decisions (e.g., factors considered in deciding, based on humanitarian reasons, whether to apprehend or detain an alien) are not recorded in ENFORCE and DACS. Officials explained that officers may record information explaining their decisions in each of these systems' narrative sections. However, according to officials, inputting this information is not a requirement, and information recorded by officers in the narrative sections of these databases is not analyzed by field managers or headquarters officials. Headquarters officials responsible for overseeing ENFORCE and DACS told us that ICE plans to update these systems to provide other capabilities. A headquarters official responsible for overseeing ENFORCE told us that ICE plans to integrate aspects of ENFORCE with another system—the Treasury Enforcement Communications System (TECS)—used by officers to track criminal investigations. According to this official, the proposed changes will allow officers to more easily access information pertaining to apprehended aliens and associated criminal investigations.
In addition, a headquarters official responsible for overseeing the DACS system told us that ICE is piloting a program to merge DACS with ENFORCE, with the goal of creating one case management system for collecting information on alien apprehensions and for tracking the progress of alien removal proceedings. However, it is unclear whether these plans and the resulting systems would provide information ICE managers need to monitor and analyze officer decision making across all field offices. The DHS Office of Inspector General (OIG) has recognized the need to upgrade ICE data systems so that management has reliable data to make programmatic decisions and assess performance with regard to detention and removal programs, including identifying trends associated with underlying decisions made during the alien removal process. In April 2006, the OIG reported that DACS lacks the ability to readily provide DRO management with the data analysis capabilities to manage the detention and removal program in an efficient and effective manner because (1) the information stored in DACS was not always accurate or up to date and (2) DRO could not readily query DACS to obtain statistical reports on detentions and removals. The OIG stated that the lack of reliable program analysis capabilities could detrimentally affect DRO's ability to identify emerging trends and identify resource needs. According to the OIG, this data system should, at a minimum, be able to provide quality immigration-related data on various factors including, among other things, the rationale underlying DRO's decision to release an alien from detention or not to detain individual aliens. The OIG recommended that ICE expedite efforts to develop and implement a system capable of meeting data collection and analysis needs relating to detention and removal, including a plan showing milestone dates, funding requirements, and progress toward completing the project. DHS and ICE concurred with the OIG's recommendation and said that ICE would prepare a project plan for developing and deploying the system in an expedited manner. Although DHS and ICE said that the new system is to allow users to capture, search, and review information in specific areas, including information on detention and removal case details, the response was not specific about whether it would contain information on the rationale for making these decisions. Having information on officers' exercise of discretion, including their rationale for making decisions, would provide ICE managers with a basis for identifying potential problems, analyzing trends, and compiling best practices. ICE headquarters officials told us that collecting and managing data that detail decisions made by officers could be costly. However, ICE has not evaluated the costs or alternatives for creating a mechanism capable of providing ICE with usable information that it can analyze to identify trends in the exercise of discretion. For example, ICE has not considered the costs and benefits of such a mechanism in connection with planned or ongoing information system updates. Until ICE assesses costs and alternatives for collecting these data, it will not be in a good position to select and implement an approach that will provide ICE assurance that it can identify any best practices that should be reinforced or areas that might require corrective actions—by, for example, modifying policies, procedures, or training.
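To make concrete the kind of trend analysis such a reporting mechanism could support, the sketch below tallies hypothetical discretionary-decision records by field office and action taken. Every field name and record here is invented for illustration; as discussed above, ICE's current systems do not capture these data in structured form.

```python
from collections import Counter

# Hypothetical records of discretionary decisions. ICE systems do not
# currently capture these fields, which is the gap discussed above.
decisions = [
    {"office": "Chicago", "action": "scheduled_appointment", "basis": "sole caregiver"},
    {"office": "Chicago", "action": "detained", "basis": "none"},
    {"office": "Phoenix", "action": "mailed_NTA", "basis": "medical"},
    {"office": "Phoenix", "action": "scheduled_appointment", "basis": "sole caregiver"},
]

# Tally actions by field office to surface cross-office differences that
# might indicate a need for corrective action or reveal best practices.
trends = Counter((d["office"], d["action"]) for d in decisions)
for (office, action), count in sorted(trends.items()):
    print(f"{office}: {action} x{count}")
```

A real mechanism would need to weigh the cost of capturing structured fields like these against alternatives, such as analyzing the narrative sections that already exist in ENFORCE and DACS.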
Given that 75 field offices are involved in the alien apprehension and removal process and that oversight of these offices lies with three ICE units, a comprehensive mechanism for reviewing officers' decision making could provide ICE with meaningful information to promote the appropriate use of discretion, identify best practices, and analyze any significant differences across field offices in order to take appropriate action. Appropriate exercise of discretion during the alien removal process is an essential part of ICE's law enforcement efforts as it conducts operations in complex environments and with finite resources to identify, locate, and remove many of the estimated 12 million aliens subject to removal from the United States. Internal controls, such as training, guidance, and monitoring, that are designed to help ICE ensure that its officers are well equipped to consistently make decisions that support its operational objectives are crucial for providing assurance that officers exercise discretion in a manner that protects the agency's integrity, advances its mission, and provides the greatest value to the nation. Although ICE has taken steps in the area of training to develop and retain officer skills, ICE's guidance does not comprehensively address key aspects of the alien apprehension and removal process, such as dealing with humanitarian issues and aliens who are not investigation targets. In light of the increased number of circumstances that might call for the exercise of discretion in ICE's expanded enforcement efforts, comprehensive guidance—including factors that should be considered when officers make apprehension, charging, and detention determinations for aliens with humanitarian issues—would better support officers' decision making and provide ICE with enhanced assurance that discretion is exercised appropriately. Without established time frames for updating guidance, ICE lacks a means to track progress and ensure accountability for accomplishing the updates. Moreover, developing a mechanism for consistently disseminating legal information would help to ensure that officers have the most recent information on legal developments that may affect the decisions they make. Finally, collecting information on officers' exercise of discretion could provide ICE with enhanced assurance that officers and supervisors across field offices are making decisions that reflect the agency's operational objectives regarding alien apprehensions and removals and could also help managers identify best practices or areas that may require management action. Although ICE officials have noted that collecting and managing data about the exercise of discretion could be costly, ICE has not evaluated the costs of and alternatives for collecting such information. For instance, as ICE updates the systems it uses to manage other operational data, it could consider the costs and benefits of integrating this data collection function as part of other planned system redesigns. However, without an assessment of the costs and alternatives for collecting data on officer decision making, whether in association with planned system updates or not, ICE is not in the best position to select and implement an approach that provides ICE assurance that it can identify best practices to support decision making capacity or, more importantly, recurrent or systematic issues that could jeopardize its mission.
To enhance ICE's ability to inform and monitor its officers' use of discretion in alien apprehensions and removals, we recommend that the Secretary of Homeland Security direct the Assistant Secretary of ICE to take the following three actions: (1) develop time frames for updating existing policies, guidelines, and procedures for alien apprehensions and removals and include in the updates factors that should be considered when officers make apprehension, charging, and detention determinations for aliens with humanitarian issues; (2) develop a mechanism to help ensure that officers are consistently provided with updates regarding legal developments necessary for making alien apprehension and removal decisions; and (3) evaluate the costs and alternatives of developing a reporting mechanism by which ICE senior managers can analyze trends in the use of discretion to help identify areas that may require management actions, such as changes to guidance, procedures, and training. We requested comments on a draft of this report from the Secretary of Homeland Security. In an October 4, 2007, letter, DHS agreed with our three recommendations and discussed the actions ICE plans to take to address them, which are summarized below and included in their entirety in appendix II. With regard to our recommendation that ICE develop time frames for updating existing policies, including factors that should be considered when making apprehension, charging, and detention decisions, DHS said that ICE would reevaluate and republish all existing policies, guidelines, and procedures pertaining to the exercise of discretion during calendar year 2008. With regard to our recommendation that ICE evaluate the costs and alternatives of developing a mechanism by which to analyze trends in the use of discretion, DHS said that ICE anticipates initiating this evaluation by December 1, 2007. With regard to our recommendation to develop a mechanism to help ensure that officers are consistently provided with updates regarding legal developments, DHS explained that ICE believes that policies are in place to address the needs of the operational components for up-to-date legal guidance and that officers rely primarily on local Chief Counsel Offices for information on legal developments. DHS said that this localized approach reflects the fact that significant developments in case law often result from decisions of the 12 United States Courts of Appeals and that such decisions are often inconsistent and only have application within the geographic boundaries where they arise. Nonetheless, DHS commented that ICE recognizes that consistency in the dissemination of legal updates is of great importance to agents and officers and said that ICE will look to develop best practices to ensure the latest legal updates are disseminated to agents and officers through each Chief Counsel's office. We believe ICE's identification and implementation of best practices would be important in helping ensure that updates on legal developments are consistently provided to officers. We are sending copies of this report to selected congressional committees, the Secretary of Homeland Security, the Assistant Secretary of U.S. Immigration and Customs Enforcement, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others on request. In addition, the report will be available on GAO's Web site at http://www.gao.gov.
If you or your staff have any questions about this report or wish to discuss the matter further, please contact me at (202) 512-8777 or stanar@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors are listed in appendix III. This review examined how Immigration and Customs Enforcement (ICE) ensures that discretion is used in the most fair, reasoned, and efficient manner. Along these lines, we examined whether ICE has designed internal controls to guide and monitor officers' exercise of discretion when making alien apprehension and removal decisions, consistent with internal control standards for the federal government. Specifically, this review addresses the following three questions: 1. When and how do ICE officers and attorneys exercise discretion during the alien apprehension and removal process? 2. What internal controls has ICE designed to guide officer decision making to enhance its assurance that the exercise of discretion supports its operational objectives? 3. What internal controls has ICE designed to oversee and monitor officer decision making during the alien apprehension and removal process to enhance ICE's assurance that the exercise of discretion supports its operational objectives? To address these objectives, we obtained and analyzed information at ICE's Office of Investigations (OI), Office of Detention and Removal Operations (DRO), and the Office of the Principal Legal Advisor (OPLA) within the Department of Homeland Security (DHS) in Washington, D.C. We also carried out work at 14 ICE field offices—seven OI and seven DRO field offices—located in seven cities throughout the United States (Chicago, Detroit, Los Angeles, New York, Philadelphia, Phoenix, and San Diego), as well as at seven ICE Chief Counsel Offices (which serve as OPLA's field offices) in these same locations. We selected these locations considering field office size, ICE data on alien apprehensions, and geographic dispersion. Regarding alien apprehensions, about 40 percent of all ICE Office of Investigations apprehensions during fiscal year 2006 were made by the seven OI offices selected for our review. As we did not select a probability sample of field offices or Chief Counsels' offices to review, the results of our work at these locations cannot be projected to field offices nationwide. To identify when and how officers and attorneys exercise discretion during the alien apprehension and removal process, we reviewed relevant laws and regulations as well as applicable policies, memorandums, operational manuals, and training materials developed by OI, DRO, and OPLA headquarters offices. We also spoke with headquarters officials in the OI, DRO, and OPLA operational divisions regarding the exercise of discretion in the alien apprehension and removal process. At each of the field locations we visited, we collected and reviewed available locally developed field guidance, memorandums, and training materials applicable to the exercise of discretion during the apprehension and removal process. We also conducted small group interviews with officers, supervisors, and managers at the 14 OI and DRO field offices we selected as part of our nonprobability sample to determine when and how officers at those locations exercise discretion, and when and how officers are expected to exercise discretion, during the alien apprehension and removal process.
In addition, we conducted small group interviews with attorneys, supervisors, and managers at the 7 Chief Counsel offices we visited to determine when and how attorneys exercise discretion, and when and how they are expected to exercise discretion, once formal removal proceedings have been initiated by OI and DRO officers. As we did not select probability samples of ICE officers and attorneys, supervisors, and managers to interview at the field offices we selected, the results of these interviews may not represent the views of ICE officers and attorneys and their supervisors and managers nationwide. To address internal controls ICE has designed to guide officer decision making, we reviewed field operational manuals, policy memorandums, and training materials developed by OI, DRO, and OPLA headquarters offices. We also requested locally developed written guidance and policies and procedures regarding alien apprehension and removal procedures from all DRO, OI, and Chief Counsel field offices. We received and reviewed locally developed guidance from 13 of OI's 26 field offices and 12 of Chief Counsel's 26 field offices. The purpose of this review was to identify the range of policies and guidance developed by field units that we did not capture as part of our nonprobability sample of ICE field offices. We did not receive locally developed guidance from DRO's 23 field offices, as DRO headquarters officials told us that DRO field offices do not rely on locally developed guidance and instead rely on national policies and memorandums. As part of our work at the ICE field offices we visited, we also discussed and identified the guidance and training provided to officers and attorneys regarding the information available to them when exercising discretion during the apprehension and removal process, including guidance about nontargeted aliens, humanitarian issues, and updates on legal developments. We then compared the national and local guidance, memorandums, and training materials in place with internal control standards to determine whether these controls were consistent with the standards. In addition, we met with headquarters officials responsible for the development of policy and training of field unit operations for OI and DRO, and we interviewed OPLA officials responsible for developing policy and training for Chief Counsel Offices to discern their role in developing and providing guidance and information to ICE officers, attorneys, supervisors, and managers involved in the alien apprehension and removal process. To address what internal controls ICE has designed to oversee and monitor officer decision making during the alien apprehension and removal process, we reviewed relevant laws, regulations, and field operational manuals. We also interviewed OI, DRO, and OPLA headquarters officials, field officers, and field attorneys to identify the types of oversight that are in place. We examined what controls were in place to provide assurance that removal decisions are consistent with established policies, procedures, and guidelines across field offices, and examined whether these controls were designed to be consistent with the internal control standards. We did not test ICE controls in place as part of our review. We also interviewed headquarters officials responsible for overseeing ICE's enforcement operations to examine controls in place to monitor enforcement activities.
We met with ICE headquarters officials responsible for overseeing ICE databases containing information pertinent to alien apprehension and removal outcomes, and we inquired about information collected in these databases regarding officer decision making, including cases involving humanitarian issues and cases involving aliens who are not targets of ICE investigations. We also interviewed ICE officers, supervisors, and management personnel at the ICE field offices we visited to identify the types of supervisory reviews and approvals required for decisions made by ICE officers and attorneys and the documentation to be reviewed and approved by supervisors in regard to these decisions. We reviewed data on alien apprehensions for worksite enforcement operations for fiscal year 2002 through fiscal year 2007 to identify trends in ICE's expanded enforcement efforts. We also reviewed data on alien apprehensions resulting from fugitive operations. To determine the reliability of the data, we interviewed headquarters officials responsible for overseeing and verifying the data, reviewed existing documentation regarding the data, and interviewed headquarters officials responsible for tracking statistics pertaining to the data. We conducted our work between August 2006 and September 2007 in accordance with generally accepted government auditing standards. In addition to the above, John F. Mortin, Assistant Director; Teresa Abruzzo; Joel Aldape; Frances Cook; Katherine Davis; Kathryn Godfrey; Wilfred Holloway; and Ryan Vaughan made key contributions to this report.

Officers with U.S. Immigration and Customs Enforcement (ICE) within the Department of Homeland Security (DHS) investigate violations of immigration laws and identify aliens who are removable from the United States. ICE officers exercise discretion to achieve the agency's operational goals of removing aliens subject to removal while prioritizing those who pose a threat to national security or public safety and safeguarding aliens' rights in the removal process. The Government Accountability Office (GAO) was asked to examine how ICE ensures that discretion is used in the most fair, reasoned, and efficient manner possible. GAO reviewed (1) when and how ICE officers and attorneys exercise discretion and what internal controls ICE has designed to (2) guide decision making and (3) oversee and monitor officers' decisions. To conduct this work, GAO reviewed ICE manuals, memorandums, and removal data, interviewed ICE officials, and visited 21 of 75 ICE field offices. ICE officers exercise discretion throughout the alien apprehension and removal process, but primarily during the initial phases of the process when deciding to initiate removals, apprehend aliens, issue removal documents, and detain aliens. Officers GAO interviewed at ICE field offices said that ICE policies and procedures limit their discretion when encountering the targets of their investigations—typically criminal or fugitive aliens—but that they can exercise more discretion for other aliens they encounter. Officers also said that they consider humanitarian circumstances, such as sole caregiver responsibilities or medical reasons, when making these decisions. Attorneys, who generally enter later in the process, and officers told GAO that once removal proceedings have begun, discretion is limited to specific circumstances, such as if the alien is awaiting approval of lawful permanent resident status.
Consistent with internal control standards, ICE has begun to update and enhance training curricula to better support officer decision making. However, ICE has not taken steps to ensure that written guidance designed to promote the appropriate exercise of discretion during alien apprehension and removal is comprehensive and up to date and has not established time frames for updating guidance. For example, field operational manuals have not been updated to provide information about the appropriate exercise of discretion in light of a recent expansion of ICE worksite enforcement and fugitive operations, in which officers are more likely to encounter aliens with humanitarian issues or who are not targets of investigations. Also, ICE does not have a mechanism to ensure the timely dissemination of legal developments that would help officers make decisions in line with the most recent interpretations of immigration law. As a result, ICE officers are at risk of taking actions that do not support operational objectives and making removal decisions that do not reflect the most recent legal developments. Consistent with internal control standards, ICE relies on supervisory reviews to ensure that officers exercise appropriate discretion and has instituted an inspection program designed to ensure that field offices comply with established policies and procedures. However, ICE lacks other controls to help monitor performance across the 75 field offices responsible for making apprehension and removal decisions. A comprehensive mechanism for reviewing officers' decision making could provide ICE with meaningful information to analyze trends to identify areas that may need corrective action and to identify best practices. ICE officials acknowledged they do not collect the data necessary for such a mechanism and said doing so may be costly. Without assessing costs and alternatives, ICE is not in a position to select an approach that will help identify best practices and areas needing corrective action.
The Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act), as amended, defines the federal government's role during disaster response and recovery. The Stafford Act also establishes the programs and processes through which the federal government provides disaster assistance to state, tribal, territorial, and local governments, as well as certain nonprofit organizations and individuals. According to the act, the President can declare a major disaster after a governor or chief executive of an affected tribal government finds that a disaster is of such severity and magnitude that effective response is beyond the capabilities of the state and local governments and that federal assistance is necessary. That is, when the governor of a state or the chief executive of an Indian tribal government requests a declaration for a major disaster, FEMA evaluates the request and makes a recommendation to the President, who decides whether or not to declare a major disaster and commit the federal government to provide supplemental assistance. Generally, state and local governments are responsible for the remaining share of disaster costs. If the President declares a major disaster, the declaration can trigger a variety of federal assistance programs for governmental and nongovernmental entities, households, and individuals. FEMA provides disaster assistance to states, tribal governments, localities, and individuals through several programs, including the Public Assistance (PA) and Individual Assistance (IA) programs. PA is the largest of FEMA's disaster assistance programs. It provides grants to fund debris removal and the repair, replacement, or restoration of disaster-damaged facilities. PA also funds certain types of emergency protective measures that eliminate or reduce immediate threats to lives, public health, safety, or improved property. To determine whether to recommend that a jurisdiction receive PA funding, FEMA relies on a series of factors, including the statewide per capita impact indicator. FEMA's IA program ensures that disaster survivors have timely access to a full range of programs and services to maximize their recovery, through coordination among federal, state, tribal, and local governments, nongovernmental organizations, and the private sector. Among other things, IA programs provide housing assistance, disaster unemployment assistance, crisis counseling, and legal services. Individuals and households may be eligible for financial assistance or direct services if, due to the disaster, they have been displaced from their primary residence, their primary residence has been rendered uninhabitable, or they have necessary expenses and serious needs that are unmet through other means, such as insurance. The IA program provides assistance of up to $32,900 for fiscal year 2015 to eligible individuals and households who, as a direct result of a major disaster or emergency, have uninsured or underinsured necessary expenses and serious needs that cannot be addressed by other means, such as through other assistance programs or insurance.
Specific IA programs and areas of responsibility include: the Individuals and Households Program, including Housing Assistance and Other Needs Assistance; the Disaster Unemployment Assistance Program; Disaster Legal Services; the Crisis Counseling Assistance and Training Program; the Disaster Case Management Program; Mass Care and Emergency Assistance Coordination; Voluntary Agency Coordination; and Disaster Recovery Center and Disaster Survivor Assistance Coordination. If approved for federal disaster assistance, states, tribal governments, and localities are expected to contribute toward disaster response and recovery costs. The usual cost share arrangement calls for the federal government to pay not less than 75 percent of the eligible PA costs of a disaster and for nonfederal entities (e.g., state and local governments) to pay the remaining nonfederal share of 25 percent. The federal government covers 100 percent of the Individuals and Households Program but requires states to contribute 25 percent to the Other Needs Assistance component of this program. This component covers repair or replacement costs for personal property, including furniture and personal belongings, and some uninsured medical, dental, funeral, and transportation expenses, as well as child care and other expenses. If states are denied federal disaster assistance, they may choose to cover some of these costs. Disaster funding, like most other state expenditures, is typically part of a state's annual operating budget, which provides appropriations through the fiscal year. Disaster costs typically compete with other state priorities unless states establish a separately sourced disaster fund outside of the funds tied to the state's balanced budget requirements. Most states have constitutional or statutory provisions requiring that they balance their operating budgets, commonly referred to as their general fund. All 10 states in our review used a range of mechanisms to ensure the availability of funds for unforeseen disaster costs during the fiscal year or current budget cycle. While each state had its own set of budget mechanisms, all of the selected states provided disaster funds at the start of the fiscal year and as needed during the course of the fiscal year. The types of unforeseen disaster costs states encountered depended, in large part, on the kind of disaster, but were typically related to emergency response activities. For instance, the costs of clearing debris and repairing roads along with emergency policing were typical expenses that states incurred after a major storm. Many of those expenses qualified for federal reimbursement under a presidential disaster declaration. Statewide disaster accounts. Statewide disaster accounts provided funding for disaster expenditures across state agencies or for localities. As shown in figure 2, all 10 states in our review established one or more types of statewide disaster accounts that received funds from general fund appropriations or from other revenue sources. All 10 states funded these statewide accounts through general fund revenues, and 6 states—Alaska, California, Florida, Indiana, North Dakota, and Vermont—used other revenue sources in addition to general fund revenues to cover unforeseen costs that arose during the fiscal year.
For example, Florida imposed an annual surcharge on homeowners' residential insurance policies and on commercial and business owners' property insurance policies, which the state then deposited into a trust fund to be used for emergency management purposes. In addition, one of Indiana's statewide disaster funds relied on public safety fees generated through the sale of retail fireworks, while North Dakota funded its statewide disaster account through a biennial appropriation from the revenues of the state's share of oil and gas taxes. The states in our review based initial funding levels for statewide disaster accounts on a range of considerations, such as estimates of disaster costs based on past events and emergency response costs for unforeseen disasters. Although some statewide disaster accounts allow unexpended balances to be carried over into future fiscal years, states typically budgeted these costs for a single budget cycle. For example, based on its past disaster costs, Alaska typically budgeted disaster relief funds to cover the costs of two state-declared disasters (totaling $2 million) and two federally declared disasters (totaling $5 million to $6 million). Some states, such as North Dakota and California, may also establish funding amounts in statute. Specifically, North Dakota's Disaster Relief Fund receives an appropriation of $22 million every 2 fiscal years, or each biennial budget cycle, while California's Disaster Response-Emergency Operations Account receives an annual appropriation of $1 million at the beginning of each fiscal year, consistent with the state's budget cycle. In establishing statewide disaster accounts, states typically defined the criteria under which the account funds could be used. For example, in Oklahoma, the governor is authorized to distribute funds from the state's disaster account to agencies that requested funds for emergency situations including: (1) destruction of public property; (2) operation of the National Guard; (3) matching funds for federal disaster relief programs; (4) asbestos removal from public buildings; and (5) emergency response necessary to protect the public health, safety, or welfare of livestock or wild animals. In North Dakota, the state's Disaster Relief Fund could be used to reimburse state agencies for disaster-related expenses incurred above the agencies' normal operating costs. Budgets of state agencies. Nine of the 10 selected states also covered a portion of unforeseen disaster costs through the operating budgets of state agencies with missions relevant to disaster response and recovery, such as public safety and transportation. For example, in West Virginia, the state's Division of Homeland Security and Emergency Management within the Department of Military Affairs and Public Safety used its regular operating budget to cover disaster response costs. Other agencies in West Virginia, such as the state's transportation and police departments, also used funds in their operating budgets to cover major disaster costs. These agencies then submitted these costs to the emergency management office for reimbursement. As was shown in figure 2 earlier, of the 10 selected states, seven maintained contingency accounts for disasters. For example, Florida's Department of Environmental Protection established a disaster contingency account funded through user fees on Florida's state parks.
In addition, the contingency fund for California's Department of Forestry and Fire Protection typically received an appropriation based on the average emergency cost from the previous five years. Supplemental appropriations. Eight of the 10 states in our review made use of supplemental appropriations when the funds appropriated to statewide accounts or agency budgets at the beginning of the fiscal year were insufficient. When states' general funds served as the source of supplemental appropriations, these funds were unavailable to spend on other budget areas. Statewide multipurpose reserve accounts, such as budget stabilization funds (also referred to as rainy day funds), could also be tapped in the event that funds were not available through other means. A few states expanded the conditions under which budget stabilization funds could be tapped to include unanticipated expenses not directly related to revenue shortfalls or budget deficits. For example, although initially intended to offset revenue shortfalls, West Virginia's budget stabilization fund was subsequently modified to allow the state legislature to make appropriations from the fund for emergency revenue needs caused by natural disasters, among other things. However, budget officials from several states in our review told us that it was uncommon to access budget stabilization funds to cover disaster expenses because their state could generally provide disaster funding from a combination of general fund revenues and spending reductions in other areas. For example, despite having expanded its acceptable uses to include natural disasters, West Virginia only accessed its budget stabilization fund once since 2005 to cover disaster-related expenses. Similarly, in Florida, the state's budget stabilization fund was last used for disaster costs during the 2004 and 2005 hurricane seasons. Funding transfers. In addition, nine states in our review had mechanisms to allow designated officials (e.g., the governor, budget director, or a special committee) to transfer funds within or between agencies or from statewide reserve accounts after the start of the fiscal year. For example, in Indiana, if funds within an agency's budget are insufficient to cover the unexpected costs of a disaster, a special finance board can authorize a transfer of funds from one agency to another. In addition, the state's budget director can transfer appropriations within an agency's accounts if needed for disaster assistance. The authority to release funds from disaster accounts varied by state and resided with the governor, the legislature, or special committees. As we have previously reported, a state where the legislature is in session for only part of the year might give the governor more control over the release of disaster funds. For example, in the event that the Alaska legislature is out of session, the presiding officers of the legislature can agree in writing to suspend the $1 million limit placed on the Governor's disaster spending authority. Also, if a state legislature already appropriated a portion of general fund or other revenues to a disaster account, the governor or budget director can exert greater control over access to the reserves. For example, in California, a gubernatorial emergency declaration grants the state's Director of Finance the authority to tap into any appropriation in any department for immediate disaster response needs. All states in our review budgeted for ongoing costs associated with past disasters.
Typically, these ongoing costs included recovery-related activities, such as rebuilding roads, repairing bridges, and restoring public buildings and infrastructure. Costs associated with past disasters included the state's share of federal disaster assistance and disaster costs the state would cover in the absence of a federal declaration. In budgeting for the costs of past disasters, all 10 states determined their budgets based on cost estimates for the upcoming fiscal year, even though each disaster declaration could span several budget cycles. As was shown in figure 2, all selected states used a range of budget mechanisms to cover the cost of past disasters. These mechanisms were similar to those the states used to budget for unforeseen disaster costs. States used some of the mechanisms to appropriate funds at the start of the fiscal year and used other mechanisms to provide disaster funds during the course of the fiscal year. For example, in Missouri, multiple agency accounts funded the expenses from past disasters incurred by state agencies, while a separate statewide account covered the nonfederal match of disaster programs. The funding levels in states' accounts varied from year to year depending on annual estimates of expected disaster costs, primarily determined through the project worksheet process—the means by which the estimated costs are determined by FEMA and the state. For example, Florida's emergency management agency forecasts the ongoing costs associated with past disasters for three future fiscal years and reports these cost estimates on a quarterly basis. In New York, the Governor's budget office, along with its emergency management agency, periodically estimated the amount of disaster program costs the federal government would cover in addition to costs the state would have to bear. Most states in our review had established cost share arrangements with localities and passed along a portion of the required nonfederal cost share to them. Two states—Alaska and West Virginia—covered the 25 percent cost share for federally declared disasters, while only one state—Indiana—passed the 25 percent nonfederal cost share on to its affected localities. In Vermont, municipalities that adopted higher flood hazard mitigation standards could qualify for a higher percentage of state funding for post-disaster repair projects, ranging from a minimum of 7.5 percent to a maximum of 17.5 percent. In Florida, the state typically evenly split the nonfederal share with local governments but would cover a greater percentage of the nonfederal share for economically distressed localities. None of the 10 states in our review maintained reserves dedicated solely for future disasters outside of the current fiscal year. As discussed earlier in this report, although funds in some states' statewide disaster accounts could be carried forward into the future, funding for these accounts was typically intended to fund a single fiscal year. For example, unexpended balances from Indiana's State Disaster Relief Fund—which receives an annual appropriation of $500,000—could be carried forward from one year to the next. Similarly, North Dakota's Disaster Relief Fund, which receives a biennial appropriation, can carry forward unexpended fund balances into the next biennial cycle. According to a North Dakota state official, this procedure was established in statute to provide a ready source of disaster funding.
Otherwise, according to this official, the state legislature would need to identify large amounts of funding from the general fund account at the start of each budget cycle. Some state officials reported that they could cover disaster costs without dedicated disaster reserves because they generally relied on the federal government to fund most of the costs associated with disaster response and recovery. During the past decade, the federal government waived or reduced state and local matching requirements during extraordinary disasters such as Hurricanes Katrina and Sandy. For Hurricane Sandy, however, 100 percent of the federal funding was only available for certain types of emergency work and for a limited period of time. As we have reported in our prior work on state emergency budgeting, natural disasters and similar emergency situations did not have a significant effect on state finances because states relied on the federal government to provide most of the funding for recovery. Some states nonetheless covered disaster costs on their own; Alaska, for example, provided disaster assistance to localities on several occasions after being denied federal assistance, and the state's individual assistance program provides reimbursement for personal property loss and assistance with housing repairs at 50 percent of the annual approved amounts for the federal IA program. Overall, states did not make major changes to their approaches to budgeting for disaster costs between fiscal years 2004 and 2013. Some states in our review did take steps to increase the availability of disaster funds, while others changed procedures related to legislative oversight. Although the national economic recession occurred during this time (officially lasting from December 2007 to June 2009) and resulted in state revenue declines of 10.3 percent, states in our review reported that they were able to ensure the availability of funding to cover the cost of disasters. Officials in Alaska and North Dakota, for example, reported that state revenues generated from oil and gas taxes buffered their states from much of the fiscal distress that other states had experienced during the 2007 to 2009 recession. Three states in our review—Alaska, Indiana, and North Dakota—changed their budgeting approaches to further ensure the availability of disaster funding prior to a disaster rather than after a disaster. While these moves did not provide funding for future disasters beyond the current fiscal year, they did improve the availability of funds for disaster response within the current fiscal year. For example, Alaska established a statewide disaster fund in the late 1960s to ensure the availability of disaster funding. Prior to 2010, Alaska primarily funded the disaster fund through supplemental appropriations after a disaster had occurred and after the state's administration and emergency management agency had requested funding. However, according to a state official, this approach did not provide funding timely enough for state agencies and localities to respond quickly to a disaster. Rather, the approach involved waiting for the state legislature to appropriate funds to the state's disaster account, which could have taken weeks, particularly if the legislature was not in session. At that time, Alaska experienced multiple concurrent disasters. In addition, the nature of Alaska's climate and the remote location of many of its communities resulted in a need for the state to take swift action to respond to disasters so that residents were able to repair or rebuild their damaged homes before the onset of winter.
Consequently, the state began to forward fund the disaster fund to have more money available immediately after a disaster. According to this state official, the change in approach relied on cost estimates of multiple disasters to develop an annual budget figure. Indiana established its State Disaster Relief Fund in 1999 but, due to fiscal constraints, did not initially appropriate funds to the account and instead relied on general revenue funds to pay for disasters on an as-needed basis. In 2006, the state began dedicating revenues generated from firework sales to the fund to ensure the availability of a dedicated source of disaster funding, and in 2007 it established in statute that the fund would receive an annual appropriation of $500,000 from those revenues. North Dakota established its Disaster Relief Fund during its biennial legislative session (2009 to 2011) to ensure the availability of funding in the event of a disaster. The state appropriated money to the fund at the beginning of the state's biennial budget cycle with revenues generated from the state's tax on oil and gas production. In order to respond to disasters prior to the establishment of this fund, state agencies with emergency response missions, such as the Department of Transportation, had to request funding directly from the state legislature during the time it was in session. However, if the legislature was out of session, state agencies were required to obtain a loan from the Bank of North Dakota to cover their immediate disaster costs. Then, to repay the loan, the agencies needed to request a supplemental appropriation when the state legislature reconvened. A North Dakota state official told us that this process was inefficient, so the state legislature established the Disaster Relief Fund to provide an easier means for accessing disaster relief funds. Legislatures in three of our review states—North Dakota, Missouri, and West Virginia—took steps to increase their oversight of disaster spending. After North Dakota established a dedicated revenue source to ensure the availability of disaster funding, the state legislature took subsequent steps to increase the oversight of disaster relief funds. In particular, the legislature required state agencies to submit a request to the state's Emergency Commission in order to receive disaster funding. Established in 2011, the Emergency Commission, composed of the Governor, Secretary of State, and the House and Senate majority leaders, has the authority to approve the appropriation of supplemental funding when there is an imminent threat to the safety of individuals due to a natural disaster or war crisis or an imminent financial loss to the state. Prior to 2011, the state's emergency management agency had been authorized to access disaster relief funds directly without approval from the Commission. According to a state emergency management official, the legislature took this action in response to a number of instances in which federal PA funds initially awarded to the state were deobligated, leaving the state with unanticipated disaster response costs. In one instance, for example, federal PA funds were deobligated because the state did not properly document the pre-existing conditions of a parking lot damaged by the National Guard in responding to a disaster.
In this particular case, the state had to appropriate funds from its disaster relief fund to cover the cost of repair, rather than rely on federal PA funding to cover these costs. To provide more oversight for disaster expenditures, the Missouri legislature changed its requirements for accessing funds from the State Emergency Management Agency (SEMA) budget. Specifically, the legislature required that the administration seek legislative approval for all supplemental appropriations to the SEMA budget. According to Missouri budget officials, SEMA used to submit a budget request that represented a rough estimate of anticipated costs for the upcoming fiscal year. If actual costs exceeded SEMA's appropriation, the administration had the authority to appropriate additional money from general revenues for specific line items on an as-needed basis without additional legislative approval. West Virginia's legislature increased oversight of disaster funding by restricting the use of funds appropriated to the Governor's Contingent Fund. In prior years, the legislature appropriated funds to the Governor's Contingent Fund as a civil contingent fund—a very broad term, according to a state budget official. Over the last few years, the legislature changed the appropriations bill language to limit spending flexibility for money appropriated to the fund. For example, appropriations bill language specified that funds were being appropriated for "2012 Natural Disasters" or "May 2009 Flood Recovery." States rely on the assurance of federal assistance when budgeting for disasters. Based on current regulations, policies, and practices, the federal government is likely to continue to provide federal funding for large-scale disasters. In light of this federal approach to funding disaster response and recovery, the states in our review designed their budgeting approaches for disasters to cover the required state match for federal disaster assistance as well as the costs they incur in the absence of a federal declaration. For unforeseen disaster costs and for ongoing costs associated with past disasters, these states relied on a number of budget mechanisms, including statewide disaster accounts, state agency budgets, and supplemental appropriations, to ensure the availability of funding for disasters. However, none of the states in our review maintained reserves dedicated solely for future disasters outside of the current fiscal year. More frequent and costly disasters could prompt reconsideration of approaches to dividing state and federal responsibilities for providing disaster assistance. Given the fiscal challenges facing all levels of government, policymakers could face increased pressure to consider whether the current state and federal approach for providing disaster assistance balances responsibilities appropriately. Absent federal policy changes, the experience of the 10 states we reviewed suggests that states will likely continue to rely on federal disaster assistance for most of the costs associated with the response to large-scale disasters. We provided a draft of this report to the Secretary of the Department of Homeland Security for review and comment. The Department of Homeland Security generally agreed with our findings and provided technical comments, which we incorporated as appropriate. Additionally, we provided excerpts of the draft report to budget officers and emergency management officials in the 10 states we included in this review. We incorporated their technical comments as appropriate.
As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of the Department of Homeland Security and interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions concerning this report, please contact Michelle Sager at (202) 512-6806 or sagerm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The objectives of our review were to determine (1) the approaches selected states use to budget for and fund state-level disaster costs; and (2) how, if at all, state disaster budgeting approaches have changed over time, including the factors influencing those changes and any challenges states encountered in budgeting for state-level disaster costs. To address the objectives, we selected a nonprobability sample of 10 states from the 50 states and the District of Columbia. To select the states for our sample, we obtained data from the Federal Emergency Management Agency's (FEMA) Integrated Financial Management Information System on major disaster declarations by state during fiscal years 2004 through 2013. We focused on this time frame because it contained the most current data for major disaster declarations. We assessed the reliability of the FEMA data by discussing with another GAO team their recent access and use of the data in a prior year's report and their determination that the data provided reliable evidence to support findings, conclusions, and recommendations. We also discussed data quality control procedures with FEMA officials who were knowledgeable about the specific types of data recorded in the database. Based on how we intended to use the information, we determined that the data were sufficiently reliable for the purpose of selecting states for our study. We sorted the data obtained based on the total number of major disaster declarations approved by state. We calculated the median number of major declarations approved by FEMA and identified states directly above the median. For those states, we also identified the number of major disaster declarations that had been denied by FEMA during the same time period, which ranged from zero denials to seven denials. We then calculated the statewide Public Assistance per capita amount of funding, based on FEMA's statewide per capita indicator of $1.39 and the U.S. Census Bureau's 2013 population estimate for each state. That is, we multiplied the 2013 population estimate for each state by the PA per capita indicator of $1.39. We then grouped the states according to low, medium, and high per capita threshold levels. To ensure geographic dispersion and a range of per capita amounts, we selected 10 states—four low per capita states (Alaska, North Dakota, Vermont, and West Virginia), two medium per capita states (Missouri and Oklahoma), and four high per capita states (California, Florida, Indiana, and New York) (see table 1 for additional information). The results of our study are not generalizable to state budgeting approaches for all states and the District of Columbia.
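The state-selection arithmetic described above is straightforward to reproduce. The following is a minimal sketch in Python; the $1.39 statewide PA per capita indicator is the FEMA figure cited above, while the population values and the low/medium/high grouping shown here are hypothetical placeholders, since the exact population estimates and cutoff levels used are not listed in this appendix.

    PA_PER_CAPITA = 1.39  # FEMA's statewide PA per capita indicator, in dollars

    # Hypothetical 2013 Census population estimates (illustrative only).
    populations = {
        "State A": 700_000,
        "State B": 6_000_000,
        "State C": 19_500_000,
    }

    # Statewide PA amount: 2013 population estimate multiplied by $1.39.
    pa_amounts = {state: pop * PA_PER_CAPITA for state, pop in populations.items()}

    # Rank states by PA amount; the states were then grouped into low,
    # medium, and high per capita levels (exact cutoffs are not given here).
    for state, amount in sorted(pa_amounts.items(), key=lambda item: item[1]):
        print(f"{state}: ${amount:,.0f}")

For example, a state with a 2013 population estimate of 700,000 would have a statewide PA amount of about $973,000 (700,000 x $1.39).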
We then developed and administered a semistructured interview to state budget officers and emergency management officials in the 10 selected states regarding the approaches they used to budget for and fund state-level disaster costs and how, if at all, approaches changed over time. To address the first objective, we analyzed information from the semistructured interviews about selected states' approaches to budgeting for disasters. We also obtained and analyzed state budget and other relevant documents to determine how states estimate, authorize, and appropriate state disaster funds, the extent to which states share costs with affected localities, and how cost share arrangements with affected localities are determined. To address the second objective, we analyzed information from the semistructured interviews about how states' budgeting approaches have changed during the past decade, factors influencing any changes, and any challenges states face in funding disaster assistance. We focused our questions on the period covering fiscal years 2004 through 2013. We also analyzed FEMA data regarding major state disasters to identify possible trends in the frequency, severity, type, and cost of state disaster events during the period from fiscal years 2004 through 2013. For both objectives, we analyzed relevant state statutes and regulations that govern the use of state disaster funds. In addition, we interviewed FEMA officials who participate in making recommendations to the President as to whether state requests for federal disaster funding should be approved or denied. We conducted this performance audit from April 2014 to March 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The 10 selected states in our review used a range of budget mechanisms to cover the costs of disasters. This appendix provides additional detail on the range of disaster-specific funds, disaster assistance programs, and cost share arrangements in the 10 states. In addition to the contact named above, Stanley Czerwinski, Brenda Rabinowitz (Assistant Director), Kathleen Drennan (Analyst-in-Charge), Mark Abraham, Liam O'Laughlin, and Robert Yetvin made key contributions to this report. Aditi Archer, Amy Bowser, Jeffrey Fiore, Robert Gebhart, Carol Henn, Donna Miller, Susan Offutt, and Cynthia Saunders also contributed to this report.

In recent years, natural and human-made disasters have increased in the United States in terms of both numbers and severity. For presidentially declared disasters, the federal government generally pays 75 percent of disaster costs and states cover the rest. As a result of this trend, governments at all levels have incurred increased costs for disaster response and recovery. An understanding of the approaches states take to budget for disaster costs can help inform congressional consideration of the balance between federal and state roles in funding disaster assistance. GAO was asked to examine how states typically budget for costs associated with disasters and any changes to those budget approaches during the past decade.
This report reviewed (1) the approaches selected states use to budget for and fund state-level disaster costs; and (2) how, if at all, state disaster budgeting approaches have changed over time. For this review, GAO selected 10 states based on criteria such as the number of major disaster declarations and denials for each state from fiscal years 2004 to 2013. GAO reviewed state statutes, budgets, and other documents explaining states' approaches to budgeting for disaster costs and interviewed state officials. Although GAO's findings are not generalizable, they are indicative of the variation in budget mechanisms among the states. GAO is not making recommendations. GAO received and incorporated, as appropriate, technical comments from the Department of Homeland Security and the 10 selected states. The 10 selected states in GAO's review—Alaska, California, Florida, Indiana, Missouri, New York, North Dakota, Oklahoma, Vermont, and West Virginia—had established budget mechanisms to ensure the availability of funding for the immediate costs of unforeseen disasters and the ongoing costs of past disasters. All 10 states provided disaster funds at the start of the fiscal year and then as needed during the course of the fiscal year. Each of the selected states had its own combination of budget mechanisms that generally fell into four categories: Statewide disaster accounts. These accounts provided the 10 states with the flexibility to fund disaster expenses across state entities or for local governments. States typically funded these accounts through general fund revenue. Six states also used other sources, such as revenues from oil and gas taxes and fees on homeowner's and commercial insurance. The amounts appropriated to these accounts at the start of the fiscal year were based on a range of considerations, such as estimates of disaster costs based on past events and emergency response costs for unforeseen disasters. State agency budgets. Nine of the 10 states also covered a portion of unforeseen disaster costs through the operating or contingency budgets of state agencies with missions relevant to disaster response and recovery. For example, West Virginia's Division of Homeland Security and Emergency Management used its operating budget to cover disaster response costs. Florida's Department of Environmental Protection had a disaster contingency account funded through user fees on state parks. Supplemental appropriations. When advance funding proved insufficient to cover disaster costs, eight of the 10 states provided supplemental funding to pay for the remaining costs. While reserve accounts such as rainy day funds could be used to provide this funding if general funds were unavailable, budget officials said their states rarely tapped these funds. Transfer authority. All 10 states in GAO's review allowed designated officials (i.e., the governor, budget director, or a special committee) to transfer funds within or between agencies or from statewide reserve accounts after the start of the fiscal year. None of the 10 states in GAO's review maintained reserves dedicated solely for future disasters. Some state officials reported that they could cover disaster costs without dedicated disaster reserves because they generally relied on the federal government to fund most of the costs associated with disaster response and recovery.
While some states have increased the oversight and availability of disaster funds, all 10 states' approaches to budgeting for disasters have remained largely unchanged during fiscal years 2004 through 2013. Specifically, three states—Alaska, Indiana, and North Dakota—changed their budgeting processes to ensure that funding for disasters was appropriated before rather than after a disaster occurred. In addition, legislatures in three states—Missouri, North Dakota, and West Virginia—took steps to increase their oversight of disaster spending.
ATVs are intended for use on various types of unpaved terrain. They have large, low-pressure tires; straddle seats; and handlebars for steering control. ATVs generally are intended for use by a single operator; however, some models have a seat for an additional passenger. ATVs are considered to be "rider-active" vehicles—that is, when the operator shifts his or her body weight, the operation of the vehicle is affected, including the turning and stability of the vehicle. Although three-wheeled ATVs originally were produced, nearly all ATVs in use today are four-wheeled vehicles. ATVs are available in a number of different models. For example, sport ATVs are designed for recreational trail riding. Utility ATVs are also recreational vehicles, but have cargo racks and can be fitted with attachments, including trailers, and have utility purposes such as doing chores and farming. Additionally, ATVs are available in youth-sized and adult-sized models. (See fig. 1.) Youth-sized models typically have engine sizes of 90 cubic centimeters or less and weigh less than 300 pounds, while adult-sized models have engine sizes that range from 90 cubic centimeters to 1,000 cubic centimeters and weigh between 300 and 700 pounds. A new "transition" model, which is designed for children who are 14 years old or older and are under adult supervision, has an engine size of about 150 cubic centimeters and weighs around 350 pounds. ATVs were first introduced in the 1970s. Sales of the vehicles increased substantially in the 1980s and more than tripled from 1980 to 1985. Moreover, according to Commission staff estimates, the number of four-wheeled ATVs in use nearly tripled during the last decade, from about 3.6 million in 1999 to about 10.2 million in 2008, an increase of 183 percent. (See fig. 2.) However, Commission staff estimates of the number of ATVs in use do not include information on how often the vehicles are used or how many miles they are ridden. According to a 2008 survey of ATV owners conducted by the Motorcycle Industry Council, a trade association that represents motorcycle manufacturers and distributors and works with the Specialty Vehicle Institute of America, which represents ATV manufacturers and distributors, nearly 40 percent of ATV riders are younger than 30 years of age. Although the survey also found that the majority of ATV riders—81 percent—are male, it indicated that the number of female riders had increased 138 percent since 2000. The ATV market in the United States has changed in recent years as new entrants, mainly from China and Taiwan, have gained market share from the traditional manufacturers in the United States (e.g., Honda, Polaris, and Yamaha). According to a market research firm that collects data on U.S. ATV sales, the market share represented by new entrants increased from 14 percent in 2004 to 42 percent in 2007 before declining to 34 percent in 2008. As shown in figure 3, total U.S. sales of ATVs declined from about 1.1 million units in 2007 to 689,000 units in 2008, a decrease of about 400,000 vehicles, which industry officials attributed to the economic recession. The market research firm also estimated that youth-sized models represented about 39 percent of all U.S. ATV sales in 2008 and that the new entrants sold about 83 percent of youth-sized models. According to the Specialty Vehicle Institute of America, youth-sized models typically cost between $1,600 and $3,400 and adult-sized models typically cost between $2,300 and $13,700.
A transition model costs between $3,200 and $3,350. However, we also found some youth-sized ATVs being sold via the Internet that were priced at less than $500. The Commission is an independent federal agency charged with protecting the public from unreasonable risks of serious injury or death from thousands of types of consumer products, including ATVs. As part of its safety oversight responsibilities, the Commission regulates ATV performance to promote safe design practices and produces an annual report on fatalities and injuries. Commission staff told us they collect data on fatalities by reviewing death certificates collected from states, news reports, and other sources, and investigate, or attempt to investigate, every ATV fatality by examining official reports. In addition to counting the number of documented fatalities, because some ATV fatalities are not reported, Commission staff also estimate the number of fatalities. Moreover, because ATV fatalities may not be reported immediately, it normally takes 3 years for Commission staff to receive data on fatalities for a given year. In addition to collecting data on fatalities, Commission staff collect data on injuries from the agency's National Electronic Injury Surveillance System, which is a national probability sample of hospitals in the United States and its territories. According to Commission staff, the agency's estimated ATV injury data are current for that reporting year. The Commission also accepted action plans from ATV manufacturers that were submitted prior to August 14, 2008, and were grandfathered in, according to the statute, for a total of 37 "approved" plans. Commission staff told us they provide U.S. Customs and Border Protection with the names of manufacturers and distributors with approved action plans so that companies without approved plans can be barred from importing their products into the United States. The mandatory standard defines youth-sized ATVs by their maximum speeds, rather than by engine size, as was done under the consent decree. Under the standard, maximum speed capabilities are provided for four types of youth-sized models. These capabilities depend on the rider's age. In addition, the standard requires that all youth-sized models be equipped with a speed governor that limits the vehicle to a certain speed but can be adjusted or removed with the use of a tool or other specialized device so that the vehicle can then go faster. The standard addresses two types of maximum speed capabilities: (1) when a required speed governor is in use and (2) when the speed governor is disabled or removed. (See table 1.) For example, youth models designed for children 6 years of age and older are limited to a maximum speed of 10 miles per hour when the speed governor is used and 15 miles per hour when the governor is disabled. The standard also requires manufacturers to deliver ATVs with the speed-limiting devices adjusted for the respective youth categories. ATVs are primarily used for recreational purposes; however, ATVs are also used for occupations such as farming; for government functions, such as policing and patrolling public lands; and for transportation in remote areas. Although we found little information that quantified the uses of ATVs and the advantages of their use, a survey of owners conducted by the Motorcycle Industry Council in 2008—according to Commission staff, the only recent national survey of ATV owners—indicated that 79 percent use their ATVs recreationally and that the most common recreational activities were pleasure riding and trail riding. (See fig. 4.)
Owners also reported using their ATVs for other recreational activities such as hunting, camping, fishing, and racing. (See table 2.) The two types of ATVs available—sport and utility models—are used for specific recreational purposes. For example, sport ATVs are used for recreational trail riding and racing, while utility ATVs are used for activities such as camping because they are designed with cargo racks and can be fitted with attachments. One dealer told us that outdoors enthusiasts, such as campers and hunters, use ATVs because some outdoor areas cannot be reached with other four-wheel-drive vehicles, such as jeeps. He also said that ATVs can be easily towed and that some are light enough to be placed on a truck bed, allowing them to be transported to a trail or park and be packed with gear that otherwise could not be carried. In the owners' survey, owners were asked to rank possible reasons for riding on a scale from 1 to 10, with 1 being "not important" and 10 being "very important." Their top responses were having fun, enjoying the outdoors, and exploring hard-to-reach places. Owners also said that relaxation and family activity were important reasons for riding. (See table 3.) Other state-based surveys of ATV riders that we reviewed supported the results of the nationwide owners' survey. For example, a 2005 survey of registered owners in Minnesota found that relaxation, being in a natural area, and being with friends and family were the most important advantages of ATV riding. In addition, a 2005 survey of ATV club members in New York found that riding with family and friends, relaxation, and viewing the scenery were some of the highest-rated advantages of ATV riding. Two officials from user groups told us that riding is a family activity and that ATVs have enabled a broader cross-section of the population, including older riders, to experience and appreciate the outdoors. We spoke with the owner of an ATV touring business in Alaska who told us that the company receives much of its business from families and travelers arriving on cruise ships who are seeking an easy way to see the Alaskan countryside. Although the owners' survey indicated that most recreational riding occurs on private property, ATV trails and riding areas have been designated on federal, state, and private lands throughout the country. An official from the National Off-Highway Vehicle Conservation Council, an organization that promotes the availability of trails for off-highway vehicles, including ATVs, told us that there are a number of riding locations, especially areas on private lands, that have designated difficulty levels, ranging from ones designed for beginners to others for riders with more advanced skills. Trail managers told us that some of these trails have become popular destinations for groups of riders, particularly for families. For example, a former Paiute Trail manager told us that the trail has gained popularity and the number of ATV riders accessing the Paiute Trail in south-central Utah increased from about 23,700 in 1995 to about 85,000 in 2009, a 259 percent increase. He told us the Paiute Trail provides access to over 700 miles of connected trails. These riding areas host ATV events that attract enthusiasts from all over the country. For example, the former Paiute Trail manager told us that the number of participants in the Rocky Mountain Jamboree, an event that includes an ATV parade and guided tours, grew from 41 in 1993 to over 500 in 2009.
Some studies have shown that spending by ATV riders on items such as food, lodging, gasoline, and vehicle accessories has had a positive economic impact on local communities and has increased state and local tax revenues. For example, a study on the economic impact of ATVs in West Virginia, conducted by researchers from Marshall University who were hired by the Hatfield-McCoy Trails (Hatfield-McCoy), a major trail system in southern West Virginia, found that the number of ATV-related businesses in the localities near the trail had increased, causing these businesses to hire additional employees. In addition, in Minnesota, where many ATVs are ridden and manufactured, the University of Minnesota surveyed riders about their spending and asked retailers and manufacturers to determine the economic impact of all ATV-related activities. This study found that ATVs have had a positive economic impact on the state, in terms of both retail sales and job creation. However, these study findings cannot be generalized to all trails nationwide, and the studies did not report whether the spending would have occurred for other recreational activities, if not for ATV use. We visited the Hatfield-McCoy Trails, which, according to officials at a regional transportation institute, were created and are supported by the West Virginia state legislature to increase tourism in the state. Hatfield-McCoy consists of six distinct trail loops, located in or close to cities in West Virginia, totaling more than 500 miles. According to the Hatfield-McCoy trail manager, the trail has become increasingly popular among ATV riders. He said that over 30,000 trail permits were sold in 2009, the majority of which were sold to people outside the state, compared with almost 4,000 permits sold in 2000. We spoke with hotel and restaurant owners in Gilbert, West Virginia, where one of the Hatfield-McCoy trailheads is located, who said that their businesses are directly supported by ATV riders visiting the trail. (See fig. 5.) The hotel owner told us that his hotel, which lodges up to 14 guests, is generally booked weeks in advance. He also said that finding a place to stay during the annual Trail Fest near Gilbert is particularly difficult and some of the event participants are forced to stay an hour outside of the city. The Hatfield-McCoy trail manager told us that the city of Gilbert supports the ATV trail, recognizing the importance of the trail to the city's businesses, and has passed an ordinance allowing ATV riders to use public roads to access trailheads. Owners and manufacturers report that ATVs have unique features that make them advantageous for completing certain work and chore activities. The owners' survey found that ATVs are used for activities such as hauling or towing, doing yard work, maintaining property, and other tasks. (See table 4.) Officials from a manufacturer told us that utility ATVs are ideal for these types of tasks because they are able to maneuver in all types of terrain, be fitted with a number of different accessories such as snow plows and winches, and carry about six times more weight and travel more than eight times faster than a person. An official from the National Farmers Union, a national farming organization, told us that ATVs were used on her family's farm for tasks such as carrying supplies around the farm, fixing fences, and herding cattle and livestock. One official from the National 4-H Council told us that 37 percent of its member families own ATVs for use on their farms.
An official from another user group told us that ATVs can traverse areas such as irrigation ditches that are inaccessible to other vehicles. Some farmers purchase accessories to attach to ATVs, such as sprayers that are attached to the back of an ATV to spray pesticides. (See fig. 6.) Manufacturing officials told us that ATVs are useful for a number of other occupations. For example, ATVs are used in oil production, construction workers use them for large roofing jobs, utility companies use them to maintain power lines, and lifeguards use them to patrol beaches. In addition to private-sector users, government agencies, such as the Census Bureau, the military, land agencies, and police and search-and-rescue units, use ATVs to carry out their functions. For example: A Census Bureau official said that some census workers use ATVs to deliver questionnaires to people living in remote areas in states such as Alaska and Maine. Officials from the U.S. Department of Defense told us that military personnel use ATVs for base maintenance, carrying heavy gear, and maneuvering through extreme terrain. (See fig. 7.) Officials at land management agencies, such as the Bureau of Land Management, U.S. Forest Service, and U.S. Fish and Wildlife Service, told us they use ATVs to monitor and inspect public lands. For example, a Bureau of Land Management official in Alaska told us that between 30 and 40 ATVs are used for firefighting support, transporting employees and gear, accessing remote areas, and spraying pesticides. An officer from the San Antonio Police Department told us that the department uses 12 ATVs to patrol local nature parks. (See fig. 7.) Also, an Alaska Wildlife Trooper told us that the Alaska Department of Public Safety has a fleet of 45 ATVs to monitor its lands for illegal hunting and fishing. Additionally, search-and-rescue officials use ATVs for their missions. For example, an official from the Southeast Regional Emergency Services Council in Alaska told us that first responders cannot wait for ambulances and are trained to use ATVs to transport victims on a sled attached to the back of an ATV. (See fig. 8.) She told us that Alaska has a statewide program that provides funding for about 70 ATVs equipped for emergencies. We also observed an ATV at a fire station in West Virginia that was equipped with emergency rescue equipment to help rescuers reach victims in rugged areas. (See fig. 8.) We also found that people in rural and remote areas use ATVs for transportation. Thirty-five percent of the owners surveyed in 2008 reported using their ATVs for general transportation purposes. (See table 4.) An official from the Alaska Department of Health told us that some communities in Alaska use ATVs as a primary mode of transportation. For example, residents of Kotzebue, Alaska, a community we visited, routinely use ATVs for transportation and subsistence hunting. Community representatives told us during an interview that ATVs are more convenient to use than other vehicles because the city has only two paved roads within the city limits. They told us that the only way to travel by land to other cities or villages is by using trails developed specifically for ATVs or snowmobiles. In addition, they said that an ATV is generally less expensive to operate than a car or truck because it has better gas mileage and is less expensive to maintain and ship. One Kotzebue police officer told us that residents use ATVs as their "family car," commuting to work and school and running errands. (See fig. 9.)
ATV fatalities and injuries have increased substantially since 1999, but not as rapidly as the number of four-wheeled ATVs in use, which nearly tripled. Commission staff estimated 6,556 fatalities occurred from 1999 through 2007, an average of about 700 people per year. In 2007, the most recent year for which the Commission staff estimated the number of fatalities, an estimated 816 fatalities occurred, compared with 534 in 1999, an increase of 53 percent (an average of about 6 percent per year). As shown in figure 10, the estimated number of fatalities rose steadily from 1999 through 2005. Although the estimated numbers of fatalities in 2006 (907) and 2007 (816) were less than the number in 2005 (932), Commission staff said that it is likely that the estimated number of fatalities for 2006 and 2007 will change as the agency collects more information. Hence, according to Commission staff, it is premature to make any conclusions about the estimated number of fatalities for those years. According to Commission staff estimates, from 1999 through 2005—the most recent period for which fatality estimates are complete—the risk decreased from 1.4 deaths per 10,000 four-wheeled ATVs in use to 1.1 deaths per 10,000 ATVs in use, or 21 percent. (See fig. 11.) Commission staff estimated that the risk of death per 10,000 ATVs in use was 1.0 in 2006 and 0.8 in 2007, but said those numbers will change as the agency collects more information and that it is premature to make any conclusions about the risk for those years. In addition, as noted earlier, the Commission staff estimates of the number of ATVs in use do not include data on how often the vehicles are used or how many miles they are ridden. In addition to estimating the number of fatalities, Commission staff count the number of ATV fatalities using a variety of sources, including death certificates, news reports, and information from field staff. For individual years from 1999 through 2005, the estimated number of fatalities ranged from 11 percent to 35 percent higher than the number of documented cases. However, it normally takes Commission staff 3 years to receive data on fatalities for a given year. Therefore, the number of documented deaths counted for 2006 through 2008 (832 in 2006, 699 in 2007, and 410 in 2008) are incomplete and are expected to increase for those years. (See table 6 in app. II for the documented number of fatalities.) By contrast, data on ATV injuries, presented later in this report, are current for the reporting year and are not expected to increase. As discussed earlier, in addition to estimating the number of fatalities, Commission staff count the number of documented cases, which they use for various analyses. For example, Commission staff data on documented fatalities from 1999 through 2008 show that they occurred in all states and the District of Columbia. From 1999 through 2005, the states with the most ATV fatalities were Kentucky (240), Texas (213), West Virginia (208), Pennsylvania (191), and Florida (184). (Table 8 in app. II shows the total number of fatalities in each state, the District of Columbia, and Puerto Rico from 1999 through 2005 and preliminary data for 2006 through 2008.) From 1999 through 2005, 86 percent of ATV fatalities were male and the average age was 30. The Commission staff’s preliminary data for 2006 through 2008 indicated that 85 percent of fatalities were male and the average age was 33. 
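The risk rates cited above are simple exposure-adjusted rates: the estimated number of deaths divided by the estimated number of four-wheeled ATVs in use, scaled to 10,000 vehicles. The short Python sketch below illustrates the arithmetic; the 700-deaths example is a hypothetical round number, while the percent-change calculation uses the 1.4 and 1.1 rates reported above.

    def rate_per_10000(count: float, vehicles_in_use: float) -> float:
        """Deaths (or injuries) per 10,000 four-wheeled ATVs in use."""
        return count / vehicles_in_use * 10_000

    def percent_change(old: float, new: float) -> float:
        """Percentage change from an old rate to a new rate."""
        return (new - old) / old * 100

    # Illustrative: 700 deaths among 7 million vehicles is 1.0 per 10,000.
    print(rate_per_10000(700, 7_000_000))

    # The fatality risk fell from 1.4 (1999) to 1.1 (2005) deaths per
    # 10,000 ATVs in use, a 21 percent decrease:
    print(f"{percent_change(1.4, 1.1):.0f}%")  # prints -21%

The same arithmetic applies to the injury rates discussed later in this report; for example, a decline from 193 to 129.7 emergency room-treated injuries per 10,000 ATVs in use works out to a 33 percent decrease.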
Children under the age of 16 represented about 22 percent of all documented ATV fatalities from 1999 through 2008. (See table 9 in app. II.) The annual number of fatalities involving children increased from 1999 through 2004, but declined in 2005. Commission staff counted 163 children under the age of 16 among documented fatalities in 2005, compared with 90 in 1999, an increase of 81 percent. Preliminary data collected by Commission staff for 2006 through 2008 reported 143 children among ATV fatalities in 2006, 124 in 2007, and 74 in 2008. Commission staff data indicate that children who died were mainly riding adult-sized ATVs. The data on ATV fatalities involving children that occurred in 2005 include information on the engine size for 51 percent (84 of 165) of the ATVs involved in those crashes. This information can be used to determine whether the ATVs were adult-sized or youth-sized models. Our analysis of those fatalities for which the vehicle engine size was recorded indicated that about 94 percent of the children who died in 2005 (79 of 84 children for whom the engine size was recorded) were riding adult-sized ATVs. Preliminary data collected by Commission staff for 2006 through 2008 indicated that 93 percent of children who died (175 of 189 children for whom the ATV engine size was recorded) were riding adult-sized ATVs.
However, an ATV association official said that the increased number of ATVs in use can account for the increase in injuries before 2008 and that decreased sales in 2008 likely would have had a marginal effect on injuries. Although the absolute number of injuries increased from 1999 through 2007, as the number of ATVs in use increased, the estimated risk of an emergency room-treated injury per 10,000 four-wheeled ATVs in use decreased from 193 injuries in 1999 to 129.7 injuries in 2008, or 33 percent. (See fig. 13.) According to data collected by Commission staff, about one-third of ATV-related injuries from 1999 through 2008 involved children younger than 16 years of age. Commission staff estimated that 37,700 children were treated for injuries in emergency rooms in 2008, compared with 27,700 in 1999, an increase of 36 percent, or an average of about 4 percent per year. (See table 10 in app. II for the estimated numbers of injuries treated in emergency rooms, including injuries to children; the estimated numbers of injuries that were treated in other medical settings besides emergency rooms; and the estimated numbers of injuries that were not medically treated.) According to the American Academy of Pediatrics, injuries sustained by children riding adult-sized ATVs are often very serious, including severe brain, spinal, abdominal, and complicated orthopedic injuries. Public health and medical officials said that injuries are typically bone fractures or cranial or spinal injuries. According to Commission staff, an estimated 27 percent of the injuries treated in emergency rooms in 2008 were diagnosed as contusions and abrasions, 25 percent as fractures, 16 percent as sprains or strains, and 11 percent as lacerations. In addition, Commission staff data indicated that 87 percent of the people who visited emergency rooms for treatment of ATV injuries were treated and released, while 11 percent were treated and admitted or transferred to other facilities. One nationwide study of ATV fatalities and injuries indicated that fracture of the lower limb was the most common type of injury. According to a doctor who is an official with the Orthopaedic Trauma Association, the severity of injuries depends on driving speed, terrain, riding behavior, helmet usage, and other conditions involved. This doctor said that injuries often involve multiple traumas that can require long-term treatment. He said that a broken bone in the leg or foot, for example, can cause patients to suffer from muscle weakness or arthritis, resulting in a lifetime of difficulty standing or walking. Moreover, a public health official in a state where ATVs are widely used said that the impact of brain injuries can be considerable, including lifetime medical costs that may be incurred. Safety stakeholders, including Consumer Product Safety Commission, public health, and industry officials; consumer safety advocates; and medical professionals we contacted generally said that children should not operate adult-sized ATVs because they do not have the judgment and skill to handle the power, speed, and weight of these vehicles. Instead, industry officials said that youth-sized models are more appropriate for children (except for some larger children) because these models are smaller, less powerful, slower, and lighter than adult-sized ATVs. An industry association official also emphasized that adult supervision is an important part of allowing children to operate ATVs.
However, a consumer safety advocate questioned how adult supervision can occur when the adult may be some distance from where the child is riding. A 2004 Commission staff study indicated that if children rode youth-sized ATVs rather than adult-sized models, the risk of injury might be reduced by about one-half. Some public health officials we contacted agreed that injuries involving children would be less severe if the children were riding youth-sized, rather than adult-sized, models, because less energy associated with motion would be released during collisions. An official representing a major manufacturer said that the industry is encouraging children to ride appropriately sized ATVs, but parents may be reluctant to buy youth-sized models because their children will outgrow them and the parents then will need to buy larger models. Although we found general agreement among stakeholders that children should not operate adult-sized vehicles, some consumer safety advocates and medical professionals we contacted said that children under the age of 16 also should not operate youth-sized models. For example, the American Academy of Orthopaedic Surgeons and the American Academy of Pediatrics have taken the position that children younger than 16 years of age should not operate ATVs. A doctor who has treated ATV injuries at a Midwestern children's hospital trauma center and is a member of the American Academy of Pediatrics said there is a misimpression that youth-sized ATVs are safer than adult-sized vehicles, but that they are still heavy, motorized vehicles capable of reaching high speeds. He said that the impression that youth-sized ATVs are safer means that more children will ride them, resulting in more injuries. This doctor added that by nature, children have less impulse control than adults and are prone to take increasing risks, seeking greater thrills, which can result in serious crashes. The doctor explained that crashes involving children typically occur when ATVs tip over, even at slow speeds, and that victims can suffocate because they cannot extricate themselves from underneath the heavy vehicles. However, an industry association official said that crashes involving children occur mainly when they are riding adult-sized models at higher speeds without parental supervision. A manufacturing official added that, unlike adult-sized ATVs, youth-sized models do not have headlights (to discourage nighttime riding) and have smaller engine sizes and speed governors. In addition to concerns about fatalities and injuries involving children, some safety advocates and researchers said they have become concerned about the safety of older ATV riders, as the vehicles have become faster and more powerful. A study of older riders in West Virginia indicated that older adults are at an increased risk of serious injury associated with ATV crashes because age-related changes in physical reserve and sensory limitations, pre-existing medical conditions, and related medication use may exacerbate the risk of injury. Our analysis showed a slight increase in the percentage of ATV fatality victims who were 55 years old or older during the last decade. From 1999 through 2005, about 12 percent of all fatalities were 55 years of age or older (an average of 69 people per year), compared with about 16 percent from 2006 through 2008 (an average of 103 people per year, based on preliminary data).
In addition, the number of injuries treated in emergency rooms involving people 55 and older increased from 2,400 in 1999 to 5,800 in 2008, or 142 percent. Safety stakeholders said that ATV fatalities and injuries occur for several reasons, including reckless driving, speed, alcohol use, riding with passengers, riding on paved roads, and riding on roads used by other vehicular traffic. In addition, some consumer safety advocates said that ATVs are inherently unstable because of their high center of gravity, short wheelbase, short turning radius, high-powered engines, and weight. A safety advocate also said that ATV speed governors were not frequently used and could be easily disabled by youth. Although more than one hazard pattern may be involved in ATV crashes, Commission staff data included information on what caused the crashes for 94 percent of fatalities (3,874 of 4,122) from 1999 through 2005, which indicated that they occurred when the vehicles involved collided with objects, such as other vehicles (48 percent); the vehicles tipped or flipped over (34 percent); drivers or passengers were ejected from the vehicles (11 percent); or the terrain changed (7 percent). For fatalities from 2001 through 2005, our analysis found that 41 percent of drivers had been drinking or were suspected of drinking alcohol before the crash and that 25 percent of crashes occurred when two or more passengers were riding the ATV. Our analysis of preliminary data on fatalities from 2006 through 2008 indicated that 41 percent of drivers had been drinking or were suspected of drinking alcohol before the crash and 24 percent of crashes occurred when two or more passengers were riding the ATV. For 2005, we found that in 37 percent of fatalities, the operator had been drinking alcohol before the crash and that at least one person riding on the vehicle was not wearing a helmet. Figure 14 illustrates typical fatality and injury scenarios. Officials representing manufacturers disagreed with the argument made by some safety advocates that ATVs are inherently unsafe. One official, for example, said that the design of ATVs has improved over time—the suspension is less tiring on the body and speed-limiting devices are better and harder to disable. Another manufacturing official said the vehicles' high center of gravity must be maintained to allow sufficient clearance for trail riding. However, two emergency room doctors who have treated ATV injuries told us that, although the stability of the vehicles could be improved, they are still dangerous because of the severity of injuries sustained by energy transfers that occur during crashes. Commission staff said they are studying the stability issue. Commission staff estimated that the costs of ATV crashes have doubled in the last decade. (See fig. 15.) For 2007, Commission staff estimated that the total cost of crashes was $22.3 billion ($17.7 billion in nonfatal injury costs and $4.6 billion in fatality costs), compared with $10.7 billion in 1999 ($7.7 billion in nonfatal injury costs and $3 billion in fatal injury costs), an increase of 108 percent (in 2009 dollars). (See table 11 in app. II for data.) Commission staff have not yet estimated the number and costs of fatalities in 2008, but estimated the costs of nonfatal injuries in 2008 to be $15.6 billion, compared with $17.7 billion in 2007, a 12 percent decrease, reflecting fewer injuries in 2008 than 2007.
In estimating the cost of a death, Commission staff assumed a $5 million value for a statistical life—a figure they also have used in their regulatory analyses of other consumer products. Commission staff said they developed the $5 million figure by reviewing empirical literature that estimated the value of a statistical life at between $2 million and $10 million and choosing a mid-range estimate. The $5 million estimate includes the value of work loss and pain and suffering. To develop injury cost estimates, Commission staff used data from the agency's National Electronic Injury Surveillance System—a national probability sample of hospitals in the United States and its territories—plus estimates of injuries not treated in hospitals. The injury cost estimate takes into account factors such as a victim's age, gender, and body part injured, which are used to estimate the cost of medical and legal expenses, lost work wages, and pain and suffering. The Commission staff's cost estimate of ATV fatalities was similar to that developed by public health researchers who estimated $4.5 billion in costs for 2005, compared with the Commission staff's estimate of $4.7 billion for the same year. For this study, the researchers identified ATV deaths from the National Center for Health Statistics Multiple Cause-of-Death public-access file, which draws data from death certificates, and used the National Highway Traffic Safety Administration's method of estimating the value of a statistical life. Another study indicated that the highest hospitalization costs were for spinal cord and intracranial injuries. Two emergency room doctors who have treated children who were ATV crash victims told us that these injury treatment costs are among the highest because of the severity of the injuries sustained. Safety stakeholders said that the number and severity of ATV injuries could be reduced through training and by wearing proper equipment such as helmets. A state trauma director said that parents are usually surprised when they see the results of crashes involving children because they were not aware of the dangers of ATV riding. In addition, an injury prevention official said that in many communities where ATVs are widely used, people are not aware of the connection between wearing helmets and preventing brain injuries. According to the Commission, training is important because operating an ATV seems "deceptively easy." The Commission indicated that even at relatively low speeds (20 to 30 miles per hour), ATVs can take as much skill to operate as an automobile because the operator requires (1) situational awareness to negotiate unpaved terrain with both eye-level hazards, such as trees and other ATVs, and trail-level hazards, such as ditches, rocks, and hidden holes, and (2) quick judgments relating to steering, speed, and braking as well as relating to terrain suitability, weight shifting, and other active riding behaviors. Formal, hands-on training teaches operators how the ATV responds in situations that are typically encountered. Manufacturers and distributors agree in their action plans to provide training to ATV buyers. According to Commission staff, manufacturers and distributors, through their dealers, must offer free, hands-on training to first-time purchasers and their immediate families, plus incentives valued at a minimum of $100 for taking the training.
As of February 2010, Commission staff indicated that only the training provided by the Specialty Vehicle Institute of America's ATV Safety Institute met the requirement. The Safety Institute's course, which takes about 4 to 5½ hours, encompasses safe riding practices, such as how to operate ATVs on hills and on various types of terrain; the importance of wearing protective gear; and the hazards of improperly operating the vehicles. In addition, the 4-H youth organization awards grants to local communities and schools to promote ATV safety for children, including hands-on training. The National Off-Highway Vehicle Conservation Council, an organization that promotes the recreational use of off-highway vehicles, also offers an adventure trail activity book and interactive compact disk that provide children with information on safe riding practices. Other organizations also conduct educational campaigns to publicize ATV risks and encourage safe riding practices. The American Academy of Orthopaedic Surgeons and the Orthopaedic Trauma Association, for example, have conducted a public service advertising campaign since 2007 consisting of advertisements displayed in newspapers and airports, warning people not to think of ATVs as toys and encouraging them to ride safely. (See fig. 16.) Organization members have also participated in media tours to promote the campaign and raise safety awareness. State governments are also involved in promoting ATV safety. Transportation officials in West Virginia, for example, make 30- to 45-minute presentations to schools and community groups, emphasizing safety practices such as wearing a helmet, as part of a safety awareness program established in 2005. Under state law, all ATV riders under the age of 18 in West Virginia must wear a helmet and satisfactorily complete a rider safety awareness course approved by the Commissioner of Motor Vehicles. A state highway safety official said that to receive certification, children must watch a 10-minute video, and children as young as 5 years old have received certification. Commission staff also have taken steps to promote safety awareness by airing television and radio public service announcements in areas where crashes occurred, creating a Web site (www.ATVSafety.gov), and partnering with organizations to promote safety. However, despite safety educational and training programs that focus on the importance of practices such as wearing helmets, rider helmet usage appears to be low. Our analysis of ATV fatalities that occurred during 2005 indicated that 17 percent of the victims were wearing helmets and at least 83 percent were not wearing them. Similarly, preliminary data on ATV fatalities from 2006 through 2008 indicated that 18 percent of the victims were wearing helmets and at least 82 percent were not wearing them. A consumer safety advocate said that extensive, multiyear educational efforts on ATV safety are needed to make a positive impact, similar to those efforts used to educate people about the safety benefits of using seat belts in automobiles. The consumer safety advocate said that advertising campaigns promoting ATV safety should be expanded and cited the success that public service announcements have had in reducing smoking, but said the effort should be coupled with enforcement. Similarly, a state public health official told us that injury prevention and safety programs are not effective alone and must be part of a package that includes laws with enforcement.
West Virginia’s Hatfield-McCoy Trails, for example, employs law enforcement personnel to enforce riding rules such as wearing helmets, maintaining reasonable speeds, not consuming alcohol, and not allowing children to ride adult-sized ATVs. Trails officials said these rules have been effective in preventing crashes on the trails. They said that there have been 3 ATV fatalities on the trails since 2000, compared with 50 to 60 fatalities that have occurred per year throughout the state, including crashes that occurred on private property where unsafe riding practices may not be prohibited and where many ATV crashes occur. Because much ATV riding occurs on federal lands, the Consumer Federation of America, a consumer safety advocacy group, has recommended that guidelines for federal lands be developed prohibiting children from riding adult-sized ATVs; requiring the use of helmets; and banning riding with passengers, on paved roads, and at night. The American Academy of Pediatrics, citing research on the effectiveness of helmet usage by motorcycle and bicycle riders, has also recommended that the federal government require ATV riders on public lands to wear helmets. Laws addressing the behavioral aspects of ATV use, such as helmet usage and training, are generally found at the state level and vary greatly. (See table 5.) Some state laws apply only to riding on public lands. The Specialty Vehicle Institute of America has developed model state legislation calling for, among other things, hands-on training for operators, that children under age 16 be under continuous adult supervision while operating an ATV on public land, and restrictions on the sale of adult-sized ATVs for use by children. To sell ATVs in the United States, the Consumer Product Safety Improvement Act requires manufacturers and distributors to file “ATV action plans,” which must be approved by the Commission, and to comply with all provisions of the plans and the ATV industry standard. The mandatory industry standard also contains provisions pertaining to the use of ATVs by children, such as requiring manufacturers and distributors to affix warning labels on ATVs about preventing crashes and identifying the appropriate (minimum recommended) age for vehicle operators. (See fig. 17.) Sales of ATVs for use by children are covered by provisions in the plans under which manufacturers and distributors agree to (1) refrain from recommending, marketing, or selling adult-sized ATVs for use by children younger than 16 years old and (2) monitor their dealers to check, using independent investigators, whether dealers are willing to sell adult- sized ATVs for use by children and take action against dealers who disregard the standard’s operator age recommendations. We were told that many manufacturers and distributors use dealership agreements to obligate dealers to comply with practices such as not selling adult-sized ATVs for use by children. Furthermore, some manufacturers require their dealers to have their customers sign statements at the time of sale indicating that they have been informed and understand that children should not operate adult-sized ATVs. A Commission compliance official said that the Commission has the authority to enforce the terms of ATV action plans. However, there are several weaknesses in the Commission’s ability to enforce provisions of the plans aimed at preventing the sale of adult-sized ATVs for use by children. 
First, the Commission does not have direct recourse against ATV retailers (dealers) under the action plans. The Consumer Product Safety Improvement Act applies directly only to the manufacturers and distributors, who are responsible for ensuring that their dealers are not selling adult-sized ATVs for use by children. Although many manufacturers and distributors use dealership agreements to obligate dealers not to sell adult-sized ATVs for use by children, the Commission does not routinely require these agreements to be made available for its inspection. In addition, these agreements are typically governed by state law, which the staff indicated can make enforcement difficult because a number of manufacturers and distributors specify in their action plans that they will follow their plans only to the extent allowed under state law. Moreover, details about how manufacturers and distributors will use their "best efforts" to prevent the sale of adult-sized ATVs by their dealers vary from company to company. Another problem with enforcing the age recommendations is that many retailers, such as sporting goods stores, sell a variety of ATV brands, and even if one manufacturer discontinues its arrangement with that seller, the retailer can still sell several other brands. Finally, most of the action plans are not publicly available, precluding public examination of companies' monitoring plans. Although we found a broad consensus that children who operate adult-sized ATVs are at significant risk of serious injury or death, the ages specified in the industry standard for operating ATVs of various sizes are only recommended ages, which are reflected in warning labels and hang tags. Adult-sized ATVs are designed for use by persons who are 16 years of age or older, and are placarded as such. However, some 15-year-old children are physically and mentally more mature than typical 16-year-olds; conversely, some 16-year-olds are not as physically or mentally mature as typical 15-year-olds. The fact that adult-sized ATVs are designed for persons who are at least 16 years old does not preclude their safe operation by some 15-year-olds. Because the age provisions in the ATV standard are recommendations, there are no specific age requirements that the Commission could enforce. In addition, the Commission's major focus is on ensuring the safety of consumer products when used as they are designed. Some products are not safe for children; some require a higher level of mental and physical maturity than young children possess if they are to be operated safely. These differences are taken into account in determining whether a product is safe for use by those who are its intended users. Although the Commission can sometimes act to discourage unsafe use of products, it has limited ability to prevent product misuse by purchasers, such as misuse of adult-sized ATVs by children whose parents disregard the placards, training, and other safety warnings designed to put them on notice that operation of adult-sized ATVs by children can be unsafe. Industry officials said they are taking actions to prevent the sale of adult-sized ATVs for use by children, and Commission staff said they have taken steps to ensure compliance. During our discussions with three major manufacturers, officials said the companies were monitoring their dealers and had taken corrective action when dealers were found to have disregarded the minimum age recommendations.
Actions taken were said to include financial penalties, attempted termination of dealership agreements, and termination of dealership agreements. Since 1998, Commission staff have conducted undercover inspections of ATV dealers, by posing as buyers, to check compliance with the age recommendations. Nevertheless, compliance rates of the ATV dealers that Commission staff checked decreased from 85 percent in 1999 to 63 percent in 2007. (See table 13 in app. IV for annual compliance rates.) A Commission compliance official said no undercover inspections of dealers had been conducted since early 2008 because Commission staff were focused on preparing to implement the Consumer Product Safety Improvement Act, but that inspections will be resumed in the future. According to Commission staff, the agency expects manufacturers and distributors to conduct at least 50 undercover checks of their dealers each year. Manufacturers and distributors with recently approved ATV action plans are also required to check each of their dealers at least twice a year. Commission staff indicated that if a dealer is found to have committed more than one violation, manufacturers and distributors should initiate termination of their agreements with the dealer. [Sidebar: Salespersons at some of the dealerships we checked were willing to sell adult-sized ATVs for use by children if the customer returned to the store the following week and said the vehicles were for the adult. A salesperson for a fourth dealership initially said that he could not sell an adult-sized ATV for use by a child, but later asked the customer how much he was willing to spend and then recommended an adult-sized vehicle.] Because Commission staff had not conducted any undercover inspections of dealers since 2008 and because of the number of new entrants in the marketplace that had not been checked (as of February 2010, 37 companies had ATV action plans authorizing them to sell ATVs in the United States, compared with 8 companies in 2008), we conducted undercover checks of selected dealers to determine whether dealers were willing to sell adult-sized ATVs for use by children under the age of 16. In our undercover checks, we selected a variety of dealers who were selling ATVs both in stores and through the Internet, including those selling ATVs manufactured by new market entrants, focusing on dealers located in some states with the highest numbers of ATV fatalities involving children. We checked some retailers that exclusively sold a single brand as well as other retailers, such as sporting goods stores, that sold a variety of brands. We followed the same protocol that Commission staff had used in their undercover dealer checks and indicated to sales staff that we were seeking to purchase an ATV for a child who was 12 or 13 years old. We found that most of the dealers we visited (7 of 10) were willing to sell adult-sized ATVs for use by children. In addition to visiting 10 dealers’ stores, we sent e-mails to 6 dealers indicating that we were seeking to purchase an adult-sized ATV for a child who was 13 or 14 years old. One of these dealers responded to the e-mail by recommending an adult-sized ATV; a salesperson from another dealer said that she liked an adult-sized model, but did not explicitly recommend it for the child; and the other dealers did not respond. The dealers who were willing to sell adult-sized ATVs for use by children included retailers that sold ATVs made by the traditional manufacturers and new market entrants as well as those that sold a single brand and a variety of brands.
In some cases, sales staff subtly and in other cases blatantly admitted that they should not be selling adult-sized ATVs for use by a 13-year-old, but would do so anyway. In addition, one dealer we visited was selling ATVs manufactured by a company without an ATV action plan. (See the sidebar above for examples.) During our review, stakeholders raised additional issues involving ATV safety that Commission staff could explore in carrying out the agency’s oversight responsibilities. For example, officials from three major manufacturers said their companies are no longer selling ATVs designed for children 12 years of age and younger because of restrictions contained in the Consumer Product Safety Improvement Act on manufacturing children’s products containing lead. One company official said, for example, that there is no evidence that the small lead content in ATV components presents, or has ever presented, any risk to child operators, and that having fewer youth-sized ATVs on the market may result in more children riding adult-sized ATVs, which could result in more crashes. The Commission has granted a temporary stay of enforcement of lead content limits for certain metal components of youth-sized ATVs to ensure that such models remain available, given what it called the “mortal danger” presented when children 12 years of age and younger use adult-sized ATVs. However, an official from one manufacturer that has stopped making ATVs for children 12 years of age and younger said that complying with the lead requirements under the stay of enforcement is too burdensome in terms of testing and reporting. A Commission compliance official acknowledged that complying with the requirements under the stay of enforcement is burdensome, but said that the agency needs an adequate justification and record of support to provide the stay. An official from another manufacturer that is still making ATVs for children 12 years of age and younger said the company is meeting its obligations under the stay by finding lead-compliant parts and, in some cases, redesigning vehicles to make some lead-containing parts inaccessible, but is uncertain whether youth-model ATVs that meet the Commission’s interpretation of the law will be available when the stay expires in May 2011. Also during our review, stakeholders expressed various opinions on how youth-sized and adult-sized ATVs should be classified. Some safety advocates objected to the ATV standard’s classification of youth-sized models by speed, rather than engine size. One consumer advocate, for example, said that engine size is a better measure because it encompasses power, vehicle size, and speed, compared with the single dimension of speed, and that evidence is lacking that children can handle ATVs at the speeds that the standard allows. Another consumer advocate said that the standard does not limit ATV size in classifying youth models, which she said is a factor in roll-over incidents. Moreover, a trail manager said that children’s height and weight should be considered as well as age in determining appropriate model sizes for youth. However, a manufacturing official said that speed is a better classification measure because engine size does not necessarily determine the top speed. A Commission official also said an ATV that fits a child’s size may go faster than someone his or her age can handle.
Furthermore, in 2006, when the Commission proposed to categorize ATVs by engine size rather than speed, it indicated that categorizing adult-sized ATVs on the basis of engine size restricted the vehicles’ design and that engine size does not necessarily limit vehicle size or regulate maximum unrestricted speed. Some safety advocates also said that the size, power, and weight of ATVs are increasing, which they believe has led to more crashes and more severe injuries. An industry official also said that ATVs are becoming larger and more powerful. However, Commission staff said they did not have information that would document whether the size, power, and weight of ATVs have increased and, if so, whether those changes had increased the number or severity of injuries. An official from one manufacturer said the power of sport ATVs and the weight and power of utility ATVs have increased, but that the injury rate has dropped while sales have increased during the past 5 years. According to Commission staff, a 2001 injury and exposure study sponsored by the ATV industry in consultation with Commission staff showed large increases between 1997 and 2001 in the percentage of ATVs in use with engine sizes of 400 cubic centimeters or more and in the risk associated with that engine size class. Commission staff added that the market has changed since 2001 and that ATVs with both larger and smaller engine sizes are now available on the market, but they have not studied the impact of this market change on the risk of injury or updated the 2001 study. Commission staff also said that studying the relationship between the physical characteristics of ATVs on the market and the risk of injury could be explored in any future rulemaking. However, they said that given the time and resources needed to conduct such a study, it would be important to first establish whether such a study would be useful and necessary for providing information about trends in injuries and fatalities and would help address the hazards associated with riding ATVs. In addition, some industry officials and a consumer safety advocate said there is a risk of unsafe ATVs being imported into the United States. For example, in May 2009, the Commission recalled an ATV made by a Chinese company that did not have an approved action plan and sold for between $250 and $350. An industry association official also said that it is possible for a company to have an approved action plan but not comply with the ATV standard. A Commission compliance official said that the agency currently is focusing on enforcing the action plans, rather than the standard, and that the agency plans to increase testing of ATVs when it opens a new testing facility next year. In addition, Commission staff said they are addressing unsafe imports by providing lists of manufacturers and distributors with approved action plans to U.S. Customs and Border Protection so that unapproved ATVs will not be imported into the United States. A Commission compliance official, for example, said that in October 2009, U.S. Customs and Border Protection seized a container of Chinese-made ATVs at the port of Houston that was being shipped from a manufacturer that did not have an approved action plan and therefore was prohibited from importing its products into the United States. Moreover, a Commission compliance official noted that the Consumer Product Safety Improvement Act gives the agency enhanced authorities for overseeing ATV safety.
This official said, for example, that before the Consumer Product Safety Improvement Act was enacted, the Commission was required to determine that unsafe products were defective and a substantial product hazard, which is a longer and more difficult process than stopping products from being imported that are unapproved or do not meet mandatory product safety standards. This official also said that it is now easier for the Commission to levy civil penalties against ATV manufacturers and distributors for violating the act, but that the agency has not yet done so. For example, Commission staff said that if manufacturers and distributors violate their action plans, they are subject to civil penalties of $100,000 per violation, up to $15 million, and criminal penalties of up to 5 years in prison. In addition, to help educate manufacturers about action plan and manufacturing requirements, Commission staff have conducted outreach efforts through public meetings and the Commission’s Web site, and recently conducted a Chinese-language Webinar for Chinese manufacturers. The effect of possible recent increases in ATV size, power, and weight—which safety advocates and medical professionals say are factors in the amount of energy released in collisions—on the frequency and severity of injuries is unknown. It is possible that any increases in size, power, and weight are too recent to be reflected in the latest fatality and injury data, which are still being collected. Determining whether such a relationship exists could help guide the Commission’s future rulemaking on ATV safety. Fatalities and injuries involving children have been a significant problem over the last decade, with children accounting for about one-fifth of fatalities and one-third of injuries. The Commission staff’s previous undercover checks to determine whether dealers were willing to sell adult-sized ATVs for use by children, as well as our recent checks, indicate that noncompliance is a persistent problem. Although it may be difficult for the government, at the local, state, or federal level, to determine whether children are riding adult-sized ATVs, especially on private property, manufacturers and distributors have agreed in their action plans that they will not market, advertise, or sell adult-sized ATVs for use by children and will monitor their dealers’ sales practices. Commission staff could assess whether manufacturers and distributors are adequately monitoring their dealers by resuming their undercover checks of dealers and targeting new market entrants that have not yet been checked. Given that a substantial number of dealers that the Commission checked and the majority of dealers that we checked were willing to sell adult-sized ATVs for use by children, the Commission’s approach to preventing such sales appears to be relatively ineffective. Allowing the manufacturers and distributors to use nonspecific and conditional language in their action plans to describe how they will enforce dealer compliance with the age recommendations; not requiring manufacturers and distributors to make dealership agreements available for the Commission’s inspection; and not making all of the action plans publicly available weaken the Commission’s ability to enforce provisions of the action plans aimed at preventing the sale of adult-sized ATVs for use by children. Addressing these problems could help prevent the sale of adult-sized ATVs for use by children.
To enhance the Consumer Product Safety Commission’s oversight of ATV safety, we recommend that the Commission take the following three actions: First, when sufficient data are available, assess whether the size, power, and weight of ATVs have increased in recent years and, if so, whether and how those increases correlate with the severity of injuries. Commission staff should consider the results of this assessment in the agency’s future rulemaking on ATV safety issues. Second, resume undercover checks of ATV dealers, focusing on new market entrants, which have not been checked, to assess dealers’ willingness to sell adult-sized ATVs for use by children. Third, consider how the Commission’s enforcement of the age recommendations can be strengthened and act accordingly. Options could include, but are not limited to, requiring ATV manufacturers and distributors to (1) provide more specific language about how they will enforce their dealers’ compliance with the age recommendations and (2) make dealership agreements with dealers available for Commission staff to inspect how the agreements address the age recommendations. In addition, the Commission could consider making all of the action plans publicly available. We requested comments on a draft of this report from the Chairman of the Consumer Product Safety Commission. In response, a Commission official said that the report presented the information in a clear and well-organized manner and that the Commission accepts our recommendations. Commission staff provided some technical comments and clarifications that we incorporated. We are sending copies of this report to congressional subcommittees with responsibilities for consumer product safety; the Director, Office of Management and Budget; and the Chairman of the Consumer Product Safety Commission. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. For background information on all-terrain vehicles (ATV), we reviewed descriptive information about the vehicles, including the sales price, engine size, and weight of various models. In addition, we reviewed the regulatory and legislative history concerning ATVs, including the Consumer Product Safety Commission’s (Commission) regulatory authorities and safety oversight efforts; relevant provisions of the Consumer Product Safety Improvement Act of 2008; and the 1988 consent decree between the government and ATV manufacturers. To describe recent changes in the ATV marketplace, we obtained from Power Products Marketing, a research and consulting firm, U.S. sales data from 2004 through 2008, broken down by sales of youth models and sales made by traditional and nontraditional (Chinese and Taiwanese) manufacturers. We also obtained information from Commission staff on the number of manufacturers and distributors that may legally sell their products in the United States and estimates of the number of ATVs in use from 1999 through 2008. For demographic information on ATV owners, we reviewed the results of a nationwide survey that the Motorcycle Industry Council conducted with the Specialty Vehicle Institute of America in 2008.
We discussed with officials from Power Products Marketing, Commission staff, and the Specialty Vehicle Institute of America how their data were collected and determined that the data were sufficiently reliable for our purposes. To identify how ATVs are used and the advantages of their use, we interviewed industry officials, manufacturers, a dealer, and users, including advocacy groups, trail managers, and industry groups and government agencies that use ATVs at work. We also analyzed the results of the Motorcycle Industry Council’s 2008 survey of ATV owners to determine the extent to which these vehicles are used for recreation and work and the reported advantages of ATV riding. We reviewed the methodology used to conduct the survey and determined that the process was sufficiently reliable for our purposes. In addition, we reviewed studies conducted during the last 5 years on the uses of ATVs and the economic impact of riding on areas surrounding ATV trails. Lastly, we visited Alaska and West Virginia because ATVs are used widely in these states. To document how ATVs are used for daily transportation, we visited a remote community in Alaska and spoke with its community leaders. In West Virginia, we visited a major ATV trail system and spoke with community business owners to obtain information on the economic impact of trail riding. To identify the nature, extent, and costs of fatalities and injuries, we reviewed studies conducted during the last 5 years. In addition, we reviewed ATV fatality and injury data that Commission staff collected for 1999 through 2008, assessed their quality, and identified trends, including trends involving children, engine size, and helmet usage. We discussed with Commission staff how they collected the fatality and injury data and estimated the numbers of fatalities and injuries, and determined that these data were sufficiently reliable for our purposes. Furthermore, we interviewed industry officials, user groups, consumer safety advocates, public health officials, and medical professionals about the causes of ATV fatalities and injuries and efforts to prevent them, including training and education. In addition, during our visits to Alaska and West Virginia, we discussed crash and injury prevention efforts with ATV users and dealers, public health officials, and other state government officials. Moreover, because Commission staff had not conducted any undercover inspections of dealers since 2008 and because the number of new entrants in the marketplace has recently increased, we conducted undercover checks at 10 dealers’ stores in 4 states and attempted to contact 6 dealers by e-mail to assess their willingness to sell adult-sized ATVs for use by children. For our undercover checks, we selected a variety of dealers selling ATVs both in stores and through the Internet, focusing on those selling ATVs manufactured by new market entrants and dealers located in states with the highest numbers of ATV fatalities involving children. We used the same protocol that Commission staff had used in their undercover checks. This appendix presents various ATV fatality and injury statistics and estimated associated crash costs. This appendix provides information on the major provisions of state ATV laws as reported by the Specialty Vehicle Institute of America. However, in many cases, the requirements apply only in narrow circumstances, particularly with respect to requiring motor vehicle licenses to drive ATVs.
See the table notes for more information about the types of exemptions allowed under these laws. In addition to the individual named above, James Ratzenberger (Assistant Director), Anne Dilger, Grant Fleming, Virginia Flores, Tim Guinane, Matthew Harris, Kenneth Hill, Bob Homan, Bert Japikse, Tara Jayant, Sara Ann Moessbauer, Josh Ormond, and Dae Park made significant contributions to this report.

All-terrain vehicles (ATV), which are off-road motorized vehicles, usually with four tires, a straddle seat for the operator, and handlebars for steering control, have become increasingly popular. However, ATV fatalities and injuries have increased over the last decade and are a matter of concern to the Consumer Product Safety Commission (Commission), which oversees ATV safety, and to others. Many ATV crashes involving children occur when they are riding adult-sized ATVs. Manufacturers and distributors have agreed to use their best efforts to prevent their dealers from selling adult-sized ATVs for use by children under the age of 16. The Consumer Product Safety Improvement Act requires GAO to report on (1) how ATVs are used and the advantages of their use and (2) the nature, extent, and costs of ATV crashes. GAO addressed these topics by reviewing ATV use and crash data and by discussing these issues with Commission staff, industry officials, user groups, and safety stakeholders. ATVs are mainly used for recreation, but are also used in occupations such as farming and policing. According to a 2008 industry survey of ATV owners, 79 percent use them for recreation and 21 percent use them for work or chores. ATVs are also used as primary transportation in some remote communities, such as in parts of Alaska. GAO found little information that quantified the advantages of ATV use. However, users surveyed in 2008 said that riding provides them with personal enjoyment, allowing them, for example, to view nature and spend time with their families. In addition, trail managers and local business officials in areas of the country where trails have been established, such as West Virginia, said the surrounding communities have benefited economically from spending by ATV riders. Injuries and fatalities increased substantially during the last decade, but not as rapidly as the number of ATVs in use, which nearly tripled. According to Commission staff, an estimated 816 fatalities occurred in 2007—the agency's most recent annual estimate—compared with 534 in 1999, a 53 percent increase. However, from 1999 through 2005—the most recent period for which fatality estimates are complete—the risk decreased from 1.4 deaths per 10,000 four-wheeled ATVs in use to 1.1 deaths per 10,000 ATVs in use, or 21 percent. Regarding injuries, an estimated 134,900 people were treated in emergency rooms for ATV-related injuries in 2008, compared with about 81,800 in 1999, a 65 percent increase. However, the estimated risk of an emergency room-treated injury decreased from 193 injuries per 10,000 four-wheeled ATVs in use in 1999 to 129.7 in 2008, or 33 percent. About one-fifth of the deaths and about one-third of the injuries involved children. Crashes involving children frequently occurred when they rode adult-sized ATVs, which are more difficult for them to handle.
Manufacturers and distributors have agreed to use their best efforts to prevent their dealers from selling adult-sized ATVs for use by children, but recent GAO undercover checks of selected dealers in four states indicated that 7 of 10 were willing to sell an adult-sized ATV for use by children. Commission staff suspended similar checks in early 2008 because of higher priorities. Commission staff have estimated that the costs of ATV injuries and fatalities more than doubled during the last decade from about $10.7 billion in 1999 to $22.3 billion in 2007 (in 2009 dollars). Safety stakeholders, including industry officials, said that ATV injuries could be reduced through training and wearing proper equipment such as helmets. |
Unsecured cargo or other debris falling from a moving vehicle can pose a serious hazard to other motorists and can lead to property damage, injuries, or fatalities (see fig. 1). Examples of unsecured-load debris that often ends up on roadways include objects such as mattresses or box springs, ladders, and furniture items. NHTSA’s mission is to prevent motor vehicle crashes and reduce injuries, fatalities, and economic losses associated with these crashes. To carry out this mission, the agency conducts a range of activities, including setting vehicle safety standards; conducting research on a variety of safety issues; administering grant programs authorized by Congress; providing guidance and other assistance to states to help them address key safety issues, such as drunken driving and distracted driving; and collecting and analyzing data on crashes. NHTSA analyzes crash data to determine the extent of a problem and to determine what steps it should take to develop countermeasures. Regarding unsecured loads, NHTSA collects some data on whether a crash involved an unsecured load. Determining the number of crashes involving unsecured loads can be a challenge because data are limited. NHTSA does track the number of crashes involving road debris. However, as mentioned previously, these data include all types of road debris, including debris resulting from human error (e.g., an unsecured load) and debris from natural elements (e.g., a fallen tree branch). Based on available NHTSA data, such crashes comprise a small percentage of total police-reported crashes. For example, in 2010, out of a total of about 5,419,000 crashes, about 1 percent—51,000 crashes—involved a vehicle striking an object that came off another vehicle or a non-fixed object lying in the roadway. Of these 51,000 crashes, there were almost 10,000 people injured and 440 fatalities—about 1 percent of the total number of fatalities from motor vehicle crashes in that year (32,855). States determine what laws, if any, to apply to non-commercial vehicles carrying unsecured loads and whether to develop prevention programs geared toward reducing crashes of non-commercial vehicles carrying unsecured loads. State and local law enforcement agencies are responsible for enforcing these laws. While NHTSA currently collects limited information on crashes involving unsecured loads, the agency intends to make changes to its data systems to follow Congress’s direction to distinguish road obstructions resulting from human error from those involving natural elements. NHTSA’s changes to its data systems will allow the agency to better track crashes involving unsecured loads, but NHTSA will still face challenges with collecting this information because the determination as to whether a crash involved an unsecured load is made by state law enforcement officials and can be difficult to make. Further, there are some limitations with respect to the state data collected in police crash reports, and data improvements will take time to implement. NHTSA collects data on crashes and fatalities that may involve both commercial and non-commercial vehicles carrying unsecured loads in two data systems—FARS and NASS GES. (See table 1.) The FARS provides a census of police-reported traffic crashes nationwide in which at least one fatality occurred. The NASS GES provides national estimates of crash statistics based on a sample of police-reported crashes.
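As a back-of-the-envelope check on these proportions (a verification of the rounding using the figures above, not an additional NHTSA estimate):

\[
\frac{51{,}000}{5{,}419{,}000} \approx 0.94\% \qquad \text{and} \qquad \frac{440}{32{,}855} \approx 1.34\%,
\]

both of which round to the "about 1 percent" figures cited.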
For both data systems, police crash reports, which are unique to each state, are a key source of data. NHTSA gathers this information from states and recodes it into a uniform format. Currently, there are three data categories in these systems that track data on crashes involving road debris. However, as noted previously, these data categories do not currently distinguish between different types of roadway debris (i.e., debris resulting from natural/environmental sources versus debris resulting from human error). As a result, NHTSA cannot currently identify how many crashes involve vehicles carrying unsecured loads. In response to the congressional direction to improve its data on unsecured-load crashes, NHTSA officials stated that they are currently making changes to the FARS and the NASS GES to collect better information and better track crashes involving unsecured loads. Specifically, NHTSA has developed changes to both systems to (1) revise two existing data categories on road debris and (2) add two new data categories. The revised and new categories will provide more specific information on unsecured-load crashes. (See appendix III for current FARS and NASS GES data category definitions and planned 2013 changes.) For example, NHTSA will now be able to distinguish between the following two types of crash scenarios that involve an object being set in motion by one vehicle and striking another vehicle, a person, or property, causing injury or damage: (1) cargo, such as a mattress, being transported by one motor vehicle becomes dislodged and strikes another vehicle, a person, or property; and (2) an object in the road, such as a tree branch, is struck by a motor vehicle and then strikes another vehicle, a person, or property. NHTSA will also be able to distinguish between two types of crash scenarios that involve a vehicle striking an object already in the road (without the object striking another vehicle, a person, or property): (1) a motor vehicle strikes a non-fixed object already at rest in the roadway, such as a mattress, and the object is known to have been cargo from an unsecured load; and (2) a motor vehicle strikes a non-fixed object already at rest in the roadway, such as a tree branch, and the object is known not to have come from a motor vehicle, or it is unknown whether it came from a motor vehicle. NHTSA officials stated that they intend to analyze these data in the future to determine whether actions are needed to address this problem. They explained that in deciding when to take actions regarding a traffic safety issue, NHTSA first tries to determine the extent of the problem by looking at counts or trends. The agency then may conduct research to better understand the problem and work toward developing countermeasures. According to NHTSA officials, these changes will be effective in the FARS and NASS GES during the 2013 data collection year, which begins January 2013. To implement these changes, NHTSA plans to develop a 2013 coding manual between mid-August 2012 and December 5, 2012, and develop data-entry specifications by November 2012. NHTSA officials stated that they plan to train FARS analysts at the state level and NASS GES data coders on how to use the new and revised data elements in early December 2012. Public users will first have access to the 2013 data in 2014 after data collection and quality control checks are completed.
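To make the four-way distinction above concrete, the following is a minimal sketch of the classification logic, written in Python. It is illustrative only: the enum labels, field names, and function are hypothetical stand-ins, not NHTSA's actual FARS or NASS GES coding schema.

```python
from dataclasses import dataclass
from enum import Enum, auto

class DebrisCategory(Enum):
    """Hypothetical labels for the four 2013 scenarios described above."""
    SET_IN_MOTION_CARGO = auto()   # Scenario A: dislodged cargo/parts strike a vehicle, person, or property
    SET_IN_MOTION_OTHER = auto()   # Scenario B: a non-cargo (or unknown-origin) object is set in motion
    AT_REST_CARGO = auto()         # Scenario D: vehicle strikes a resting object known to be lost cargo
    AT_REST_OTHER = auto()         # Scenario C: vehicle strikes a resting object not known to be cargo

@dataclass
class DebrisCrash:
    object_was_set_in_motion: bool  # did the object strike something after being set in motion?
    known_cargo_or_part: bool       # did the officer establish it came from an in-transport motor vehicle?

def classify(crash: DebrisCrash) -> DebrisCategory:
    # Mirrors the definitions quoted above: "unknown origin" is folded into
    # the non-cargo buckets, so only positively identified cargo counts.
    if crash.object_was_set_in_motion:
        return (DebrisCategory.SET_IN_MOTION_CARGO if crash.known_cargo_or_part
                else DebrisCategory.SET_IN_MOTION_OTHER)
    return (DebrisCategory.AT_REST_CARGO if crash.known_cargo_or_part
            else DebrisCategory.AT_REST_OTHER)

# Example: a mattress falls from one vehicle and strikes another (scenario A).
print(classify(DebrisCrash(object_was_set_in_motion=True, known_cargo_or_part=True)))
```

The sketch also makes the data-quality challenge discussed below visible: the value of known_cargo_or_part depends entirely on what the reporting officer can establish at the scene, which is why the new categories can only be as precise as the underlying police reports.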
While NHTSA’s changes to the FARS and NASS GES data systems will allow the agency to better track crashes involving unsecured loads, it still faces challenges collecting data on these crashes. Two primary factors affect NHTSA’s ability to collect this information: (1) law enforcement officials face difficulties in determining whether a crash involved an unsecured load and (2) states do not collect uniform data on unsecured loads in their police crash reports. Even with the changes that NHTSA is making in its data collection processes and procedures, the resulting data will be imprecise because they rely on state reporting of crashes, and, as NHTSA has acknowledged, data improvements will take time to implement. NHTSA officials stated that they will make every effort to capture the data available in the source documents to provide the most accurate assessment of this safety issue. Even though NHTSA is improving its data systems, determining whether a crash is a result of an unsecured load will remain a challenge. Several law enforcement officials we spoke with indicated that classifying a crash involving an unsecured load is difficult in some cases because it is unclear whether the object on the road was the result of an unsecured load or another factor. One law enforcement official explained that if an object falls from a moving vehicle and immediately hits a vehicle or a person, the crash is generally classified as an unsecured-load crash. However, if an object falls from a moving vehicle onto the road and remains on the road for some time before another vehicle subsequently strikes the object, then the crash will generally not be classified as an unsecured-load crash unless there is a witness available to report that the object originally fell off another vehicle (see fig. 2). The official explained that identifying the first incident as an unsecured-load crash is generally easier because of a higher likelihood of witnesses at the scene who saw the crash occur and saw the unsecured load fall from the vehicle. In the second scenario, where debris remains on the road for some time, there may be no information to explain how the object on the road ended up there. According to this official, it is up to the reporting officer to determine how to classify or describe the crash in the police report. Under NHTSA’s planned data system changes, the agency will be able to identify in its data systems crashes that involve unsecured loads if all pertinent information is available to the reporting officer. However, if the incident is not identified by the reporting officer as an unsecured-load crash in the first place, it may not be flagged as such in NHTSA’s data systems. NHTSA officials acknowledged that it can be difficult in some cases to determine if something in the road fell off a vehicle if there is no evidence available. States do not uniformly define and report data on unsecured loads in police crash reports. NHTSA uses information from police crash reports to determine whether a crash is an unsecured-load incident or another type of incident. Some state crash reports contain a field where officers can check off a box indicating whether an unsecured load was a contributing factor in a crash, while others rely on the officer to explain in the narrative section of the report whether the incident involved an unsecured load or another factor. NHTSA uses information from both sections of the report in developing its data.
However, in some cases, information about whether a crash involved an unsecured load may not be included in the narrative portion of the police reports. According to NHTSA officials, reports on fatal crashes are more likely to have this information; however, the level of information that is included in the narrative report could vary from officer to officer. If a police crash report does not contain information indicating that a crash involved an unsecured load, then NHTSA cannot classify the crash as such. On a voluntary basis, most states have begun collecting a similar minimum core of information in their police crash reports. These core elements are outlined in the Model Minimum Uniform Crash Criteria (MMUCC), voluntary guidelines for the implementation of uniform crash data elements. According to NHTSA officials, most states follow these guidelines to varying degrees. One avenue for ensuring that all states collect consistent information on unsecured loads in their police crash reports would be to include unsecured-load data as a core data element in the next edition of the guidelines. NHTSA does not have independent authority to seek changes in state police reports; however, NHTSA officials stated that they will likely recommend changes to the MMUCC guidelines. In order for a new data element to be added, it must be approved by the MMUCC Expert Panel, which includes representatives from NHTSA, FMCSA, the Federal Highway Administration, the National Transportation Safety Board, the Governors Highway Safety Association, the Insurance Institute for Highway Safety, Ford Motor Company, Emergency Medical System agencies, and local and state police agencies. Recommended changes to the guidelines can be submitted by any agency represented on the MMUCC Expert Panel. Any changes to the guidelines cannot be made for quite some time, as MMUCC operates on a 5-year cycle. MMUCC released its revised guidelines in July 2012, and the next update is not expected until 2017. NHTSA officials explained that they would be unable to recommend changes to the guidelines until 2016, when MMUCC begins the process of updating the guidelines. If changes are made to the guidelines, these changes would not go into effect until after 2017. NHTSA officials also noted that making changes to police crash reports in response to changes in the guidelines can take from 12 to 18 months. Some police agencies now use electronic police crash reports, and as a result, changes to the police crash reports could require information technology infrastructure investments to update their electronic systems. Moreover, additional training of police officers regarding how to use the new data elements would be required. NHTSA officials stated that in the interim, state FARS analysts and NASS GES data coders will communicate to law enforcement officials that information on unsecured-load crashes should be included in the narrative portion of police crash reports. All fifty states and the District of Columbia have statutes regarding unsecured loads that pertain to non-commercial vehicles. While nine states reported having no exemptions related to their statutes, a majority of states and the District of Columbia reported exempting vehicles from unsecured-load statutes most commonly for roadway maintenance or agriculture activities, but these exemptions are primarily related to commercial activities.
All fifty states and the District of Columbia reported having fines or penalties for violating unsecured-load statutes ranging from $10 to $5,000; fifteen of these states add the possibility of imprisonment. (See appendix IV for a summary of all fifty states’ and the District of Columbia’s laws, exemptions, and penalties/fines.) Ten states reported having a safety or education program related to unsecured loads. All fifty states and the District of Columbia have statutes regarding unsecured loads that pertain to non-commercial vehicles. While the statutes vary widely, many use a common construction similar to: “No vehicle shall be driven or moved on any highway unless such vehicle is so constructed or loaded as to prevent any of its load from dropping, shifting, leaking, or otherwise escaping therefrom,” a statement that is often followed by exemptions, as discussed below. However, a few states, such as Mississippi, have short statutes that contain a shortened form of this common language. Other states, such as Oklahoma, set forth more specific instructions in the statute, directing, for example, that load coverings be “securely fastened so as to prevent said covering or load from becoming loose, detached or in any manner a hazard to other users of the highway.” The state statutes on unsecured loads differ more frequently in their description of exemptions. According to our survey, 41 states and the District of Columbia have exemptions from unsecured-load laws in their statutes (see fig. 3). These exemptions most commonly applied to roadwork and agriculture. For example, the most common roadway exemption includes “vehicles applying salt or sand to gain traction” or “vehicles dropping water for cleaning or maintaining the highway.” Exemptions for commercial activities range from general wording such as “applies to all motor vehicles except those carrying agricultural loads,” to industry-specific exemptions such as “applies to all motor vehicles except logging trucks or those carrying wood, lumber, or sawmill wastes.” Nine states reported having no exemptions to their unsecured-load statutes: Delaware, Kentucky, Missouri, Nebraska, New York, South Dakota, Texas, Vermont, and Wisconsin. All states have some level of fines or penalties for violations of unsecured-load statutes. Most states have specific penalties ranging from as little as $10 to as much as $5,000; fifteen states include possible jail time. (See fig. 4.) Two states—Nevada and New Hampshire—reported the fine as unknown, because it is imposed at the local court level and could vary widely. Twenty states and the District of Columbia reported maximum fines of $10 to less than $500, and only two of those states—Tennessee and Colorado—add possible jail time in addition to the fine. Eight of these states have maximum fines between $10 and $100 for the first offense. Twenty-eight states reported more severe maximum fines of $500 to $5,000 for violating unsecured-load laws, and thirteen of those states—Florida, Georgia, Illinois, Louisiana, Michigan, Mississippi, New York, Oklahoma, South Dakota, Virginia, Washington, West Virginia, and Wyoming—include possible jail time in addition to a fine. The states of Illinois, Virginia, and Washington have the highest maximum fines: $2,500 for Illinois and Virginia, and $5,000 for Washington.
In addition, law enforcement officials in all seven of the states we selected for interviews stated that, beyond the specific penalties stated in unsecured-load statutes, additional criminal charges could be brought in their states against individuals who injure or kill a person as a result of negligently securing a load. Enforcement officials in some states told us that it is often difficult to write citations for unsecured-load violations. In five of the seven states, officials we interviewed noted that statutory language can be ambiguous or can require that law enforcement officials witness the unsecured load falling, or that the load actually fall to the ground, for a statutory violation to occur. Such language forces law enforcement to respond reactively rather than proactively. All seven enforcement officials we interviewed told us they were not aware of how anyone could distinguish between citations written for commercial vehicles (i.e., vehicles used for business purposes) and non-commercial vehicles (i.e., private vehicles used, for example, to move personal belongings or take trash to the local landfill) as the laws are written in their states. Therefore, counting violations of their states’ unsecured-load laws specifically for non-commercial vehicles is not currently possible. Ten of the 50 states and the District of Columbia reported they have a safety or education program that pertains to unsecured loads on non-commercial vehicles. Those states include California, Illinois, Maine, North Carolina, North Dakota, South Carolina, South Dakota, Texas, Washington, and Wisconsin. Enforcement officials in all seven of the states we selected for interviews stated that in their experience, education—teaching drivers about the importance of properly securing the load in any vehicle or trailer before driving—is the key component to reducing unsecured-load incidents. See appendix V for examples of safety education materials from North Carolina and Washington. We provided a draft of this report to NHTSA for review and comment. NHTSA provided technical comments that were incorporated as appropriate. We are sending copies of this report to the Administrator of NHTSA, the Secretary of the Department of Transportation, and interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or FlemingS@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. This report examines (1) efforts the National Highway Traffic Safety Administration (NHTSA) has undertaken to monitor crashes involving vehicles carrying unsecured loads and (2) existing state laws, exemptions, and punitive measures regarding non-commercial vehicles carrying unsecured loads. For the purposes of our review, we defined unsecured load to include a load or part of a load in transit that is not properly restrained, tied down, or secured with tarps, nets, or ropes to reasonably prevent a portion from falling off. We defined non-commercial vehicles to include passenger vehicles (cars or light trucks) used for non-commercial purposes, and the towing of loads in an open trailer behind the passenger vehicle.
Light trucks included trucks of 10,000 pounds gross vehicle weight rating or less, including pickups, vans, truck-based station wagons, and utility vehicles. Open trailers included trailers that can be obtained from personal or commercial sources, such as U-Haul, but are used for non-commercial purposes. NHTSA collects data on crashes involving both non-commercial and commercial vehicles. We obtained NHTSA’s input in developing these definitions. To identify efforts NHTSA has undertaken to monitor crashes involving vehicles carrying unsecured loads, we obtained documents from and conducted interviews with NHTSA officials to obtain information on NHTSA’s current policies, procedures, and practices for monitoring crashes involving vehicles carrying unsecured loads. Specifically, we obtained information about what data on unsecured loads NHTSA currently collects; how NHTSA coordinates with state agencies on its data collection efforts; actions NHTSA has taken to date or plans to take to improve its data collection processes in response to its mandate; and challenges, if any, that NHTSA faces in improving its data on vehicles carrying unsecured loads. In addition, we conducted a literature search to identify and review relevant studies, reports, and available data on crashes involving vehicles carrying unsecured loads and to gain a better understanding of the magnitude of the problem. Finally, we analyzed NHTSA’s crash data from the Fatality Analysis Reporting System (FARS) and the National Automotive Sampling System General Estimates System (NASS GES) to identify the number of crashes in 2010 in which a vehicle struck falling or shifting cargo or an object lying in the roadway. We assessed the reliability of these data sources by, among other things, interviewing NHTSA officials and reviewing NHTSA policies and procedures for maintaining the data and verifying their accuracy. Based on this information, we determined that the data provided to us were sufficiently reliable for our reporting purposes. To identify existing state laws, exemptions, and punitive measures regarding non-commercial vehicles carrying unsecured loads, we conducted a literature review of and legal research on state laws, penalties, and exemptions regarding properly securing loads on non-commercial vehicles. In addition, we conducted a survey of all 50 states and the District of Columbia to supplement, verify, and corroborate data obtained from our legal research and to obtain additional information on penalties, enforcement actions, and education and prevention efforts in each state. (The survey is reproduced in appendix II.) The survey was completed primarily by law enforcement officers in each state’s Department of Public Safety. We selected three states in which to conduct pretests: Iowa, New Mexico, and Washington. In each pretest, we provided the state police official with a copy of our draft survey, asked this individual to complete it, and then conducted an interview to discuss the clarity of each question. On the basis of the feedback from the three pretests we conducted, we made changes to the content and format of the survey questions as appropriate. We launched our survey on June 20, 2012. We received completed responses from all 51 survey respondents, for a response rate of 100 percent. We reviewed survey responses for inaccuracies or omissions, analyzed the data, and have presented the key findings in this report.
We also conducted interviews with state police officials in seven states to collect information on enforcement actions and education and prevention efforts related to properly securing loads carried by non-commercial vehicles. We selected states that were (1) geographically diverse, (2) of varying sizes, and (3) varied in the types of laws related to non-commercial vehicles carrying unsecured loads. Using these criteria, we interviewed state police officials in California, Colorado, Maryland, New York, Texas, Washington, and Wisconsin. In addition, we also conducted interviews with associations and individuals active in highway safety issues, to obtain additional information on issues related to unsecured loads and efforts by states to deal with these issues. Interviewees included the American Automobile Association Foundation for Traffic Safety and one of the co-authors of a 2004 study for this foundation examining the safety impacts of vehicle-related road debris; the Governors Highway Safety Association; and the Transportation Cargo Safety Organization. We also requested interviews with the International Association of Chiefs of Police, the American Association of State Highway and Transportation Officials, and the American Association of Motor Vehicle Administrators; these organizations replied that they did not have information on unsecured-loads issues. We conducted this performance audit from March 2012 to November 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient and appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following summarizes the current FARS and NASS GES data categories for road-debris crashes and how each will be defined starting in 2013. The first existing category is used when cargo on or parts from a motor vehicle are set in motion, or an object in the road is struck by a motor vehicle and set in motion; in both cases, the cargo, parts, or object then strikes another motor vehicle. Scenario A: a mattress transported by, or a hubcap from, vehicle 1 becomes dislodged and is set in motion; the mattress or hubcap flies into and strikes vehicle 2. Scenario B: vehicle 1 hits a tree branch, or a hubcap from an unknown source, in the roadway and sets it in motion, striking vehicle 2. This category is currently used for all set-in-motion crashes described above; as revised, it will be used only for scenario A (crashes where the object set in motion was originally cargo on, or parts from, a moving motor vehicle and this object strikes another vehicle, person, or property, causing injury or damage). A new category (not a current category) will be used for scenario B (crashes where the object set in motion was not originally cargo on or parts from a moving motor vehicle, or where it is unknown whether the object was the cargo or a part of an in-transport motor vehicle; in either case, the object strikes another motor vehicle, person, or property, causing injury or damage). The second existing category is used for crashes wherein a motor vehicle strikes any non-fixed object, such as a mattress or a tree branch, lying in the roadway. Scenario C: a vehicle hits a tree branch already in the roadway. Scenario D: a vehicle hits a mattress already in the roadway. As revised, this category will be used only for scenario C (when a motor vehicle strikes a non-fixed object already at rest in the roadway but known to have not come from a motor vehicle, or unknown if it came from a motor vehicle).
A second new category will be used for scenario D (when a motor vehicle strikes a non-fixed object already at rest in the road but known to have been the cargo or part of another motor vehicle in transport). The following entries list, in order, each state’s unsecured-load law exemptions and the unsecured-load violation fines/penalties (with a separate penalty statute cited where the penalty is not contained in the unsecured-load law). Motor vehicles carrying agricultural loads. Not more than $500. Motor vehicles carrying agricultural, mining, and timber loads, vehicles applying salt or sand to gain traction, or public vehicles cleaning or maintaining the highway. Not more than $1000 and litter pickup. Motor vehicles carrying agricultural loads, cleaning or maintaining the highway or dropping sand for traction, minor pieces of agricultural materials such as leaves and stems from agricultural loads. $250–$1000. Motor vehicles depositing sand for traction or water for cleaning or maintaining the highway. $100 Arkansas Code Annotated § 5-4-201. Motor vehicles carrying clear water or live bird feathers. $211 ($146 fine plus $30 security fee and $35 conviction assessment) California Rules of Court, Rule 4.102, January 2010 Edition. Motor vehicles dropping material for traction or for cleaning or maintaining the roadway; vehicles operating entirely in a marked construction zone; vehicles involved in maintenance of public roads during snow or ice removal operations; vehicles involved in emergency operations when requested by a law enforcement agency or an emergency response authority. $150–$300 and/or 10–90 days imprisonment. C.R.S. 42-4-1701. Farming vehicles, motor vehicles dropping sand for traction or water for maintaining the roadway. $117–$158. Conn. Gen. Stat. § 14-271. None. First offense not less than $10 and no more than $28.75, and for each subsequent offense, no less than $28.75 and no more than $100. Motor vehicles dropping sand for the purpose of securing traction, or water or other substance sprinkled on the roadway in cleaning or maintaining the roadway. $150–$250. Farming vehicles traveling locally or vehicles dropping sand for traction or water for cleaning or maintaining the road. $200 Fla. Stat. § 318.18, license suspension with second offense. Any person who willfully violates the provisions of this section, which offense results in serious bodily injury or death to an individual within the confines of the statute, is also subject to fines of no more than $500 and prison for not more than 60 days; § 775.082 and § 775.083. Motor vehicles carrying agricultural loads, vehicles transporting agriculture or farm products. Up to $1000 and/or jail time not to exceed 1 year. O.C.G.A. § 17-10-3. Agricultural vehicles, vehicles carrying birds with feathers, and vehicles carrying rocks, sand, or gravel. $250–$1000 plus suspension of license (dependent on number of offenses). Vehicles that are government or quasi-government, their agents or employees or contractors thereof, in performance of maintenance or construction of a highway; vehicles owned by canal companies, irrigation districts, drainage districts or their boards of control, lateral ditch associations, water districts or other irrigation water delivery or management entities, or operated by any employee or agent of such an entity, performing construction, operation, or maintenance of facilities; and vehicles transporting agricultural products. $67.
Motor vehicles dropping sand for traction or water for cleaning the highway, or agricultural vehicles. For 109: $120, Class A misdemeanor, Illinois Supreme Court Rules, Rule 526. A conviction could result in a determinate sentence of imprisonment of less than one year, or a fine not to exceed $2,500 for each offense, or the amount specified in the offense, whichever is greater. Illinois Unified Code of Corrections (730 ILCS 5/5-4.5-55). For 109.1: Not to exceed $250. Motor vehicles transporting poultry or spreading sand/de-icing (removing ice). Up to $500 Indiana Code § 34-28-5-4. Motor vehicles carrying hay or stover (stalks and leaves of corn), or sand for traction or water for maintaining the roadway. $200 Iowa Code § 805.8A. Motor vehicles hauling livestock or spreading substances in highway maintenance or construction. Not to exceed $500 K.S.A. § 8-1901. None. Motor vehicles dropping sand to secure traction, or dropping a liquid substance on a highway to clean or maintain it. $500 and/or 6 months jail time. Motor vehicles carrying hay, straw, vines, cornstalks, or grain. $150–$500. Motor vehicles carrying agricultural products and those dropping materials to provide traction or clean the highway. $500. Motor vehicles dropping sand for the purpose of securing traction, or sprinkling of water or other substance on such a way in cleaning or maintaining the same. $50–$200. Highway maintenance vehicles engaged in ice or snow removal; agricultural and horticultural vehicles. Not more than $500 and/or 90 days jail time. Motor vehicles carrying agricultural products such as small grains, shelled corn, soybeans, or other farm produce, or vehicles dropping material for traction or cleaning. Not more than $300 Minn. Stat. § 169.89. Motor vehicles dropping material for traction or for cleaning or maintaining the highway. Not more than $500 and not more than 6 months imprisonment, or both. Miss. Code Ann. § 63-5-7, 63-9-11. None. Not to exceed $300 R.S.Mo. § 560.016. Commercial motor vehicles in compliance with state and federal laws; agricultural vehicles; vehicles performing road maintenance or in a marked construction zone. No more than $500 Mont. Code Anno., § 61-8-711. None. $100–$500 R.R.S. Neb. § 28-106. Motor vehicles dropping materials for traction or cleaning the highway. Fines are addressed and set by individual courts; for example, in Reno it is $403. Local farmers, transportation of heavy scrap or crushed vehicles, or construction vehicles in a construction zone, vehicles driving at less than 30 mph. Fines are addressed and set by individual courts. Agricultural vehicles. Not more than $500 for each violation. Agricultural vehicles or those dropping sand for traction or water for cleaning the roadway. $100 N.M. Stat. Ann. § 66-7-401; § 66-8-116. None. $100–$750 and/or imprisonment up to 30 days. Motor vehicles dropping material for traction or cleaning the highway (N.C. Gen. Stat. § 20-116). $100 N.C. Gen. Stat. § 20-176. Motor vehicles dropping sand for traction or water for highway maintenance. $20. Agricultural and garbage vehicles or those dropping sand for traction or water for cleaning the roadway. $150–$1000 ORC Ann. 2929.28; ORC Ann. 4513.99.
Agricultural vehicles or those dropping sand for traction or water for cleaning the roadway. $5 - $500 or imprisonment for up to 6 months, or both. 47 Okl. St. § 17-101. ORS § 818.300; 818.310 No exemptions for vehicles, just for certain roads, private thoroughfares. $260 ORS § 818.300(4) and ORS § 153.019. Additionally, owners or drivers are liable for all damage done as a result of the violation if it occurs on certain roadways. ORS § 818.410. Logging and garbage trucks, the shedding or dropping of feathers or other matter from vehicles hauling live or slaughtered birds or animals, and spreading of any substance in highway maintenance or construction operations. $300–$1000. Logging trucks or those carrying wood, lumber, or sawmill wastes. Motor vehicles dropping sand for traction or water for highway maintenance. $85, R.I. Gen. Laws § 31-41.1-4; $100 to not more than $500, R.I. Gen. Laws § 31-25-10. Motor vehicles dropping sand for traction or water for highway maintenance. Agricultural and timber-related vehicles. $100. None. $500 or 30 days in prison or both, S.D. Codified Laws § 22-6-2. Vehicles carrying farm produce to the market. Vehicles which transport crushed stone, fill dirt and rock, soil, bulk sand, coal, phosphate muck, asphalt, concrete, other building materials, forest products, unfinished lumber, agricultural lime. Motor vehicles dropping sand for traction or water for highway maintenance. No more than $50 or not more than 30 days in prison or both. Tenn. Code Ann. § 40-35-111. Unsecured-load law exemptions None. Unsecured-load violation fines/penalties (& separate penalty statute if not contained in unsecured-load law) $25–$500 Tex. Transp. Code § 725.003. Vehicles carrying dirt, sand, gravel, rock fragments, pebbles, crushed base, aggregate, any other similar material, or scrap metal. Certain agricultural loads and vehicles spreading any substance connected with highway maintenance, construction, securing traction or snow removal. $100–$250. None. $99 –$156 § 1454. Motor vehicles dropping material for traction or for cleaning or maintaining the highway. § 10.1-1424. Motor vehicles used exclusively for agricultural purposes, or transporting forest products, poultry, or livestock. § 46.2-1156. Not more than $2,500 or not more than 12 months in jail for violating § 10.1- 1424, and a fine of not more than $250 for violating § 46.2-1156. Va. Code Ann. § 18.2-11. Vehicles carrying gravel, sand, and dirt if 6 inches of freeboard is maintained within the bed. Motor vehicles dropping sand for traction. Up to $5000 or up to a year in jail or both. Rev. Code Wash. (ARCW) § 9A.20.021. Motor vehicles dropping material for traction or for cleaning or maintaining the highway. Up to $500 fine or 6 months imprisonment or both. W. Va. Code § 17C-18-1. None. $10–$200 Wis. Stat. § 348.11. Motor vehicles spreading substance for maintaining or constructing the highway. Up to $500 fine or 6 months imprisonment or both. Wyo. Stat. § 31- 5-1201. In addition to the contact named above, Judy Guilliams-Tapia (Assistant Director), Margaret Bartlett, David Hooper, Maren McAvoy, Maria Mercado, Amy Rosewarne, Beverly Ross, Kelly Rubin, and Andrew Stavisky made key contributions to this report. | Vehicles carrying objects that are not properly secured pose a safety risk on our nation's roadways. Debris that falls from a vehicle can collide with other vehicles or pedestrians, causing serious injuries or fatalities. 
According to data collected by NHTSA, there were about 440 fatalities caused by roadway debris in 2010. However, the exact number of incidents resulting from vehicles carrying unsecured loads is unknown. Congress, through the conference report for the Consolidated and Further Continuing Appropriations Act, 2012, directed NHTSA to improve its data on unsecured-load incidents and directed GAO to report on state laws, related exemptions, and punitive measures regarding unsecured loads on non-commercial vehicles, such as cars and light trucks used for non-commercial purposes. This report examines NHTSA's data collection efforts as well as states' laws related to unsecured loads. GAO reviewed NHTSA documents and interviewed officials from NHTSA, as well as representatives of highway safety associations and state police agencies. GAO also conducted a survey of all 50 states and the District of Columbia, with a response rate of 100 percent, and researched the laws, punitive measures, and education efforts in each state. GAO provided a draft of this report to NHTSA for review and comment. NHTSA provided technical comments that were incorporated as appropriate.

The National Highway Traffic Safety Administration (NHTSA) collects limited information on crashes involving vehicles carrying unsecured loads but plans to make changes to collect better information. Currently, NHTSA collects some data in the Fatality Analysis Reporting System and the National Automotive Sampling System General Estimates System. However, the systems do not currently have a data category to distinguish between debris resulting from natural sources (such as a tree branch) and debris resulting from human error (such as an unsecured load). As a result, NHTSA cannot currently identify how many crashes involve vehicles carrying unsecured loads. NHTSA intends to make changes to both its systems to better identify crashes involving unsecured loads. These changes will go into effect in 2013. However, NHTSA may still face challenges collecting these data because (1) law enforcement officials face difficulties in determining whether a crash involved an unsecured load and (2) states do not collect uniform data on unsecured loads in their police crash reports. NHTSA officials stated that they would likely recommend changes to the Model Minimum Uniform Crash Criteria (MMUCC), voluntary guidelines intended to create uniform data in police crash reports; however, the revised guidelines will not be released until 2017 because of MMUCC's 5-year cycle of updates. NHTSA officials acknowledged that even with the changes in its data systems, data improvements will take time to implement and data on unsecured-load crashes will likely continue to be imprecise.

All 50 states and the District of Columbia have statutes regarding unsecured loads that pertain to non-commercial and commercial vehicles. A majority of states and the District of Columbia reported exempting vehicles from unsecured-load statutes for primarily commercial activities, such as roadway maintenance or agriculture activities, while 9 states have statutes that apply to all vehicles. All 50 states and the District of Columbia reported having fines or penalties for violating unsecured-load statutes, ranging from $10 to $5,000; 15 states add the possibility of imprisonment. Ten states also reported having a safety or education program related to unsecured loads.
GPS is a global PNT network consisting of space, ground control, and user equipment segments that support the broadcasts of military and civil GPS signals. Each of these signals includes positioning and timing information, which enables users with GPS receivers to determine their position, velocity, and time 24 hours a day, in all weather, worldwide. GPS began operations with a full constellation of satellites in 1995. Over time, GPS has become vital to military operations and a ubiquitous infrastructure underpinning major sections of the economy, including telecommunications, electrical power distribution, banking and finance, transportation, environmental and natural resources management, agriculture, and emergency services. GPS is used by all branches of the military to guide troop movements, integrate logistics support, enable components underlying battlespace situational awareness, and synchronize communications networks. In addition, U.S. and allied munitions are guided to their targets by GPS signals, and GPS is used to locate military personnel in distress. Civil agencies, commercial firms, and individuals use GPS and GPS augmentations to accurately navigate from one point to another. Commercial firms use GPS and GPS augmentations to route their vehicles, as do maritime industries and mass transit systems. In addition to navigation, civil departments and agencies and commercial firms use GPS and GPS augmentations to provide high-accuracy, three-dimensional positioning information in real time for use in surveying and mapping and other location-based services. The aviation community worldwide uses GPS and GPS augmentations to increase the safety and efficiency of flight. GPS and GPS augmentations are also used by the agricultural community for precision farming, including farm planning, field mapping, soil sampling, tractor guidance, and crop scouting; the natural resources management community uses GPS for wildfire management and firefighting, pesticide and herbicide control, and watershed and other natural resources asset management. GPS is increasingly important to earth observation, which includes operational roles in weather prediction, the measurement of sea level change, monitoring of ocean circulation, and mitigation of hazards caused by earthquakes and volcanoes. GPS helps companies and governments place satellites in precise orbits and at correct altitudes, and helps monitor satellite constellation orbits. The precise time that GPS broadcasts is crucial to economic activities worldwide, including communication systems, electrical power grids, and financial networks.

GPS operations consist of three segments—the space segment, the ground control segment, and the user equipment segment. All segments are needed to take full advantage of GPS capabilities. (See fig. 1.)

The GPS space segment is a constellation of satellites that move in six orbital planes approximately 12,500 miles above the earth. GPS satellites broadcast encrypted military signals and unencrypted civil signals. The baseline constellation consists of satellites occupying 24 orbital slots—4 slots in each of the six orbital planes. However, because the U.S. government commits to at least a 95 percent probability of maintaining this baseline constellation of 24 satellites, the typical size of the constellation is somewhat larger. Moreover, in recent years, because numerous satellites have exceeded their design life, the constellation has grown to 31 active satellites of various generations. However, DOD predicts that over the next several years many of the older satellites in the constellation will reach the end of their operational life faster than they will be replenished, thus decreasing the size of the constellation from its current level, reducing satellite availability, and potentially reducing the accuracy of the GPS service.
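The orbital geometry above can be sanity-checked with Kepler's third law: at roughly 12,500 miles of altitude, a satellite completes one orbit in just under 12 hours, about two orbits per sidereal day. A minimal sketch of that arithmetic (the constants are standard physical values, not figures from this report):

```python
# Back-of-the-envelope check of the GPS orbital period from the ~12,500-mile
# altitude cited above, using Kepler's third law: T = 2*pi*sqrt(a^3 / mu).
import math

MU_EARTH = 3.986004418e14   # earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000         # mean earth radius, m
MILE = 1_609.344            # meters per statute mile

altitude = 12_500 * MILE              # ~2.01e7 m above the surface
a = R_EARTH + altitude                # semi-major axis of a circular orbit
period = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)

print(f"{period / 3600:.2f} hours")   # ~11.9 hours
```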
The GPS ground control segment comprises the Master Control Station at Schriever Air Force Base, Colorado; the Alternate Master Control Station at Vandenberg Air Force Base, California; 6 dedicated monitor stations; 10 National Geospatial-Intelligence Agency monitoring stations; and 4 ground antennas with uplink capabilities. Information from the monitoring stations is processed at the Master Control Station to determine satellite clock and orbit status. The Master Control Station operates the satellites and regularly updates the navigation messages on the satellites. Information from the Master Control Station is transmitted to the satellites via the ground antennas. The U.S. Naval Observatory Master Clock monitors the GPS constellation and provides timing data for the individual satellites. The U.S. Naval Observatory Master Clock serves as the official source of time for DOD and a standard of time for the entire United States.

The GPS user equipment segment includes military and commercial GPS receivers. A receiver determines a user's position by calculating the distance from four or more satellites using the navigation message on the satellites to triangulate its location. Military GPS receivers are designed to utilize the encrypted military GPS signals that are only available to authorized users, including military and allied forces and some authorized civil agencies. Commercial receivers use the civil GPS signal, which is publicly available worldwide.
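To make the position calculation described above concrete, the sketch below solves for a receiver's position and clock bias from four pseudoranges by iterated least squares. The satellite geometry and measurements are synthetic, and the solver is a textbook illustration rather than how any fielded receiver is implemented:

```python
# Illustrative pseudorange solver: each measurement is the true range plus a
# common receiver clock bias, so four satellites pin down (x, y, z, bias).
import numpy as np

def solve_position(sat_pos, pseudoranges, iterations=10):
    """Iterated least squares for receiver position (m) and clock bias (m)."""
    x = np.zeros(4)  # start at the earth's center with zero clock bias
    for _ in range(iterations):
        ranges = np.linalg.norm(sat_pos - x[:3], axis=1)
        residuals = pseudoranges - (ranges + x[3])
        # Jacobian rows: negative unit line-of-sight vectors, plus 1 for the bias
        H = np.hstack([-(sat_pos - x[:3]) / ranges[:, None],
                       np.ones((len(sat_pos), 1))])
        x += np.linalg.lstsq(H, residuals, rcond=None)[0]
    return x[:3], x[3]

r = 26.56e6  # satellites ~26,560 km from the earth's center
sats = np.array([[r, 0, 0], [0, r, 0], [0, 0, r],
                 [r / np.sqrt(3)] * 3], dtype=float)
truth = np.array([6.371e6, 0.0, 0.0])          # receiver on the surface
bias = 150.0                                   # clock bias expressed in meters
rho = np.linalg.norm(sats - truth, axis=1) + bias
pos, b = solve_position(sats, rho)
print(np.round(pos), round(b, 2))              # recovers the position and bias
```

The fourth unknown is the receiver's clock offset, which is why the text says four or more satellites are needed even though a position has only three coordinates.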
In 2000, DOD began efforts to modernize the space, ground control, and user equipment segments of GPS to enhance the system's performance, accuracy, and integrity. Table 1 shows the modernization efforts for the space and ground control segments. Full use of military and civil GPS signals requires a ground control system that can manage these signals. Newer software will upgrade the ground control to a service-oriented or netcentric architecture that can support "plug and play" features and can more easily connect to broader networks. To use the modernized military signal from the ground, military users require new user equipment, which will be provided by the military GPS user equipment program.

The 2004 U.S. Space-Based Positioning, Navigation and Timing policy established a coordinating structure to bring civil and military departments and agencies together to form an interagency, multiuse approach to program planning, resource allocation, system development, and operations. The policy also encourages cooperation with foreign governments and international organizations to promote the use of civil aspects of GPS and its augmentation services and standards. As part of the coordinating structure, an executive committee advises and coordinates among U.S. government departments and agencies on maintaining and improving U.S. space-based PNT infrastructures, including GPS and related systems. The executive committee is co-chaired by the deputy secretaries of DOD and DOT, and includes members at the equivalent level from the Departments of State, Commerce, Homeland Security, the Interior, and Agriculture; the Joint Chiefs of Staff; and the National Aeronautics and Space Administration (NASA). Figure 2 describes the national space-based PNT organization structure. The departments and agencies have various assigned roles and responsibilities. For example, the Secretary of Defense is responsible for the overall development, acquisition, operation, security, and continued modernization of GPS. The Secretary has delegated acquisition responsibility to the Air Force, though other DOD components and military services are responsible for oversight, for some aspects of user equipment development, and for funding some parts of the program. DOT has the lead responsibility for coordinating civil requirements from all civil departments and agencies. The Department of State leads negotiations with foreign governments and international organizations on GPS PNT matters and regarding the planning, operations, management, and use of GPS.

The Air Force faces challenges to launching its IIF and IIIA satellites as scheduled. The first IIF satellite launched May 27, 2010, almost 3½ years later than previously planned, and the IIF program appears to have resolved most outstanding technical issues. In addition, the program faces risks that could affect the on-orbit performance of some GPS satellites and subsequent IIF launches. The GPS IIIA program is progressing and the Air Force continues to implement an approach that should prevent the types of problems experienced on the IIF program. However, the IIIA schedule remains ambitious and could be affected by risks such as the program's dependence on a ground system that will not be completed until after the first IIIA launch. Meanwhile, the availability of the baseline GPS constellation has improved, but a delay in the launch of the GPS IIIA satellites could still reduce the size of the constellation to below its 24-satellite baseline, where it might not meet the needs of some GPS users.

Last year, we reported that under the IIF program, the Air Force had difficulty successfully building GPS satellites within cost and schedule goals, encountered significant technical problems that threatened its delivery schedule, and faced challenges with a different contractor for the IIF program. These problems were compounded by an acquisition strategy that relaxed oversight and quality inspections as well as multiple contractor mergers and moves and the addition of new requirements late in the development cycle. As a result, the IIF program had overrun its original cost estimate of $729 million by about $870 million and the launch of the first IIF satellite had been delayed to November 2009—almost 3 years late. Since our last review, launch of the first IIF satellite was postponed an additional 6 months—for an overall delay of almost 3½ years—to May 2010. The first IIF satellite launched May 27, 2010, and the program appears to have resolved outstanding technical issues. The satellite was delivered to Cape Canaveral Air Force Station, Florida, in February 2010 to undergo final testing and preparations for launch. The GPS Wing attributes recent launch delays to launch vehicle and pad availability issues, but the late discovery of some technical issues also contributed to the launch delay.
According to the GPS Wing, the technical issues were a result of inadequate oversight of the contractor earlier in the acquisition. To prevent an even longer launch delay, the program shipped the second IIF satellite to Cape Canaveral Air Force Station and conducted extensive system-level end-to-end tests. This enabled the program to take the time to address some technical issues on the first satellite while reducing risk using the second satellite—GPS Wing officials reported that it saved them approximately 60 days of schedule time. Although the first IIF satellite has launched, it is uncertain how the IIF satellites will perform on orbit and it is unclear how well positioned the program is to address any on-orbit problems without significantly affecting the IIF schedule. Only after the first satellite of a new generation, like IIF, has been launched and months of on-orbit tests have been conducted can a thorough understanding of its performance be obtained. Previously, the GPS Wing had planned to mitigate the risk of potential IIF performance issues by launching some satellites of the prior generation, the IIR-Ms, after the first IIF launch. Space programs in the past have used this practice to reduce risk in case there were on-orbit problems with the new generation of satellites. However, when the delivery of the IIF satellites was continually delayed, the Air Force launched the remaining IIR-M satellites to eliminate its dependence on the launch vehicle that was used for previous generations of GPS satellites. Two GPS Wing officials expressed concern that the GPS program is now in a riskier position than it has been for many years because it does not have any IIR-M satellites in inventory and ready to launch. In fact, the current IIF production and launch schedules indicate that there is little margin to address any potential on-orbit performance issues. Within a little over a year after the first IIF launch, three additional IIF satellites are scheduled to launch and six—half of all IIF satellites—are scheduled to have completed production. If problems are identified during on-orbit testing of the first satellite, the satellites already in production will have to be retrofitted to correct the deficiencies, which could result in delays in launching some IIF satellites.

Adding to these challenges, the need to compete for limited launch resources has increased across national security space programs and is likely to affect the Air Force's ability to launch GPS IIF as planned. Until recently, the Air Force made use of four launch facilities on the East Coast and three on the West Coast to launch its national security space satellites. However, the Air Force now plans to launch most national security satellites, including the GPS IIF and IIIA, using one of two Evolved Expendable Launch Vehicle (EELV) rocket types—Delta IV or Atlas V. EELV launches are conducted from two launch facilities on the East Coast and two on the West Coast. With this transition to relying on the EELV, the Air Force has reduced its launch facilities from seven to four. The East Coast launch facilities are in greatest demand, particularly the Atlas V's facility SLC-41. Not only does the Air Force plan to launch several high-priority satellites, including four IIF satellites, from that facility over the next 2 fiscal years, but NASA also plans to use it for the launch of two extremely time-sensitive missions within that same time period.
However, historically no more than four satellites have been launched from the SLC-41 facility in a single year, yet eight launches are planned for that facility in fiscal year 2011. Air Force officials stated that they are taking steps to improve their capability to launch more satellites per year on the EELV than in the past. The Air Force has acknowledged that it will be challenged to achieve its desired launch plans in the near future and is taking some steps to address this challenge. For example, the Air Force designed the GPS IIF satellites to be dual integrated—meaning they can fly on either the Delta IV or Atlas V launch vehicle—which gives the Air Force more flexibility than if it had relied on only one type of launch vehicle. The GPS program in particular plans to request funding to study the possibility of launching GPS satellites on the West Coast, which has the potential of offering a broader array of launch options. However, some of the potential solutions to these launch challenges, such as launching GPS satellites from the West Coast, are long-term solutions. Therefore, despite these efforts, the high demand for limited launch resources will likely affect the GPS program's ability to achieve its planned launches in the near future.

Last year, we reported that the Air Force structured the new GPS IIIA program to prevent mistakes made on the IIF program but that the IIIA schedule was optimistic. To avoid repeating past problems, the program was taking measures to maintain stable requirements, use mature technologies, and provide more contractor oversight. However, we also reported that the Air Force would be challenged to deliver IIIA on time because its satellite development schedule was optimistic given the program's late start, past trends in space acquisitions, and challenges facing the new contractor. For example, the GPS IIIA schedule from contract award to first satellite launch is 72 months. We found that that time period was 3 years shorter than the schedule the Air Force had achieved under its IIF program as well as shorter than most other major space programs we have reviewed. Furthermore, we questioned the reliability of the GPS IIIA schedule because we found that it did not fully meet best practices.

Since our prior report, we found that the GPS IIIA program appears to have furthered its implementation of the "back to basics" approach to avoid repeating the mistakes of GPS IIF and that it has passed a key design milestone. More specifically, the program has maintained stable requirements, has used mature technologies, and is providing more oversight than under the IIF program. There have not been any changes to the program to meet increased or accelerated technical specifications, system performance, or requirements. All critical technologies were reported to be mature at program start. The program held multiple levels of preliminary design reviews to ensure that the system was ready to proceed into detailed design. The preliminary design reviews were completed in May 2009, and the program completed its critical design review in August 2010. Furthermore, GPS Wing officials stated that they are requiring that the contractor follow military standards and specifications and that the contractor and subcontractors use earned value management. Since our last review, the GPS program has also made improvements to its integrated master schedule.
The success of any program depends in part on having a reliable schedule, and we found the GPS IIIA schedule to be highly integrated and of high quality. In our recent analysis of the IIIA schedule, we found that processes are in place to ensure that all activities are captured, are of reasonable duration, and are assigned resources. Our analysis also shows that in general the program office updates the schedule on a regular basis and logical relationships are used to determine important dates. However, our analysis also revealed instances of unreasonably high total float. Total float represents the amount of time an activity can slip before it affects the project finish date and is directly related to the logical sequencing of activities. High levels of float may interfere with management's ability to properly align resources to ensure that critical activities are not delayed. We also found that schedule risk analysis is performed periodically on the schedule, but some risks may not be captured in the overall risk analysis because of issues at the individual project schedule level. Appendix II discusses our examination of the prime contractor's schedule management process against best practices criteria in more detail.
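Total float is a mechanical product of the standard forward and backward passes over a schedule network, which is why it is so sensitive to the logical sequencing of activities. A minimal sketch with hypothetical activities (not drawn from the GPS IIIA integrated master schedule):

```python
# Compute total float for a tiny activity network: a forward pass finds the
# earliest start/finish dates, a backward pass finds the latest finish dates
# that do not delay the project, and float is the difference.
from collections import defaultdict

durations = {"design": 10, "build": 20, "test": 5, "docs": 4}
preds = {"build": ["design"], "test": ["build"], "docs": ["design"]}

order = ["design", "build", "docs", "test"]  # any topological order works
early_start, early_finish = {}, {}
for a in order:  # forward pass
    early_start[a] = max((early_finish[p] for p in preds.get(a, [])), default=0)
    early_finish[a] = early_start[a] + durations[a]

project_finish = max(early_finish.values())
succs = defaultdict(list)
for a, ps in preds.items():
    for p in ps:
        succs[p].append(a)

late_finish, total_float = {}, {}
for a in reversed(order):  # backward pass
    late_finish[a] = min((late_finish[s] - durations[s] for s in succs[a]),
                         default=project_finish)
    total_float[a] = late_finish[a] - early_finish[a]

print(total_float)  # {'test': 0, 'docs': 21, 'build': 0, 'design': 0}
```

In this toy network the zero-float activities form the critical path, while the large float on the docs activity is the kind of value that, when implausibly high, can indicate missing logic links in the schedule.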
Despite these efforts to develop a stable and successful program, the GPS IIIA program faces challenges to launching its satellites on schedule. First, the 72-month time period from contract award to first satellite launch is 3½ years shorter than the schedule achieved for the GPS IIF program. Though the GPS IIIA program has adopted practices that should enable it to deliver in a quicker time frame than the GPS IIF program, the inherent complexities associated with the design and integration phases that have yet to be completed will make it difficult to beat the prior schedule by that order of magnitude. More specifically, the IIIA program is not simply replicating the IIF program in terms of design and production. The program is using a satellite bus that, although it has flown on many satellites in the past, has not yet been used in medium-earth orbit, an orbit that requires different control software and production processes, such as a higher level of radiation hardening. The contractor will add a new signal, L1C, to the satellite that has not been included on previous GPS satellites and will also increase the power of the military signal that has been used on previous satellites. These types of changes can increase the time it takes to complete the program because some level of discovery will need to be completed during design and integration, and unanticipated technical problems that arise during these phases can have reverberating effects.

Second, the time period from contract award to first satellite launch in the IIIA schedule appears to be compressed compared to what the program had previously estimated. DOD's fiscal year 2004 funding request reported a schedule with 84 months from contract award to first satellite launch, but contract award took place 3 years later than had been planned while the first IIIA launch was only pushed back by 2 years, leaving that time period a year shorter than previously planned—a considerable amount of time given that requirements were not substantially changed to accommodate the schedule change.

Third, according to GPS Wing officials, the program is trying to improve the quality of the satellites by requiring that the contractor follow military standards and specifications. This action is a positive step; however, using this more rigorous approach is likely to pose challenges to meeting the IIIA schedule. GPS Wing officials stated that GPS IIIA is currently the only major space system acquisition that is requiring the use of military standards and specifications and it is shouldering much of the burden of transitioning to these more rigorous standards. Officials report that some of the standards and specifications are out of date and familiarity with these standards has been lost. Updating the standards and specifications along with developing and implementing the necessary training and testing to apply them takes time and creates cost pressure.

Lastly, it should be noted that no major satellite program undertaken by DOD in the past decade has met its schedule goals. The GPS IIIA program itself has done more than many programs in the past decade to position itself to meet its dates, but there are still actions that need to be taken across DOD to enable space programs to meet their schedule goals. As we testified in March 2010, these include strengthening the space acquisition workforce, clarifying lines of accountability and authority, and lengthening program manager tenures, among others.

An additional challenge to launching the IIIA satellites on time is the GPS IIIA program's dependence on a ground control system that is currently in development. More specifically, the first block of the ground system, called the Next Generation Control Segment, or OCX, is scheduled to be operational in fourth quarter fiscal year 2015, over 1 year after the launch of the first GPS IIIA satellite. GPS Wing officials stated that a complete system-level test cannot be conducted until OCX is available, at which point GPS IIIA can become part of the operational constellation and be set "healthy." They also stated that they would prefer not to launch a second GPS IIIA satellite until the first IIIA satellite is set healthy, meaning that until OCX is available, only one GPS IIIA satellite should be launched. Yet the planned launch dates for the GPS IIIA satellites reflect a rapid series of IIIA launches with five launches taking place within 2 years after the first IIIA launch. If OCX is late, as some Air Force satellite ground control systems have been, several IIIA satellites may not be launched as currently scheduled. In October 2009, we reported that three of eight ground control systems were lagging significantly behind their satellite counterparts. Of the five that were not behind, some were still experiencing schedule delays; however, their satellite counterparts were also experiencing delays.

To ensure that the GPS constellation can provide PNT information to GPS users located anywhere on the earth at almost any time of day, the performance standards for both (1) the standard positioning service provided to civil and commercial GPS users and (2) the precise positioning service provided to military GPS users commit the U.S. government to at least a 95 percent probability of maintaining a constellation of 24 operational GPS satellites. Last year, we reported that the estimated long-term probability of maintaining a constellation of at least 24 operational satellites would fall below 95 percent during fiscal year 2010 and would remain below 95 percent until the end of fiscal year 2014, at times falling to about 80 percent. We also reported that if a 2-year delay were to occur to the launch of the first and subsequent GPS III satellites, the U.S.
government would be at a much greater risk of failing to meet this commitment. The availability of the constellation has shown considerable improvement since last year; the Air Force now predicts that the probability of maintaining a constellation of at least 24 operational satellites will remain above 95 percent for the foreseeable future—through at least 2025, the date that the final GPS III satellite is expected to become operational. However, the long-term impact of a delay to GPS III could still reduce the guaranteed size of the constellation to fewer than 24 satellites, which might not meet the needs of some GPS users. According to the Air Force, the impact of such a delay could be mitigated somewhat by shutting off a second payload on GPS satellites to save power and thereby extend the lives of aging satellites. However, our analysis shows that this approach alone would have a limited impact on enabling the U.S. government to meet its commitment to a 95 percent probability of maintaining a 24-satellite constellation—increasing the predicted size of the constellation (at the 95 percent confidence level) by 1 satellite.

The Air Force, with technical support from the Aerospace Corporation, calculates satellite lifetime estimates for each on-orbit and production (not yet launched) GPS satellite based on detailed reliability analysis of the satellite's primary life-limiting subsystems. We replicated this analysis for this review using parameters provided by the Air Force. The Air Force's analysis is used to generate a reliability function for each satellite—that is, the probability that the satellite will still be operational as a function of its time on orbit. Each satellite's reliability function is modeled as the product of two cumulative probability distributions—one that accounts for the wear out of life-limiting components and one that accounts for random failures. Individual satellite reliability functions can be combined with a launch schedule and launch success probabilities to predict the constellation availability—that is, the predicted size of the constellation as a function of time. (See app. I for a more complete description of the approach used to generate the reliability function for each satellite and to combine these reliability functions into a constellation availability analysis.)

While the mathematical techniques used to combine satellite reliability functions are straightforward, the techniques used to generate the reliability functions themselves have inherent limitations. In particular, because the reliability functions associated with new (unlaunched) generations of GPS satellites are based solely on engineering and design analysis, instead of on-orbit performance data, the actual reliability of these satellites may be very different, and reliability functions may need to be modified once on-orbit performance data become available. For example, while the IIA satellites were designed to last 7.5 years on average, they have actually lasted more than twice as long, and the Aerospace Corporation has had to adjust the reliability functions of these satellites to account for this difference. Moreover, satellite operators work to develop innovative operational tactics to maximize the useful life of each GPS satellite. An official with the 2nd Space Operations Squadron, which operates and maintains the GPS constellation, noted that a healthy tension exists between the acquisitions community, which tends to be conservative in estimating the lifetimes of the things it acquires, and the operations community, which continues to evolve new techniques and procedures for getting more life out of old systems. Nevertheless, the Air Force appears to have a mature process in place to develop, certify, and routinely update satellite reliability functions, and we have found no evidence to suggest that this process is biased toward overly conservative estimates of satellite lifetimes.
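The mechanics described above can be sketched compactly: treat each satellite's survival as the product of a wear-out survival curve and a random-failure survival curve, sample lifetimes for the fleet and for scheduled launches (discounted by launch success probability), and read off the constellation size that holds at 95 percent confidence. Every distribution and parameter below is an invented placeholder, not an Air Force or Aerospace Corporation value:

```python
# Toy constellation-availability analysis. Each satellite's reliability is
# modeled as the product of a Weibull wear-out survival curve and an
# exponential random-failure survival curve; sampling min(wearout, random)
# draws a lifetime from exactly that combined distribution.
import math
import random

def draw_lifetime(mean_wearout_yrs, shape=4.0, random_fail_rate=0.02):
    """Sample one lifetime as the earlier of wear-out and random failure."""
    scale = mean_wearout_yrs / math.gamma(1 + 1 / shape)  # Weibull scale from mean
    wearout = scale * (-math.log(random.random())) ** (1 / shape)
    random_fail = -math.log(random.random()) / random_fail_rate
    return min(wearout, random_fail)

def guaranteed_size(year, fleet, launches, trials=5000):
    """Constellation size maintained with 95 percent confidence at `year`."""
    counts = []
    for _ in range(trials):
        up = sum(draw_lifetime(mean) > age + year for age, mean in fleet)
        for launch_year, mean, p_success in launches:
            if launch_year <= year and random.random() < p_success:
                up += draw_lifetime(mean) > year - launch_year
        counts.append(up)
    counts.sort()
    return counts[int(0.05 * trials)]  # 5th percentile of simulated sizes

fleet = [(random.uniform(0, 12), 11.0) for _ in range(31)]   # (age, mean life)
schedule = [(y, 12.0, 0.95) for y in range(1, 6) for _ in range(2)]
for yr in range(8):
    print(yr, guaranteed_size(yr, fleet, schedule))
```

The report's "guaranteed size" is a low percentile of this simulated distribution rather than its mean, which is why it is the conservative figure quoted against the 24-satellite commitment.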
Last year, we reported that because there were 31 operational GPS satellites of various generations, the near-term probability of maintaining a constellation of at least 24 operational satellites would remain well above 95 percent for a brief period of time, but because older satellites were predicted to fail faster than they were scheduled to be replaced, we reported that the constellation would, in all likelihood, decrease in size. We noted that the probability of maintaining a constellation of 24 operational satellites would fall to below 95 percent in fiscal year 2009, and to as low as 80 percent before recovering near the end of fiscal year 2014.

This situation is now much improved. There are still 31 operational satellites, 30 of which are currently working to performance standards and available to GPS users. Our updated analysis, based on the most recent satellite reliability data, indicates that the size of the constellation is still expected to decline somewhat over the next several years. However, if the current launch schedule holds, the probability of maintaining a constellation of 24 satellites will remain above 95 percent for the foreseeable future. Figure 3 compares the predicted size of the GPS constellation over time (at the 95 percent confidence level) that we calculated based on the GPS reliability data and launch schedule we used last year with the predicted size of the constellation over time that we calculated based on the latest available GPS reliability data and launch schedule. The improvement in the near-term predicted size of the constellation is the result of several factors, most notably the Air Force's assumptions regarding an increased life expectancy for some of the on-orbit satellites. Other factors include the successful launches of the last two GPS-IIR-M satellites in March 2009 and August 2009 and some adjustments to the launch schedule.

Our updated analysis does not include the contribution of several residual satellites that have been decommissioned but not yet permanently disposed of. These satellites could be reactivated if there were an unexpectedly large number of satellite failures in the near future. However, the maximum size of the current constellation is limited to 31 operational satellites because of limitations of the current ground system, and none of these residual satellites is expected to continue operating beyond the end of fiscal year 2013. Consequently, while including these satellites in our analysis would further increase the probability of maintaining a 31-satellite constellation for the next few years, these residual satellites would have little or no impact on the size of the constellation beyond fiscal year 2013. Our updated analysis also assumes that GPS-IIR-M-20—otherwise known as satellite vehicle number 49 (SVN-49)—will remain operational.
However, while this satellite is currently operational and broadcasting GPS signals, it has remained in an “unhealthy” status since it was launched in March 2009, and consequently remains unavailable to GPS users. The satellite remains unhealthy because of a small but permanent signal anomaly that could adversely affect GPS user equipment if it were activated without putting mitigation measures in place. This anomaly resulted from unexpected complications following the integration of a demonstration payload onto the satellite—a payload that broadcasts the third civil signal. The Air Force is examining several options to mitigate the impact of this anomaly, but no solution that would work for all GPS users has been identified. On March 26, 2010, DOT published a request seeking public comment on the Air Force’s proposed mitigation options in the Federal Register. However, a final decision as to whether SVN-49 will be set healthy is not expected to be made until June 2011. If SVN-49 were excluded from our analysis, the impact would be to reduce the predicted size of the constellation by about one satellite until around fiscal year 2020. Last year, we reported that a delay in the production and launch of GPS III satellites could have a big impact on the U.S. government’s ability to meet its commitment to maintain a 24-satellite GPS constellation. We noted that the severity of the impact would depend on the length of the delay, and that, for example, a 2-year delay (which is less than the average delay experienced by major space programs over the past decade) in the production and launch of the first and all subsequent GPS III satellites would reduce the probability of maintaining a 24-satellite constellation to about 10 percent by around fiscal year 2018. Put another way, we predicted that the guaranteed size of the constellation (at the 95 percent confidence level) would fall to about 17 satellites by that time. Our updated analysis based on the latest reliability data and launch schedule indicate that a 2-year delay in the production and launch of the GPS III satellites would still lead to a drop in the guaranteed size of the constellation (at the 95 percent confidence level) to about 18 satellites by fiscal year 2018. See figure 4 for details. This analysis assumes that the Air Force will be able to launch all 12 IIF satellites on schedule; a slower IIF launch rate would change the shape of the availability curve—reducing the amount of time that the guaranteed size of the constellation would remain above 24 satellites—but would not reduce the depth of the decline in the constellation’s guaranteed size. Moreover, while the performance of several of the on-orbit satellites has been somewhat better than was expected last year, there has been no change to the expected lifetimes of any of the IIF, IIIA, IIIB or IIIC satellites. Consequently, the predicted size of the constellation around fiscal year 2018—at a time when the constellation will be predominantly made up of IIF, IIIA, and IIIB satellites—is about the same as last year’s analysis had predicted. The drop-off in the predicted size of the constellation in fiscal year 2022 is the result of changes to the approved launch schedule for the IIIC satellites since last year. While the Air Force still plans to launch the first IIIC satellite in June 2019, the scheduled launch dates for the rest of the IIIC satellites have been pushed back from 5 months (for the second IIIC launch) to 28 months (for the 16th and final IIIC launch). 
Excluding random failures, the operational life of a GPS satellite tends to be limited by the amount of power that its solar arrays can produce. This power level declines over time as the solar arrays degrade in the space environment until eventually they cannot produce enough power to maintain all of the satellite's subsystems. The effects of this power loss can be mitigated somewhat by actively managing satellite subsystems—shutting them down when they are not needed—thereby reducing the satellite's overall consumption of power. The Air Force currently employs this approach—referred to as current management—to extend the life of GPS satellites. According to the Air Force, it would also be possible to significantly reduce a satellite's consumption of power and further extend the life of its PNT mission by shutting off a second payload on a GPS satellite once the satellite could not generate enough power to support both missions. Shutting off the second payload once the satellite cannot support both missions—known as power management—would further mitigate the impact of a delay in GPS III. However, the impact is limited to increasing the predicted size of the constellation by about 1 satellite. For example, if the GPS III program were delayed by 1 year, the guaranteed size of the constellation (at the 95 percent confidence level) would decline to about 21 satellites by fiscal year 2017 if current management were employed and to about 22 satellites if power management were employed. See figure 5 for details. If the GPS III program were delayed by 2 years, the guaranteed size of the constellation (at the 95 percent confidence level) would decline to about 18 satellites by fiscal year 2018 if current management were employed and to about 19 satellites if power management were employed. See figure 6 for details.

Because the second payload relies on the PNT payload, there would be no operational benefit to retaining the second payload and shutting off the PNT payload at the point where a satellite cannot support both missions. However, the constellation availability analysis that employs power management does not address whether the constellation is satisfying the missions supported by the second payload. Moreover, according to Air Force Space Command officials, power management should not be used as the basis for official constellation availability analysis, given the uncertainties associated with predicting a satellite's actual power usage. We agree, given the criticality of GPS to military and civilian users.
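Under those caveats, the current-management versus power-management comparison amounts to rerunning the same 95-percent-confidence sizing with satellite lifetimes extended by whatever margin the power savings buys. In the sketch below, the 1.5-year extension and every other parameter are invented for illustration; the report's figures 5 and 6 reflect the Air Force's actual data:

```python
# Toy comparison of current management vs. power management: the latter is
# modeled as a fixed life extension added to each satellite's wear-out draw.
import math
import random

def lifetime(mean_yrs, shape=4.0, extension_yrs=0.0):
    """Weibull wear-out draw, plus any power-management life extension."""
    scale = mean_yrs / math.gamma(1 + 1 / shape)
    return scale * (-math.log(random.random())) ** (1 / shape) + extension_yrs

def size_at_95(fleet, year, extension, trials=5000):
    counts = sorted(sum(lifetime(mean, extension_yrs=extension) > age + year
                        for age, mean in fleet)
                    for _ in range(trials))
    return counts[int(0.05 * trials)]  # 95%-confidence constellation size

fleet = [(random.uniform(0, 12), 11.0) for _ in range(31)]  # (age, mean life)
for mode, ext in [("current management", 0.0), ("power management", 1.5)]:
    print(mode, [size_at_95(fleet, yr, ext) for yr in range(8)])
```

In runs of this toy model, a modest extension moves the 95-percent-confidence size by only a satellite or two, echoing the limited impact described above.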
If GPS constellation performance were to fall below the baseline constellation of 24 satellites, the constellation would continue to provide a high level of service to most users most of the time, although accuracy and availability could diminish in some locations for brief periods. Military users of GPS understand that a diminished constellation of fewer than 24 satellites will affect their operations. However, it is unclear whether military users of GPS understand the potential specific effects. The Army, Marine Corps, and Navy user representatives reported that their services had not conducted any studies to assess how their operations would be affected if the constellation were to drop below 24 satellites. Furthermore, while some user representatives pointed out that the effects of diminished constellation availability would vary depending on which satellites continued to be available, most did not provide very specific explanations of the potential effects of a decline below performance standards on their services' operations. For example, the services reported the following:

Air Force. The Air Force user representative stated that the Air Force has "a healthy concern for the ready viability, integrity, and availability of this system. Specific data points, analysis, and vulnerabilities would be classified." Any system that would possibly function without its full designed or optimized capability would naturally have some operational degradation.

Army. The Army user representative stated that effects largely depend on which satellites would remain available. If there is a decline just below 24 satellites, the effect would probably be minimal, but with each additional space vehicle lost the operational impact would increase.

Marine Corps. The Marine Corps user representative stated that Marines are accustomed to using GPS for PNT; therefore the loss of GPS would severely affect Marines' ability to navigate. Effects would vary depending on the situation in which a user operates. The most severely affected Marines would be those who use GPS in marginal but currently acceptable conditions, such as under foliage, in mountains, and in urban settings, where a smaller constellation is more likely to result in diminished or no service.

Navy. The Navy user representative stated that there is no "one-size-fits-all" answer, that information regarding the effects would be classified, and that the Navy would continue to operate even if it could not use GPS, although missions might take longer to accomplish and require additional assets.

Civil agency officials stated that if the constellation performance fell below the committed level of service, their operations would be affected; however, the effects vary by agency. For instance, Federal Aviation Administration (FAA) officials stated that a constellation smaller than the committed 24 satellites could result in flight delays and increased reliance on legacy ground-based navigation and surveillance systems. Likewise, U.S. Coast Guard officials stated that they could revert to older methods of navigation if GPS service were diminished, but there would be a loss of efficiency. On the other hand, the National Institute of Standards and Technology, within the Department of Commerce, relies on GPS for timing data rather than navigation data and may be less sensitive to decreases in the number of GPS satellites. Furthermore, some civil agencies rely on both GPS and augmentation systems. For example, FAA augmentation systems increase the integrity of GPS for aviation purposes. However, officials from a few civil agencies explained that the augmentation systems cannot compensate for a drop in the size of the GPS constellation below the committed level.

GPS modernization efforts across the space, ground control, and user equipment segments introduce new capabilities, such as improved resistance to jamming and greater accuracy. For most of these new capabilities, all three segments need to be in place in order for users to benefit from the new capability. However, the development of GPS ground control systems has experienced years of delay and in some cases will delay the delivery of new capabilities to users.
In addition, although the Air Force has taken steps to enable quicker procurement of military GPS user equipment, there are significant challenges to these systems' implementation. We previously reported that the Air Force had not been fully successful in synchronizing the acquisition and development of the next generation of GPS satellites with the ground control system, thereby delaying the ability of military and civil users to utilize new GPS satellite capabilities. The delay was due to funding shifts that were made to resolve GPS IIF satellite development problems. Since our last report, we found that the Air Force has faced technical problems and continued to experience delays in upgrading the capabilities of the current ground control system and that the delivery date of the follow-on ground system has further slipped. Table 2 highlights specific new capabilities for which there have been significant delays in the ground segments and additional delays that have occurred since last year's review.

Since our 2009 report, the contract for the newest ground system development effort—known as OCX—was awarded in February 2010, about 10 months later than originally planned. To account for the delay and increase confidence in the schedule, the Air Force extended the OCX delivery schedule by adding 16 months of development time. As a result, key OCX capabilities associated with the IIIA satellites will not be operational until September 2016—over 2 years after the first IIIA satellite launch. The Air Force is working on a mitigation strategy that calls for development of a separate effort to launch and control the first IIIA satellite. However, GPS Wing officials indicated that the effort will not enable new capabilities offered by IIIA, including a signal known as Military Code (M-code), which is designed to enable resistance to jamming, and three civil signals: the second civil signal (L2C), to improve the accuracy of the other signals; the third civil signal (L5), to be used for aviation; and the fourth civil signal (L1C), to offer interoperability with international global space-based PNT systems. The other delayed capability identified in table 2 is the Selective Availability Anti-Spoofing Module (SAASM), which will provide military users with improved security and information assurance. The ground control system software that precedes OCX deploys the SAASM functionality, which is a critical enabler of DOD's navigation warfare strategy. Although new user equipment capable of exploiting SAASM was delivered to the warfighters in 2004, they were not able to take full advantage of this capability until January 2010—when the SAASM module was delivered as part of the ground control system.

GPS has become an essential element in conducting military operations. GPS user equipment is incorporated into nearly every type of system used by DOD, including aircraft, spacecraft, ground vehicles, ships, and munitions. A key component of the GPS modernization is a new military signal—known as M-code—that will increase the jam resistance of the GPS military service. For military users to benefit from this new capability, they need to be provided with new military user equipment capable of receiving and processing the new military signal. In 2009, we found that the Air Force was not fully successful in synchronizing the acquisition and development of the next generation of GPS satellites with the user equipment, thereby delaying users' ability to benefit from M-code.
While the signal was to be made operational by the GPS satellites and ground control system in about 2013 (now 2016), we found that the warfighters would not be able to take full advantage of this new signal until about 2025—when the modernized user equipment is completely fielded. We also found that diffuse leadership was a contributing factor, given that there was no single authority responsible for synchronizing procurements and fielding of user equipment. More specifically, while the Air Force was responsible for developing the satellite and ground segments for GPS, the military services were individually responsible for procuring user equipment for the weapon systems they owned and operated. As such, there were separate budget, management, oversight, and leadership structures over the space, ground control, and user equipment segments. While there were valid reasons to segment procurement responsibility, DOD and GAO studies have consistently found that DOD has lacked the tools necessary to coordinate these procurements and ensure that they are synchronized to the extent that warfighters can take advantage of M-code and other new capabilities available to them through GPS satellites. Since our 2009 report, the Air Force has taken steps to enable quicker procurements of user equipment, but there are still significant challenges to its implementation. First, the Air Force intends to follow an acquisition approach that will enable the military services to contract separately with commercial GPS providers rather than develop entirely new, customized user equipment systems. To support this approach, the Air Force plans to develop a common module, which commercial providers could use, along with interface control documents, to produce their equipment. The Air Force’s current expectation is that it will issue requests for proposals in February 2011, formally initiate the military user equipment acquisition program in fiscal year 2012, and begin production in fiscal year 2015. At this time, however, the Air Force does not have approved requirements or an approved military user equipment acquisition strategy. Second, as a pathway to its new approach, the Air Force is working with three contractors to develop GPS receiver cards capable of receiving and processing legacy GPS signals and the new military signal, while incorporating a new security architecture into the design. However, the delivery of receiver cards from two contractors has slipped by about a year because of unforeseen challenges with software and hardware integration and antispoofing software development and integration. The third contractor is facing technical problems, the cause of which has not yet been identified, and the Air Force is uncertain as to when this contractor will deliver its receiver card. Even after the cards are developed and delivered, they still need to go through independent security and technology testing to demonstrate that the technologies are mature, which can take 9 months to a year. Moreover, since there is still no program of record for the military GPS user equipment, it is difficult to forecast when enough military GPS user equipment will be in place to utilize the M-code capabilities operationally. Third, some steps have been taken to better coordinate procurements of user equipment. Specifically, in January 2010, the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics held its first annual GPS enterprise review. 
The purpose of this review, which will be held again in the fall of 2010, is to review the status of the GPS acquisition programs at one time and provide more visibility into how the GPS acquisitions and capabilities fit together. In addition, DOD recently created the Space and Intelligence Office within the Office of the Under Secretary for Acquisition, Technology and Logistics to ensure that all three segments of GPS stay synchronized in the development and acquisition processes. DOD has also documented GPS synchronization as one of its goals for the next 15 years in its March 2010 Net-Centric Portfolio Strategic Plan, used in part to identify areas requiring additional focus. More specifically, DOD plans to ensure synchronized development and fielding of GPS space, ground control, and user equipment segments to support delivery of advanced capabilities. This includes fielding user equipment to all designated users starting in 2014 and almost completing fielding by full operational capability of the GPS III satellite constellation. In DOD’s netcentric plan, M-code initial operational capability is defined as having 18 M-code satellites on orbit, having the control segment able to command and upload M-code capabilities to the satellites, and having enough military GPS user equipment in place across DOD to utilize M-code capabilities operationally. Furthermore, the Air Force has made significant changes to the definition of initial operational capability, which now takes into account all three GPS segments rather than only the satellite segment. DOD has taken some steps to coordinate GPS segments, but it is not likely that these will be sufficient to ensure that all GPS segments are synchronized to the maximum extent practicable, which we recommended last year. Specifically, we recommended that the Secretary of Defense appoint a single authority to oversee the development of GPS, including DOD space, ground control, and user equipment assets, to ensure that the program is well executed and resourced and that potential disruptions are minimized. The creation of the Space and Intelligence Office is a positive development; however, the office does not have authority over all user equipment. In addition, we recently reported that DOD program officials believe that the primary reason that user equipment is not optimally synchronized is a lack of coordination and effective oversight of the many military organizations that either develop user equipment or have some hand in the development. The GPS interagency requirements process remains relatively untested and civil agencies continue to find the process confusing. The lack of detailed guidance on the process is a key source of confusion and has also contributed to other problems, such as disagreement and inconsistent implementation of the process. In addition, we found that the interagency requirements process relies on individual agencies to identify their own requirements but does not identify PNT needs across civil agencies. We previously reported that DOD and civil agencies considered the process for approving civil GPS requirements rigorous but relatively untested, and that civil agencies found the process confusing. We stated that prudent steps had been taken to manage requirements and coordinate among the many organizations involved with GPS. However, we reported that civil agencies had not submitted many requirements proposals to date. 
We focused on two proposals: those for the Distress Alerting Satellite System (DASS) and the geodetic requirement implemented by Satellite Laser Ranging (SLR). These proposals had yet to complete the initial steps in the interagency requirements process. In addition, we reported that civil agencies that had proposed GPS requirements found the requirements approval process confusing and time-consuming. We recommended that, if weaknesses were found, the Secretaries of Defense and Transportation should address civil agency concerns about developing requirements, improve collaboration and decision making, and strengthen civil agency participation. Both DOD and DOT concurred with this recommendation. DOD noted that it would seek ways to improve civil agency understanding of the DOD requirements process and would work to strengthen civil agency participation. DOT indicated that it would work with DOD to review the process and improve civil agency participation.

In our current work, we found that the requirements process continues to be relatively untested and that a lack of documentation at the various stages of the process makes it difficult to determine the extent to which requirements have followed the GPS interagency requirements process. No new civil requirements have been requested since our prior report; while DASS and SLR have made some progress, no final decision on whether these requirements will be included on GPS has been made. In addition, there are some civil requirements that have already been included in the DOD requirements document for GPS III, but the extent to which they were evaluated via the interagency requirements process is unclear.

The Interagency Forum for Operational Requirements (IFOR), which is co-chaired by officials from DOD and DOT and includes members from several agencies, serves as the entry point into the process and is responsible for receiving and processing new operational requirements and for clarifying existing requirements. DOT has the lead responsibility for the coordination of civil requirements from all civil departments and agencies. Although guidance on the steps in the interagency requirements process describes a more complex process, descriptions by officials involved with the process indicate that there are three key steps, with the final determination of whether a requirement is approved being made by DOD's Joint Requirements Oversight Council (JROC) in coordination with DOT's Extended Positioning/Navigation Executive Committee:

1. Civil agencies are to internally identify and validate their requirements and conduct cost, risk, and performance analyses.

2. Civil requirements proposals are submitted to IFOR, which is composed of military and civil working groups. IFOR is then to assist with preparing civil requirements proposals for a GPS satellite capability development document.

3. Upon IFOR recommendation, civil requirements enter the Joint Capabilities Integration and Development System (JCIDS), the DOD process to validate warfighter requirements. DOD's JROC will make the final determination of whether a requirement will be approved for inclusion on GPS, which is documented in the JROC-approved capability development document.

Additional details in the guidance provide more specificity regarding how these steps are to be implemented and describe additional steps that may be necessary if there are disagreements or other issues that require adjudication.
In addition, there may be a considerable amount of communication with the requesting agency and revision during this process if IFOR or DOD determines that improvements to the requirements packages are necessary. As shown below, two requirements, DASS and SLR, formally entered the interagency requirements process but have not yet completed the review process. Two other civil requirements were included in the GPS III capability development document, but as reflected in table 3, the lack of documentation of their review makes it difficult to determine the extent to which the GPS interagency requirements process was applied for those submissions.

Guidance for the interagency requirements process lacks sufficient detail in areas such as explanations of key terms, documentation standards, steps in the process, and funding. This lack of detail has contributed to a number of problems, such as confusion, disagreement among the agencies involved, and inconsistent implementation of the process. Three documents provide guidance specific to the interagency requirements process: National Security Presidential Directive No. 39 (NSPD-39) provides high-level guidance, and the GPS Interagency Requirements Plan (IRP) and the IFOR charter provide more process-specific guidance. The documents do not define key terms, such as secondary mission requirement, civil use, and dual use, nor do they outline how these types of requirements should be treated in the interagency requirements process. As a result, distinctions based on informal verbal instructions appear to have affected how requirements have been treated in the process and could affect future funding decisions.

Secondary mission requirements. A secondary mission requirement, sometimes called a secondary payload, is a requirement that does not directly support the primary GPS mission to provide PNT information. The guidance does not define the term, nor does it indicate whether or how a secondary mission requirement should be evaluated via the interagency requirements process. DASS is considered to be a secondary mission requirement, and Coast Guard officials involved with the DASS program report that its review was delayed for several years because of uncertainty regarding how secondary mission requirements should be treated in the interagency process. According to those officials, when the DASS requirement was submitted to IFOR in 2003, the Coast Guard was told that DASS should not be reviewed via this process because it was a secondary mission requirement and that it should instead be submitted directly to DOD's JCIDS requirements process. After several years of delay, the Coast Guard was informed that DASS should be reviewed by IFOR after all. IFOR ultimately accepted the requirement for review in 2008.

Civil and dual use. According to officials involved with the interagency requirements process, requirements that are identified by the civil community are considered initially to be "civil unique" and may later be determined to have military utility and identified as "dual use." However, the guidance does not define the terms, nor does it state how civil unique or dual-use requirements are to be treated in the process. Even though the guidance does not distinguish between these two terms, some agencies involved in the process have indicated that whether a requirement is considered to be civil unique or dual use should determine how the requirement is funded.
For example, NASA contends that SLR should be considered dual use and that DOD should therefore partially cover the costs of SLR. According to NASA, both the civil community and the military would benefit from SLR because it would improve GPS accuracy. However, some DOD officials disagree. They stated that there are no military requirements for SLR and that it is therefore not a dual-use requirement, implying that it should be funded solely by NASA.

In addition, the guidance provides some information regarding what types of documents should be submitted, but it lacks specificity, resulting in confusion and disagreement among the military and civil agencies involved. The IRP states that cost, risk, and performance trades, and other information, will be submitted in order to defend requirements' feasibility, affordability, and best value for the government. However, the guidance documents do not specify the type, level of detail, or formatting requirements for submissions to IFOR. There has been a disconnect between the Coast Guard's understanding of documentation needs and DOD's documentation expectations. To remedy this, some Coast Guard officials involved with submitting the DASS requirement stated that a list of required reports and their format should be provided to civil agencies. These officials said that they provided IFOR with assessments of six alternatives, but they were told by DOD officials that the analyses were not adequate. In addition, although guidance does not indicate that documents should be submitted using the JCIDS format, Coast Guard officials indicated that some of the studies they provided in support of the DASS requirement submission were not accepted because they did not use that format. Similarly, NASA officials have expressed frustration with the lack of clear and consistent guidance on documentation standards. While NASA officials stated that since 2007 they have provided all the documentation and analyses on SLR requested by IFOR, DOD officials stated that SLR has not been fully developed as a requirement.

The guidance also does not explain in detail the steps in the interagency requirements process. For example, the guidance lacks detail about the formal approvals needed to proceed to the next step in the process and about standards regarding what is to take place during each phase of the process. This has resulted in confusion about next steps for agencies that have submitted requirements, and it may also have contributed to inconsistent implementation of the process.

Approval requirements. There is limited information in the guidance on what formal approvals are required and how they are to be documented, and few details as to when and how these approvals relate to one another. As a result, civil agency officials have indicated that they find it difficult to know when a requirement has been approved to move to the next step in the process or whether it has received final approval. In the case of SLR, in 2007, IFOR released a memo recommending that SLR be included in the GPS III capability development document. However, after some concerns about SLR were identified within DOD, that approval was de facto rescinded. SLR is again pending IFOR review and approval. Similarly, there appears to be some confusion about the ultimate fate of some requirements that have already been included in a capability development document.
For example, some of the aviation-related requirements were included in the GPS III capability development document for later increments of GPS III, which are important to meeting the needs of FAA's Next Generation Air Transportation System program, a satellite-based air traffic management system that is under development and is expected to increase the safety and enhance the capacity of the air transport system. However, some DOD officials report that this capability development document will be treated as the one for GPS IIIA and that requirements not included on GPS IIIA will have to be submitted through JCIDS again on the capability development documents for either GPS IIIB or GPS IIIC.

Phases of the process. The guidance lacks details about specific phases of the interagency requirements process, which may have contributed to inconsistent implementation. For example, the guidance regarding the initial step in the interagency requirements process states, among other things, that civil agencies are to internally identify and validate their requirements. However, the requirement for L1C never went through this phase of the process. Instead, the request resulted from an international agreement and was submitted by the White House. In addition, expertise and experience with requirements and their identification and validation processes vary greatly across government agencies. DOT and DOD officials report that some agencies have documented, disciplined requirements processes. However, other agencies, while representing vital GPS applications and users, have limited experience with requirements processes because they do not typically acquire systems to fulfill their missions. Although it may not be realistic to expect civil agencies to have requirements processes that are as rigorous as DOD's, more detailed guidance on expectations regarding standards for identification and validation of requirements could help ensure that there is more consistency in the first stage of the process.

Lastly, the guidance does not include criteria for funding decisions beyond indicating that sponsoring agencies must pay for their requirements. More specifically, the lack of detail in guidance regarding the required timing of funding commitments has caused confusion. The process for considering civil GPS requirements is intended to maintain fiscal discipline by ensuring that only critical needs are funded and developed. Our past work has shown that requirement add-ons cause cost and schedule growth. Guidance requires that the agency proposing a requirement pay the costs associated with adding it to the GPS satellites, thereby forcing agencies to separate their wants from their needs. IFOR has requested that sponsoring agencies commit to fund a requirement when the requirement proposal is submitted. For example, IFOR requested that the Coast Guard provide a funding commitment for DASS before the requirement enters the JCIDS process. However, information regarding when a funding commitment is required is not included in guidance on the interagency requirements process.

The interagency requirements process relies on individual agencies to identify their own requirements but does not identify PNT needs across civil agencies. For example, the DASS requirement is a secondary mission requirement to support a search and rescue system rather than a performance requirement specific to PNT. While such requirements may fulfill important needs, they do not reflect civil community requirements for PNT capabilities.
Yet there are considerable challenges to identifying needs across agencies. For example, civil agencies have different roles, missions, and priorities, ranging from providing leadership related to food, agriculture, and natural resources to providing the safest, most efficient aerospace system in the world. The civil PNT Executive Committee co-chair pointed out that most civil agencies have not identified PNT requirements for their agencies, which poses a considerable challenge to identifying these requirements across agencies. These challenges have resulted in an agency-specific, uncoordinated approach to identifying PNT needs rather than a coordinated national one.

While there is no standardized process for identifying requirements across civil agencies, we found two efforts under way that are attempting to contribute to the development of a coordinated national approach to identifying PNT requirements. First, DOT officials stated that they are working with civil agencies to identify PNT requirements that represent their stakeholder needs with respect to accuracy, availability, coverage, and integrity. This information would serve as input for the 2010 Federal Radionavigation Plan, a document that reflects official U.S. radionavigation policy, which covers radionavigation systems, including GPS. Second, DOD's National Security Space Office has been working with civil agencies to develop a national PNT architecture to address capability gaps and provide a framework for evaluating and recommending new requirements.

Last year, we reported that the State Department had engaged other planned global navigation satellite system providers bilaterally and multilaterally in pursuit of compatibility with GPS signals and services and interoperability with civil GPS signals and services. The United States has made joint statements of cooperation with several countries and an executive agreement with the European Community, although according to State Department officials, this agreement has not yet been ratified by all European Union members. Additionally, State Department officials reported that they believed they lacked dedicated technical expertise to monitor international activities. State Department officials stated that they would like DOD and civil agencies to dedicate funding and staff positions to international activities, accompanied by a sustained level of senior management support and understanding of the importance of these activities. Furthermore, U.S. firms had raised a concern to the Department of Commerce about the lack of information from the European Commission relating to the process for obtaining licenses to sell equipment that is compatible with Galileo, a space-based global navigation satellite system being developed by the European Union. However, according to the executive agreement with the European Community, subject to applicable export controls, the United States and the European Community are to make sufficient information publicly available to ensure equal opportunity for persons who seek to use these signals, manufacture equipment to use these signals, or provide value-added services that use these signals. State Department officials said that they had no new issues or concerns to add to what we reported in April 2009.
State Department officials also stated that they continue to engage other planned global navigation satellite system providers bilaterally and multilaterally in pursuit of interoperability with civil GPS signals and compatibility with GPS military signals. According to the officials we spoke with, there have been no changes in the number or status of cooperative agreements between the United States and other countries since April 2009. Furthermore, the State Department reported that the number of DOD technical experts available for international discussions about foreign global navigation satellite systems is now sufficient.

Additionally, U.S. GPS industry representatives we met with remain concerned about the lack of information from the European Commission. In July 2009, the Office of the U.S. Trade Representative reported to Congress that industry representatives were concerned about (1) the lack of information on how to secure licenses to sell products, protect intellectual property rights, or both; (2) access to signal test equipment for Galileo's publicly available service; and (3) the lack of information on the three other Galileo PNT services—a service for safety-of-life applications, an encrypted signal for government users, and an encrypted service intended for commercial users. However, according to State Department officials, in spring 2010, the European Commission helped address the first two of these concerns when it published an updated technical document that includes information on the process for licensing intellectual property rights related to Galileo. State Department officials said that the U.S. government is seeking additional clarification on Galileo's newly established intellectual property licensing scheme, which, if it is obtained, should address the first concern. State Department officials explained that the updated technical document addresses the second concern, regarding access to signal test equipment for Galileo's publicly available service, and that the U.S. government will no longer need to pursue the issue.

Conditions have improved for the near-term size and availability of the GPS constellation. While DOD has strengthened acquisition practices for GPS and made concerted efforts to maximize the life of GPS satellites, it still faces many of the same challenges we identified last year, as well as new ones we identified this year. For example, the GPS IIIA program has complex and difficult work ahead as it undertakes assembly, integration, and test efforts, and its schedule may leave little margin to address challenges that may arise. Such issues could affect the Air Force's ability to launch satellites on time, which in turn may affect future GPS constellation availability. Furthermore, because of continued delays with ground control systems and the challenges the Air Force is encountering with enabling quicker procurement of military GPS user equipment, new capabilities may not be delivered to the warfighters when DOD needs them. To better align key decisions and capability deliveries, DOD is now looking more broadly across the GPS enterprise. However, it remains to be seen whether these actions go far enough to synchronize all GPS segments to the maximum extent practicable. For example, while DOD's new Space and Intelligence Office will help ensure that the development and acquisition of all GPS segments are synchronized, this office does not have authority over all military user equipment development.
Consequently, we reiterate our recommendation from our April 2009 report that the Secretary of Defense appoint a single authority to oversee the development of GPS, including DOD space, ground control, and user equipment assets, to ensure that the program is well executed and resourced and that potential disruptions are minimized. Furthermore, we specified that the appointee should have the authority to ensure that all GPS segments are synchronized to the maximum extent practicable and should coordinate with the existing PNT infrastructure to assess and minimize potential service disruptions should the satellite constellation decrease in size for an extended period of time.

Regarding the GPS interagency requirements process, there is still a great deal of confusion about how civil agencies should submit and pay for their requirements. Moreover, this year we found that a lack of comprehensive guidance on the GPS interagency requirements process is a key source of this confusion. Taking steps to clarify the process, documentation requirements, and definitions of key terms would help alleviate this confusion. We recommend that the Secretaries of Defense and Transportation, whose departments co-chair the National Executive Committee for Space-Based Positioning, Navigation, and Timing, develop more comprehensive guidance for the GPS interagency requirements process, including an explanation of key terms, documentation expectations, process steps, requirements approval, and funding commitments.

We provided a draft of this report to the Secretaries of Defense, Commerce, Energy, Homeland Security, State, and Transportation and the Administrator of the National Aeronautics and Space Administration for comment. DOD provided written comments on a draft of this report that are reprinted in appendix III. DOT provided oral comments on a draft of this report.

In written comments, DOD did not concur with our recommendation that the Secretary of Defense and the Secretary of Transportation develop comprehensive guidance for the GPS interagency requirements process, including an explanation of key terms, documentation expectations, process steps, requirements approval, and funding commitments. DOD stated that the actions being taken by IFOR to clarify existing guidance, ranging from the new IFOR charter (signed in May 2010) to a directed review of the IRP, meet the needs identified in the report. DOT generally agreed to consider our recommendation.

The IFOR charter, which was updated on May 26, 2010, includes some notable improvements compared with previous guidance, but it does not address all of the shortcomings we identified. In particular, the revised guidance provides more clarity regarding what documentation should be provided with requirements proposal submissions; IFOR's role in approving or rejecting proposed new requirements; and expectations regarding funding commitments, including the timing of commitments. In addition, the guidance states that requirements will be classified as operational requirements or additional payloads; however, it does not explain what the implications of those classifications are in terms of how the requirements will be treated in the interagency requirements process. The guidance also does not include definitions of civil unique and dual-use requirements, yet there are ongoing deliberations regarding whether SLR is a dual-use requirement.
The revised guidance also lacks information on the type, level of detail, and formatting structure for documentation required with requirements proposal submissions. Lastly, the guidance does not specify how IFOR approvals are to be documented and lacks specificity regarding the stage at which a requirement is officially approved for inclusion on GPS satellites. Given that there is still confusion about how civil agencies should submit and pay for their requirements, we believe our recommendation remains valid that the Secretaries of Defense and Transportation, who are responsible for leading interagency coordination, should provide more comprehensive guidance.

DOD's written comments noted that DOD concurred with a "For Official Use Only" (FOUO) designation for our report, which was its status while in draft. We subsequently worked with DOD to identify and revise specific areas of the report containing FOUO information, and DOD has confirmed that this version of the report is acceptable for public release. We received technical comments from the Departments of Commerce, Energy, State, and Transportation and the National Aeronautics and Space Administration, which have been incorporated where appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; the Secretaries of Defense, Commerce, Energy, Homeland Security, State, and Transportation; the Administrator of the National Aeronautics and Space Administration; and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The major contributors to this report are listed in appendix IV.

In order to assess the status of the U.S. Air Force's efforts to develop and deliver new Global Positioning System (GPS) satellites, the availability of the GPS constellation, and the potential impacts on users if the constellation availability diminishes below its committed level of performance, we performed several tasks. Our work is based on the most current information available as of April 16, 2010. To assess the status of the Department of Defense's (DOD) efforts to develop and deliver new GPS satellites, we reviewed and analyzed current program plans and documentation related to cost, requirements, program direction, and acquisition and launch schedules. We also interviewed officials from the Office of the Assistant Secretary of Defense, Networks and Information Integration; the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics; the Office of the Joint Chiefs of Staff; U.S. Strategic Command; the Air Force Space Command; the Air Force Space and Missile Systems Center's GPS Wing; the Air Force's 2nd Space Operations Squadron; and the Air Staff. In addition, to assess the reliability of the GPS IIIA space vehicle integrated master schedule, we reviewed 5 of 20 supporting project schedules and compared those schedules with relevant best practices as identified in our Cost Estimating and Assessment Guide. The review period for the 5 schedules was from May 2008 to July 2009.
These 5 schedules were selected because they make up the bulk of the work and are most critical to the production of the GPS IIIA space vehicle. This analysis revealed the extent to which the schedules reflected key estimating practices that are fundamental to having a reliable schedule. In conducting this analysis, we interviewed GPS Wing officials and contractor representatives to discuss their use of best practices in creating the program's current schedules.

To assess the availability of the GPS constellation, we did the following:

Interviewed officials from the Air Force Space and Missile Systems Center GPS Wing, the Air Force Space Command, the Air Force's 2nd Space Operations Squadron, and the Department of Energy's National Nuclear Security Administration.

To assess the risks that a delay in the acquisition and fielding of GPS III satellites could result in the U.S. government failing to meet its commitment to a 95 percent probability of maintaining a constellation of 24 operational GPS satellites, we obtained information from the Air Force predicting the reliability of 79 GPS satellites—each of the 32 operational (on-orbit) satellites, 44 future GPS satellites, and 3 residual satellites—as a function of their time on orbit. Each satellite's total reliability function defines the probability that the satellite will still be operational (or in sufficient working order to be made operational) at a given time in the future. This reliability function is generated from the product of two cumulative reliability functions—a wear-out reliability function governed by the cumulative normal distribution and a random reliability function governed by the cumulative Weibull distribution. The reliability function for a specific satellite is defined by a set of four parameters—two that define the cumulative normal distribution and two that define the cumulative Weibull distribution.

Obtained two sets of reliability parameters for each of the 79 satellites. One set of parameters describes the reliability of the satellites based on the "current management" approach—the Air Force's efforts to actively manage satellite subsystems to reduce a satellite's overall consumption of power. The second set of parameters assumed use of a power management approach—shutting off the satellite's second payload once the satellite is not expected to be capable of generating enough power to support both the positioning, navigation, and timing (PNT) mission and the set of missions supported by the second payload. For each of the 44 unlaunched satellites, we also obtained a parameter defining its probability of successful launch and its scheduled launch date. The 44 unlaunched satellites include 12 IIF satellites, 8 IIIA satellites, 8 IIIB satellites, and 16 IIIC satellites; launch of the final IIIC satellite is scheduled for July 2025. Using this information, we generated overall reliability functions for each of the 32 operational, 44 unlaunched, and 3 residual GPS satellites. We discussed with Air Force and Aerospace Corporation representatives, in general terms, how each satellite's normal and Weibull parameters were calculated. However, we did not analyze any of the data used to calculate these parameters.
Developed a Monte Carlo simulation using the reliability function for each of the 32 operational and 44 unlaunched GPS satellites to predict the probability that at least a given number of satellites would be operational as a function of time, based on the GPS launch schedule approved in December 2009. We conducted several runs of our simulation—each run consisting of 10,000 trials—and generated curves depicting the predicted size of the GPS constellation at the 95 percent confidence level as a function of time. (A simplified illustration of this approach appears below.) During last year's review, we compared the results for a 24-satellite constellation with a similar Monte Carlo simulation that the Aerospace Corporation had performed for the Air Force and confirmed that our simulation produced very similar results. We compared our results with the results for the predicted size of the GPS constellation over time (at the 95 percent confidence level) that we had calculated last year using the GPS reliability data and launch schedule approved in March 2009. We then used our Monte Carlo simulation model to examine the impact of a 2-year delay in the launch of all GPS III satellites. We moved each GPS III launch date back by 2 years. We then reran the model and calculated a new curve for the size of the operational constellation as a function of time.

To assess the military services' understanding of the potential impacts on users if the constellation availability diminishes below its committed level of performance, we asked Air Force, Army, Marine Corps, and Navy military service user representatives to provide formal studies and analyses regarding this issue. However, because most military service representatives stated that their services had not conducted formal studies and analyses on this issue, we also obtained written responses to questions regarding this issue from the military service representatives. In addition, to describe civil departments' and agencies' understanding of the potential impacts on users if the constellation availability diminishes below its committed level of performance, we obtained written responses to questions regarding this issue from civil departments and agencies involved with the GPS interagency requirements process, including the National Aeronautics and Space Administration; the Department of Transportation, including the Federal Aviation Administration; the Department of Commerce, including the National Oceanic and Atmospheric Administration and the National Institute of Standards and Technology; and the Department of Homeland Security, including the U.S. Coast Guard.

To assess the progress of efforts to acquire the GPS ground control and user equipment, we interviewed officials who manage and oversee these acquisitions, including officials from the Office of the Assistant Secretary of Defense, Networks and Information Integration; the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics; the Office of the Joint Chiefs of Staff; U.S. Strategic Command; the Air Force Space Command; the Air Force Space and Missile Systems Center's GPS Wing; the Air Force's 2nd Space Operations Squadron; and the Air Staff. We reviewed recent documentation regarding the delivery of capabilities and equipment and assessed the level of synchronization among satellites, ground systems, and user equipment. Our work is based on the most current information available as of April 16, 2010.
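The following is a minimal sketch of the reliability-based simulation approach described above; it is not the model GAO or the Aerospace Corporation actually used. The fleet composition, the four reliability parameters (a mean and standard deviation for the normal wear-out term, and a scale and shape for the Weibull random-failure term), and the launch-success probability are hypothetical values invented for this illustration.

```python
import math
import random

def reliability(t, mu, sigma, eta, beta):
    """Probability that a satellite is still operational after t years.

    Product of a wear-out survival term (cumulative normal) and a
    random-failure survival term (cumulative Weibull), mirroring the
    four-parameter form described above. Values are illustrative only.
    """
    wear_out = 0.5 * math.erfc((t - mu) / (sigma * math.sqrt(2.0)))
    random_failure = math.exp(-((t / eta) ** beta))
    return wear_out * random_failure

# Hypothetical fleet: age is years already on orbit; a negative age means
# the satellite is scheduled to launch that many years in the future.
PARAMS = (11.0, 2.5, 40.0, 1.2)                                # mu, sigma, eta, beta
fleet = [(float(age),) + PARAMS for age in range(25)]          # on orbit now
fleet += [(-float(yrs),) + PARAMS for yrs in (1, 2, 3, 4, 5)]  # future launches
LAUNCH_SUCCESS = 0.95                                          # assumed launch reliability

def operational_count(horizon):
    """One Monte Carlo trial: satellites operational `horizon` years from now."""
    count = 0
    for age, mu, sigma, eta, beta in fleet:
        t = age + horizon
        if t < 0:                                    # not yet launched at the horizon
            continue
        if age < 0 and random.random() > LAUNCH_SUCCESS:
            continue                                 # launch failure
        if random.random() < reliability(t, mu, sigma, eta, beta):
            count += 1
    return count

TRIALS = 10_000
for horizon in (2, 5, 10):
    counts = sorted(operational_count(horizon) for _ in range(TRIALS))
    size_at_95 = counts[int(0.05 * TRIALS)]          # 5th percentile of trial counts
    print(f"{horizon:>2}-year horizon: at least {size_at_95} satellites "
          f"operational with 95 percent confidence")
```

Each trial draws an independent up-or-down outcome for every satellite at the chosen horizon; sorting the trial counts and reading off the 5th percentile gives the constellation size available with 95 percent confidence.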
To assess the GPS interagency requirements process, we (1) reviewed and analyzed guidance on the process and documents related to the status of civil requirements and (2) interviewed officials from the National Aeronautics and Space Administration; the Department of Transportation, including the Federal Aviation Administration; the Department of Commerce, including the National Oceanic and Atmospheric Administration and the National Institute of Standards and Technology; the Coast Guard; the Office of the Assistant Secretary of Defense, Networks and Information Integration; the National Security Space Office; the Air Force Space Command; the Interagency Forum for Operational Requirements; and the National Coordination Office for Space-Based Positioning, Navigation, and Timing. Our work is based on the most current information available as of March 10, 2010.

To assess GPS coordination efforts with the international global PNT community, we interviewed officials at the Department of State and the Air Force Space and Missile Systems Center's GPS Wing and some industry representatives. We also reviewed a July 2009 report to Congress from the Office of the U.S. Trade Representative. Our work is based on the most current information available as of March 2, 2010.

We conducted this performance audit from July 2009 to September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Our research has identified nine practices associated with effective schedule estimating: (1) capturing all activities, (2) sequencing all activities, (3) assigning resources to all activities, (4) establishing the duration of all activities, (5) integrating schedule activities horizontally and vertically, (6) establishing the critical path for all activities, (7) identifying float between activities, (8) conducting a schedule risk analysis, and (9) updating the schedule using logic and durations to determine the dates. The GPS IIIA space vehicle integrated master schedule consists of a master schedule with 20 embedded project schedules representing individual integrated product teams. We selected 5 of these project schedules for review because they make up the bulk of the work and they are most critical to the production of the GPS IIIA space vehicle. Specifically, we selected the Antenna Element, Bus, General Dynamics, Navigation Unit Panel, and Launch Operations project schedules and assessed them against the nine best practices for schedule development (see table 4). The review period for the 5 schedules was from May 2008 to July 2009.

A well-defined schedule helps to identify the amount of human capital and fiscal resources that are needed to execute the program, and thus is an important contribution to a reliable cost estimate. Our research has identified a range of best practices associated with effective schedule estimating. These practices are as follows:

Capturing all activities: The schedule should reflect all activities (steps, events, outcomes, etc.) as defined in the program's work breakdown structure, including activities to be performed by both the government and its contractors.
Sequencing all activities: The schedule should be planned so that it can meet the program's critical dates. To meet this objective, activities need to be logically sequenced in the order in which they are to be carried out. In particular, activities that must finish prior to the start of other activities (i.e., predecessor activities) and activities that cannot begin until other activities are completed (i.e., successor activities) should be identified. By doing so, interdependencies among activities that collectively lead to the accomplishment of events or milestones can be established and used as a basis for guiding work and measuring progress.

Assigning resources to all activities: The schedule should realistically reflect what resources (i.e., labor, material, and overhead) are needed to do the work, whether all required resources will be available when they are needed, and whether any funding or time constraints exist.

Establishing the duration of all activities: The schedule should reflect how long each activity will take to execute. In determining the duration of each activity, the same rationale, data, and assumptions used for cost estimating should be used for schedule estimating. Further, these durations should be as short as possible, and they should have specific start and end dates. Excessively long periods needed to execute an activity should prompt further decomposition of the activity so that shorter execution durations will result.

Integrating schedule activities horizontally and vertically: The schedule should be horizontally integrated, meaning that it should link the products and outcomes associated with already sequenced activities. These links are commonly referred to as handoffs and serve to verify that activities are arranged in the right order to achieve aggregated products or outcomes. The schedule should also be vertically integrated, meaning that traceability exists among varying levels of activities and supporting tasks and subtasks. Such mapping or alignment among levels enables different groups to work to the same master schedule.

Establishing the critical path for all activities: Using scheduling software, the critical path—the longest-duration path through the sequenced list of activities—should be identified. The establishment of a program's critical path is necessary for examining the effects of any activity slipping along this path. Potential problems that may occur on or near the critical path should also be identified and reflected in the scheduling of time for high-risk activities.

Identifying float between activities: The schedule should identify float—the time that a predecessor activity can slip before the delay affects successor activities—so that schedule flexibility can be determined. As a general rule, activities along the critical path have the least amount of float. (A minimal worked example of the critical path and float computations follows this list.)

Conducting a schedule risk analysis: A schedule risk analysis uses a good critical path method schedule and data about project schedule risks, as well as Monte Carlo simulation (statistical) techniques, to predict the level of confidence in meeting a program's completion date, the amount of time needed for a level of confidence, and the identification of high-priority risks. This analysis focuses not only on critical path activities but also on other schedule paths that may become critical. A schedule/cost risk assessment recognizes the interrelationship between schedule and cost and captures the risk that schedule durations and cost estimates may vary because of, among other things, limited data, optimistic estimating, technical challenges, lack of qualified personnel, and other external factors. As a result, the baseline schedule should include a buffer or a reserve of extra time. Schedule reserve for contingencies should be calculated by performing a schedule risk analysis. As a general rule, the reserve should be held by the project manager and applied as needed to those activities that take longer than scheduled because of the identified risks. Reserves of time should not be apportioned in advance to any specific activity, since the risks that will actually occur and the magnitude of their impact are not known.

Updating the schedule using logic and durations to determine the dates: The schedule should use logic and durations to reflect realistic start and completion dates for program activities. The schedule should be continually monitored to determine when forecasted completion dates differ from the planned dates, which can be used to determine whether schedule variances will affect downstream work. Maintaining the integrity of the schedule logic is not only necessary to reflect true status but is also required before conducting a schedule risk analysis. The schedule should avoid logic overrides and artificial constraint dates that are chosen to create a certain result on paper. To ensure that the schedule is properly updated, individuals trained in critical path method scheduling should be responsible for updating the schedule.
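The forward-pass and backward-pass computations behind the critical path and float practices can be sketched briefly. The five-activity network, durations, and dependencies below are invented for illustration and are not drawn from any GPS project schedule.

```python
# Activity network: name -> (duration in working days, predecessor activities).
ACTIVITIES = {
    "A": (10, []),
    "B": (15, ["A"]),
    "C": (8, ["A"]),
    "D": (12, ["B", "C"]),
    "E": (5, ["D"]),
}

# Forward pass: earliest start and finish for each activity.
early = {}
for act, (dur, preds) in ACTIVITIES.items():   # insertion order is topological here
    es = max((early[p][1] for p in preds), default=0)
    early[act] = (es, es + dur)

project_finish = max(ef for _, ef in early.values())

# Backward pass: latest start and finish that do not delay the project.
late = {}
for act in reversed(list(ACTIVITIES)):
    dur, _ = ACTIVITIES[act]
    successors = [a for a, (_, ps) in ACTIVITIES.items() if act in ps]
    lf = min((late[s][0] for s in successors), default=project_finish)
    late[act] = (lf - dur, lf)

# Total float: how far an activity can slip before delaying the project.
for act in ACTIVITIES:
    total_float = late[act][0] - early[act][0]
    marker = "  <- on critical path" if total_float == 0 else ""
    print(f"{act}: earliest start {early[act][0]:>2}, "
          f"float {total_float:>2} working days{marker}")
```

Activities with zero total float form the critical path (A, B, D, and E here); activity C could slip 7 working days before delaying the project's finish date.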
Table 5 presents the findings for the five project schedules for each best practice, along with an overall score for the integrated master schedule on each best practice. Tables 6 through 10 provide details on the individual project schedule assessments. All durations are given in working time; that is, there are 5 working days per week, 22 working days per month, and 260 working days per year.

In addition to the contact named above, key contributors to this report were Art Gallegos, Assistant Director; Greg Campbell; Tisha Derricotte; Steven Hernandez; Laura Holliday; Jason Lee; Sigrid McGinty; Karen Richey; Jay Tallon; Hai Tran; Alyssa Weir; and Rebecca Wilson.

The Global Positioning System (GPS) provides positioning, navigation, and timing (PNT) data to users worldwide. The U.S. Air Force, which is responsible for GPS acquisition, is in the process of modernizing the system. Last year GAO reported that it was uncertain whether the Air Force could acquire new satellites in time to maintain GPS service without interruption. GAO was asked to assess (1) the status of Air Force efforts to develop and deliver new GPS satellites, the availability of the GPS constellation, and the potential impacts on users if the constellation availability diminishes below its committed level of performance; (2) efforts to acquire the GPS ground control and user equipment necessary to leverage GPS satellite capabilities; (3) the GPS interagency requirements process; and (4) coordination of GPS efforts with the international PNT community. To do this, GAO analyzed program documentation and Air Force data on the GPS constellation, and interviewed officials from DOD and other agencies.

The Air Force continues to face challenges in launching its IIF and IIIA satellites as scheduled.
The first IIF satellite was launched in May 2010—a delay of 6 additional months, for an overall delay of almost 3 1/2 years—and the program faces risks that could affect subsequent IIF satellites and launches. GPS IIIA appears to be on schedule, and the Air Force continues to implement an approach intended to overcome the problems experienced with the IIF program. However, the IIIA schedule remains ambitious and could be affected by risks such as the program's dependence on a ground system that will not be completed until after the first IIIA launch. The GPS constellation availability has improved, but in the longer term, a delay in the launch of the GPS IIIA satellites could still reduce the size of the constellation to fewer than 24 operational satellites—the number that the U.S. government commits to—which might not meet the needs of some GPS users.

Multiyear delays in the development of GPS ground control systems are extensive. In addition, although the Air Force has taken steps to enable quicker procurement of military GPS user equipment, there are significant challenges to its implementation. This has had a significant impact on DOD, as all three GPS segments—space, ground control, and user equipment—must be in place to take advantage of new capabilities, such as improved resistance to jamming and greater accuracy. DOD has taken some steps to better coordinate all GPS segments. These steps involve laying out criteria and establishing visibility over a spectrum of procurement efforts. But they do not go as far as GAO recommended last year in terms of establishing a single authority responsible for ensuring that all GPS segments are synchronized to the maximum extent practicable. Such an authority is warranted given the extent of delays, problems with synchronizing all GPS segments, and the importance of new capabilities to military operations. As a result, GAO reiterates the need to implement its prior recommendation.

The GPS interagency requirements process, which is co-chaired by officials from DOD and DOT, remains relatively untested, and civil agencies continue to find the process confusing. This year GAO found that a lack of comprehensive guidance on the GPS interagency requirements process is a key source of this confusion and has contributed to other problems, such as disagreement about and inconsistent implementation of the process. In addition, GAO found that the interagency requirements process relies on individual agencies to identify their own requirements rather than identifying PNT needs across agencies.

The Department of State continues to be engaged internationally in pursuit of civil signal interoperability and military signal compatibility and has not identified any new concerns in these efforts since GAO's 2009 report. Challenges remain for the United States in ensuring that GPS is compatible with other new, potentially competing global space-based PNT systems.

GAO recommends that the Department of Defense (DOD) and the Department of Transportation (DOT) develop comprehensive guidance for the GPS interagency requirements process. DOD did not concur with the recommendation, citing actions under way. DOT generally agreed to consider it. GAO believes the recommendation remains valid.
Multiemployer plans are established pursuant to collectively bargained pension agreements negotiated between labor unions representing employees and two or more employers, and they are generally jointly administered by trustees from both labor and management. Multiemployer plans typically cover groups of workers in such industries as trucking, building and construction, and retail food sales. These plans provide participants limited benefit portability in that they allow workers the continued accrual of defined benefit pension rights when they change jobs, if their new employer is also a sponsor of the same plan. This arrangement can be particularly advantageous in industries like construction, where job change within a single industry is frequent over the course of a career. Multiemployer plans are distinct from single-employer plans, which are established and maintained by only one employer and may or may not be collectively bargained. Multiemployer plans also differ from so-called multiple-employer plans, which generally are not established through collective bargaining agreements and in which many plans have separate funding accounts for each employer.

Since the enactment of the National Labor Relations Act (NLRA) in 1935, collective bargaining has been the primary means by which workers can negotiate, through unions, the terms of their pension plan. In 1935, NLRA required employers to bargain with union representatives over wages and other conditions of employment, and subsequent court decisions established that employee benefit plans could be among those conditions. The Taft-Hartley Act amended NLRA to establish terms for negotiating such employee benefits and placed certain restrictions on the operation of any plan resulting from those negotiations. For example, employer contributions cannot be made to a union or its representative but must be made to a trust that has an equal balance of union and employer representation.

Since its enactment in 1974, the Employee Retirement Income Security Act (ERISA), which Congress passed to protect the interests of participants and beneficiaries covered by private sector employee benefit plans, has regulated multiemployer defined benefit pensions. Title IV of ERISA created PBGC as a U.S. Government corporation to insure the pensions of participants and beneficiaries in private sector defined benefit plans. Congress enacted the Multiemployer Pension Plan Amendments Act of 1980 (MPPAA) to protect the pensions of participants in multiemployer plans by establishing a separate PBGC multiemployer plan insurance program and by requiring any employer wanting to withdraw from a multiemployer plan to be liable for its share of the plan's unfunded liability. This amount is based upon a proportional share of the plan's unfunded vested benefits. Liabilities that cannot be collected from a withdrawn employer are "rolled over" and must eventually be funded by the plan's remaining employers.

PBGC operates distinct insurance programs for multiemployer plans and single-employer plans, which have separate insurance funds, different benefit guarantee rules, and different insurance coverage rules. The two insurance programs and PBGC's operations are financed through premiums paid annually by plan sponsors, investment returns on PBGC assets, assets acquired from terminated single-employer plans, and recoveries from employers responsible for underfunded terminated single-employer plans.
Premium revenue totaled about $973 million in 2003, of which $948 million was paid into the single-employer program and $25 million into the multiemployer program.

Over the last few years, the finances of PBGC's single-employer insurance program have taken a severe turn for the worse. Although the program registered a $9.7 billion accumulated surplus as recently as 2000, it reported an $11.2 billion accumulated deficit for fiscal year 2003, primarily brought on by the termination of a number of large underfunded pension plans. Several underlying factors contributed to the severity of the plans' underfunded condition at termination, including a sharp decline in the stock market, which reduced plan asset values, and a general decline in interest rates, which increased the cost of terminating defined benefit pension plans. Because of its accumulated deficit, the significant risk that other large underfunded plans might terminate, and other structural factors, we designated PBGC's single-employer pension insurance program as a "high risk" program and added it to the list of agencies and major programs that we believe need urgent attention.

In general, the same ERISA funding rules apply to both single-employer and multiemployer defined benefit pension plans. However, there are some important differences. For example, while single-employer plan sponsors can adjust their pension contributions to meet their needs, within the overall set of ERISA and Internal Revenue Code (IRC) rules, individual employers in multiemployer plans cannot as easily adjust their plan contributions. For multiemployer plans, contribution levels are usually negotiated through the collective bargaining process and are fixed for the term of the collective bargaining agreement, typically 2 to 3 years. Benefit levels are generally also fixed by the contract or by the plan trustees. Employer contributions to multiemployer plans are typically made as a set dollar amount per hour of covered work. For many multiemployer plans, contributions are directly tied to the total number of hours worked and, thus, to the number of active plan participants. Other things being equal, reduced employment of active participants will result in lower contributions and reduced plan funding.

The U.S. employer-sponsored pension system has historically been an important component of total retirement income, providing roughly 18 percent of aggregate retirement income in 2000. However, millions of workers continue to face the prospect of retirement with no income from an employer-sponsored pension. The percentage of the workforce with pension coverage has been near 50 percent since the 1970s. Lower-income workers, part-time employees, employees of small businesses, and younger workers typically have lower rates of pension coverage. Retirees with pension incomes are more likely to avoid poverty. For example, 21 percent of retired persons without pension incomes had incomes below the federal poverty level, compared with 3 percent of those with pension incomes. Of those workers covered by a pension, coverage is increasingly being provided by defined contribution (DC) pension plans. Surveys have reported a worker preference for defined contribution plans, with employers citing workers' preference for transparency of plan value and improved benefit portability. As of 1998, the most recent published data available, 27 percent of the private sector labor force was covered by a DC plan as its primary pension plan, up from 7 percent in 1979.
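The two funding mechanics described above, hour-based employer contributions and a withdrawing employer's proportional share of unfunded vested benefits, can be illustrated with a minimal sketch. All rates, shares, and dollar amounts below are hypothetical, and actual MPPAA withdrawal-liability allocation methods are more involved than this single proportional rule.

```python
# Hour-based contributions: a hypothetical $2.50 per hour of covered work.
CONTRIBUTION_RATE = 2.50

def annual_contribution(covered_hours):
    """Employer's contribution for the year, tied directly to hours worked."""
    return CONTRIBUTION_RATE * covered_hours

# Withdrawal liability: a withdrawing employer's proportional share of the
# plan's unfunded vested benefits. Shares here reflect a hypothetical
# allocation by contribution history; statutory allocation methods vary.
UNFUNDED_VESTED_BENEFITS = 100_000_000
CONTRIBUTION_SHARES = {"Employer A": 0.40, "Employer B": 0.35, "Employer C": 0.25}

def withdrawal_liability(employer):
    return CONTRIBUTION_SHARES[employer] * UNFUNDED_VESTED_BENEFITS

# 150 active workers at 2,000 covered hours each -> $750,000 contributed.
print(annual_contribution(150 * 2_000))
# Employer C withdraws with a 25 percent share -> $25,000,000 liability,
# which, if uncollectible, rolls over to the plan's remaining employers.
print(withdrawal_liability("Employer C"))
```

The sketch also shows why reduced covered employment flows directly into reduced plan funding: contributions scale linearly with hours worked.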
While multiemployer plan funding has exhibited considerable stability over the past 2 decades, available data suggest that many plans have recently experienced significant funding declines. Since 1980, aggregate multiemployer plan funding has been stable, with the majority of plans funded above 90 percent of total liabilities and average funding at 105 percent by 2000. Recently, however, it appears that a combination of stock market declines coupled with low interest rates and poor economic conditions has reduced the assets and increased the liabilities of many multiemployer plans. In PBGC's 2003 Annual Report, the agency estimated that total underfunding of underfunded multiemployer plans reached $100 billion by year-end, up from $21 billion in 2000, and that its multiemployer program had recorded a year-end 2003 deficit of $261 million, the first deficit in more than 20 years. While most multiemployer plans continue to provide benefits to retirees at unreduced levels, the agency has also increased its forecast of the number of plans that will likely need financial assistance, from 56 plans in 2001 to 62 plans in 2003. Private survey data are consistent with this trend, with one survey by an actuarial consulting firm showing the percentage of fully funded client plans declining from 83 percent in 2001 to 67 percent in 2002. In addition, long-standing declines in the number of plans and in worker participation continue. The number of insured multiemployer plans has dropped by a quarter since 1980 to fewer than 1,700 plans in 2003, the latest data available. Although in 2001 multiemployer plans in the aggregate covered 4.7 million active participants, representing about a fifth of all defined benefit plan participants, this number has dropped by 1.4 million since 1980.

Aggregate funding for multiemployer pension plans remained stable during the 1980s and 1990s. By 2000, the majority of multiemployer plans reported assets exceeding 90 percent of total liabilities, with the average plan funded at 105 percent of liabilities. As shown in figure 1, the aggregate net funding of multiemployer plans grew from a deficit of about $12 billion in 1980 to a surplus of nearly $17 billion in 2000. From 1980 to 2000, multiemployer plan assets grew at an annual average rate of 11.7 percent, to about $330 billion, exceeding the average 10.5 percent annual growth rate of single-employer plan assets. During the same time period, liabilities for multiemployer and single-employer pensions grew at average annual rates of about 10.2 percent and 9.9 percent, respectively.

A number of factors appear to have contributed to the funding stability of multiemployer plans, including the following:

Investment Strategy—Historically, multiemployer plans appear to have invested more conservatively than their single-employer counterparts. Although comprehensive data are not available, some pension experts have suggested that defined benefit plans in the aggregate are more than 60 percent invested in equities, which are associated with greater risk and volatility than many fixed-income securities. Experts have stated that, in contrast, equity holdings generally comprise 55 percent or less of the assets of most multiemployer plans.

Contribution Rates—Unlike single-employer plans, multiemployer plan funds receive steady contributions from employers because those amounts generally have been set through multiyear collective bargaining contracts.
Participating employers, therefore, have less flexibility to vary their contributions in response to changes in firm performance, economic conditions, and other factors. This regular contribution income is in addition to any investment return and helps multiemployer plans offset any declines in investment returns. Risk Pooling—The pooling of risk inherent in multiemployer pension plans may also have buffered them against financial shocks and recessions since the contributions to the plans are less immediately affected by the economic performance of individual employer plan sponsors. Multiemployer pension plans typically continue to operate long after any individual employer goes out of business because the remaining employers in the plan are jointly liable for funding the benefits of all vested participants. Greater Average Plan Size—The stability of multiemployer plans may also be due in part to their size. Large plans (1,000 or more participants) constitute a greater proportion of multiemployer plans than of single- employer plans. (See figs. 2 and 3.) While 55 percent of multiemployer plans are large, only 13 percent of single-employer plans are large and 73 percent of single-employer plans have had fewer than 250 participants, as shown in figure 2. However, distribution of participants by plan size for multiemployer and single-employer plans is more comparable, with over 90 percent of both multiemployer and single-employer participants in large plans, as shown in figure 3. Although data limitations preclude any comprehensive assessment, available evidence suggests that since 2000, many multiemployer plans have recently experienced significant reductions in their funded status. PBGC estimated in its 2003 Annual Report that the aggregate deficit of underfunded multiemployer plans had reached $100 billion by year-end, up from a $21 billion deficit at the start of 2000. In addition, PBGC reported its own multiemployer insurance program deficit of $261 million for fiscal year 2003, the first deficit since 1981 and its largest ever. (See fig. 4.) While most multiemployer plans continue to provide benefits to retirees at unreduced levels, PBGC has also reported that the deficit was primarily caused by new and substantial probable losses, increasing the number of plans it classifies as likely requiring financial assistance in the near future from 58 plans with expected liabilities of $775 million in 2002 to 62 plans with expected liabilities of $1.25 billion in 2003. Private survey data and anecdotal evidence are consistent with this assessment of multiemployer funding losses. One survey by an actuarial consulting firm showed that the percentage of its multiemployer client plans that were fully funded declined from 83 percent in 2001 to 67 percent in 2002. Other, more anecdotal evidence suggests increased difficulties for multiemployer plans. Discussions with plan administrators have indicated that there has been an increase in the number of plans with financial difficulties in recent years, with some plans reducing or temporarily freezing the future accruals of participants. In addition, IRS officials recently reported an increase in the small number of multiemployer plans (less than 1 percent of all multiemployer plans) requesting tax-specific waivers that would provide plans relief from current funding shortfall requirements. 
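The funding measures cited in this discussion can be tied together with simple arithmetic. In the sketch below, the roughly $313 billion liability figure is an inference (about $330 billion in assets minus the roughly $17 billion surplus), and the roughly $36 billion 1980 asset base is backed out of the cited 11.7 percent growth rate; both are illustrative inferences rather than reported values.

```python
def funded_ratio(assets, liabilities):
    # Assets as a share of total liabilities, for a plan or the system.
    return assets / liabilities

def avg_annual_growth(start, end, years):
    # Compound average annual growth rate over the period.
    return (end / start) ** (1 / years) - 1

assets_2000 = 330.0                    # billions of dollars, as cited above
liabilities_2000 = assets_2000 - 17.0  # inferred from the ~$17B net surplus
assets_1980 = 36.0                     # inferred from the 11.7% growth rate

print(f"Implied 2000 aggregate funded ratio: "
      f"{funded_ratio(assets_2000, liabilities_2000):.1%}")
print(f"Implied 1980-2000 asset growth: "
      f"{avg_annual_growth(assets_1980, assets_2000, 20):.1%}")
# -> about 105% and 11.7%, consistent with the figures cited in the text.
```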
As with single-employer plans, falling interest rates, stock market declines, and generally weak economic conditions have contributed to the funding difficulties of many multiemployer plans. The decline in interest rates in recent years has increased pension plan liabilities for DB plans in general, because the value of promised future benefits is larger when computed using a lower interest rate. At the same time, declining stock markets decreased the value of any equities held in multiemployer plan portfolios to meet those obligations. Finally, because multiemployer plan contributions are usually based on the number of hours worked by active participants, any reduction in their employment will reduce employer contributions to the plan. Over the past 2 decades, the multiemployer system has experienced a steady decline in the number of plans and in the number of active participants. In 1980, there were 2,244 plans; by 2003, the number had fallen to 1,631, a decline of about 27 percent. While a portion of the decline in the number of plans can be explained by consolidation through mergers, few new plans have been formed: only 5, in fact, since 1992. Meanwhile, the number of active multiemployer plan participants has declined both in relative and absolute terms. By 2001, active participants in multiemployer pension plans made up only about 4.1 percent of the private sector workforce, down from 7.7 percent in 1980 (see fig. 5), with the total number of active participants decreasing from about 6.1 million to about 4.7 million. Finally, as the number of active participants has declined, the number of retirees has increased, from about 1.4 million to 2.8 million, and this increase has led to a decline in the ratio of active (working) participants to retirees in multiemployer plans. By 2001, there were about 1.7 active participants for every retiree, compared with 4.3 in 1980. (See fig. 6.) While the trend is also evident among single-employer plans, the decline in the ratio of active workers to retirees affects multiemployer funding more directly because employer contributions are tied to active employment. PBGC's role regarding multiemployer plans includes monitoring plans for financial problems, providing technical and financial assistance to troubled plans, and guaranteeing a minimum level of benefits to participants in insolvent plans. For example, PBGC annually reviews the financial condition of multiemployer plans to identify those that may have potential financial problems in the near future. Agency officials told us that troubled plans often solicit the agency's technical assistance because, under the multiemployer framework, affected parties have a vested interest in a plan's survival. Occasionally, PBGC is asked to serve as a facilitator, working with all the parties associated with the troubled plan to improve its financial status. Examples of such assistance by PBGC include facilitating the merger of troubled plans into one stronger plan and the "orderly shutdown" of plans, allowing the affected employers to continue to operate and pay benefits until all liabilities are paid. Unlike its role in the single-employer program, where PBGC trustees weak plans and pays benefits directly to participants, PBGC does not take over the administration of multiemployer plans; instead, upon application, it provides financial assistance in the form of loans when plans become insolvent and are unable to pay benefits at PBGC-guaranteed levels.
Such financial assistance is infrequent; for example, PBGC has made loans totaling $167 million to 33 multiemployer plans since 1980, compared with 296 trusteed terminations of single-employer plans and PBGC benefit payments of over $4 billion in 2002 and 2003 alone. PBGC officials believe that the low frequency of PBGC financial assistance to multiemployer plans is likely due to specific features of the multiemployer insurance regulatory framework: (1) the employers sponsoring the plan share the risk of providing benefits to all participants in the plan, and (2) benefit guarantees are set at a lower level for the multiemployer insurance program than for the single-employer program. Agency officials say that together these features encourage the affected parties to collaborate on their own to address the plan's financial difficulties. Several of PBGC's functions regarding its multiemployer program and its single-employer program are similar. For example, under both programs PBGC monitors the financial condition of all plans to identify those that are at risk of requiring financial assistance. The agency maintains a database of financial information about such plans that draws its data from both PBGC premium filings and the Form 5500. Using an automated screening process that measures each plan against funding and financial standards, the agency determines which plans may be at risk of termination or insolvency. For both programs, PBGC also annually identifies plans that it considers probable or reasonably possible liabilities and enumerates their aggregate unfunded liabilities in the agency's annual financial statements for each program. The type of assistance PBGC provides to troubled plans through its multiemployer program is shaped to a degree by the program's definition of the "insurable event." PBGC insures against multiemployer plan insolvency. A multiemployer plan is insolvent when its available resources are not sufficient to pay benefits at PBGC's multiemployer guaranteed level for 1 year. In such cases, PBGC will provide the needed financial assistance in the form of a loan. If the plan recovers from insolvency, it must begin repaying the loan on a commercially reasonable schedule in accordance with regulations. Under MPPAA, unlike under its single-employer authority, PBGC does not take over or otherwise assume responsibility for the liabilities of a financially troubled multiemployer plan. PBGC sometimes provides technical assistance to help multiemployer plan administrators improve their funding status or to address other issues. Plan administrators may contact PBGC's customer service representatives at designated offices to obtain assistance on such matters as premiums, plan terminations, and general legal questions related to PBGC. Agency officials told us that on a few occasions PBGC has worked with plan administrators to facilitate plan mergers, "orderly shutdowns," and other arrangements to protect plan participants' benefits. For example, in 1997, PBGC worked with the failing Local 675 Operating Engineers Pension Fund and the Operating Engineers Central Pension Fund to effect a merger of the two plans. However, PBGC officials also told us that the majority of mergers are crafted by private sector parties and have no substantial PBGC involvement. PBGC has also on occasion assisted in the orderly shutdown of plans.
For example, agency officials told us that, in 2001, they helped facilitate the shutdown of the severely underfunded Buffalo Carpenters' Pension Fund. PBGC has the authority to approve certain plan rules governing withdrawal liability payments and did so in this case, approving the plan's request to lower its annual payments, which made it possible for the employers to remain in business and pay benefits until all liabilities were paid. In those cases where a multiemployer plan cannot pay guaranteed benefits, PBGC provides financial assistance in the form of a loan to allow the plan to continue to pay benefits at the level guaranteed by PBGC. A multiemployer plan need not be terminated to qualify for PBGC loans, but it must be insolvent, and it is allowed to reduce or suspend payment of the portion of benefits that exceeds the PBGC guarantee level. The number of loans and the amount of financial assistance from PBGC to multiemployer plans have been small in comparison with the benefits paid out under its single-employer program. Since 1980, the agency has provided loans to 33 plans totaling $167 million. In 2003, PBGC provided $5 million in loans to 24 multiemployer plans. This compares with 296 trusteed terminations of single-employer plans and PBGC benefit payments of over $4 billion to single-employer plan beneficiaries in 2002 and 2003 alone. PBGC officials say that this lower frequency of financial assistance is primarily due to key features of the multiemployer regulatory framework. First, in comparison with the framework governing the single-employer program, the regulatory framework governing multiemployer plans places greater financial risks on employers and workers and relatively less on PBGC. For example, in the event of the bankruptcy of an employer in a multiemployer plan, the other employers in the plan remain responsible for funding all plan benefits. Under the single-employer program, a comparable employer bankruptcy could leave PBGC responsible for any plan liabilities up to the PBGC-guaranteed level. In addition, the law provides a disincentive for employers seeking to withdraw from an underfunded plan by imposing a withdrawal liability based on the employer's share of the plan's unfunded vested benefits. Another key feature is that multiemployer plan participants also bear greater risk than their single-employer counterparts because PBGC guarantees benefits for multiemployer pensioners at a much lower dollar amount than for single-employer pensioners: about $13,000 annually for 30 years of service for the former, compared with about $44,000 annually per retiree at age 65 for the latter (see the sketch below). PBGC officials explained that this greater financial risk on employers and lower guaranteed benefit level for participants in practice create incentives for employers, participants, and their collective bargaining representatives to avoid insolvency and to collaborate in trying to find solutions to the plan's financial difficulties. The smaller size of PBGC's multiemployer program might also contribute to the lower frequency of assistance. The multiemployer program's $1 billion in assets and $1.3 billion in liabilities account for a relatively small portion of PBGC's total assets and liabilities, representing less than 3 percent of the total. Further, the multiemployer program covers just 22 percent of all defined benefit plan participants. There are also far fewer plans in the multiemployer program, about 1,700, compared with about 30,000 single-employer plans.
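To make the guarantee comparison above concrete, the sketch below applies the multiemployer guarantee formula as we understand it to have stood after the December 2000 amendments: 100 percent of the first $11 of a participant's monthly benefit accrual rate per year of service, plus 75 percent of the next $33. That formula is not stated in the text above and should be confirmed against ERISA section 4022A; the $44 accrual rate in the example is a hypothetical input chosen to reach the formula's cap.

```python
def multiemployer_guarantee(monthly_accrual_rate, years_of_service):
    # Annual guarantee under our reading of the post-2000 formula: 100% of
    # the first $11 of the monthly benefit accrual rate plus 75% of the
    # next $33, times years of service, times 12 months.
    guaranteed_rate = (min(monthly_accrual_rate, 11)
                       + 0.75 * max(min(monthly_accrual_rate - 11, 33), 0))
    return guaranteed_rate * years_of_service * 12

# Hypothetical participant at the formula's cap: a $44 monthly accrual
# rate and 30 years of service.
print(f"${multiemployer_guarantee(44, 30):,.0f} per year")  # -> $12,870
```

The result, $12,870 a year, is consistent with the roughly $13,000 guarantee for 30 years of service cited above.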
Other things being equal, there are fewer opportunities for potential PBGC assistance to multiemployer plans than to single-employer plans. A number of factors pose challenges to the long-term prospects of the multiemployer pension plan system. Some of these factors are specific to the features and nature of multiemployer plans, including a regulatory framework that some employers may perceive as financially riskier and less flexible than the frameworks covering other types of pension plans. For example, compared with a single-employer plan, an employer covered by a multiemployer plan cannot easily adjust annual plan contributions in response to the firm's own financial circumstances. Collective bargaining itself, a necessary aspect of the multiemployer plan model and another factor affecting plans' prospects, has also been in long-term decline, suggesting fewer future opportunities for new plans to be created or existing ones to expand. As of 2003, union membership, a proxy for collective bargaining coverage, accounted for less than 9 percent of the private sector labor force and has been steadily declining since 1953. Experts have identified other challenges to the future prospects of defined benefit plans generally, including multiemployer plans. These include the growing trend among employers to choose defined contribution plans over DB plans, including multiemployer plans; the continued growth in the life expectancy of American workers, which results in participants spending more years in retirement and thus increases benefit costs; and increases in employer-provided health insurance costs, which raise employers' total compensation costs generally, making them less willing or able to increase other elements of compensation, like wages or pensions. Some factors that raise questions about the long-term viability of multiemployer plans are specific to certain features of multiemployer plans themselves, including features of the regulatory framework that some employers may well perceive as less flexible and financially riskier than the features of other types of pension plans. For example, an employer covered by a multiemployer pension plan typically does not have the funding flexibility of a comparable employer sponsoring a single-employer plan. In many instances, the employer covered by the multiemployer plan cannot as easily adjust annual plan contributions in response to the firm's own financial circumstances. This is because contribution rates are often fixed for periods of time by the provisions of the collective bargaining agreement. Employers that value such flexibility might be less inclined to participate in a multiemployer plan. Employers in multiemployer plans may also face greater financial risks than those in other forms of pension plans. For example, an employer sponsor of a multiemployer plan that wishes to withdraw from the plan becomes liable for its share of pension plan benefits not covered by plan assets at the time of withdrawal, rather than when the plan terminates. Employers in plans with unfunded vested benefits face an immediate withdrawal liability that can be costly, while employers in fully funded plans face the potential of costly withdrawal liability if the plan becomes underfunded in the future. Thus, an employer's pension liabilities become a function not only of the employer's own performance but also of the financial health of other employer plan sponsors.
These additional sources of potential liability can be difficult to predict, increasing employers' level of uncertainty and risk. Some employers may hesitate to accept such risks if they can sponsor other plans that do not have them, such as 401(k) type defined contribution plans. The future growth of multiemployer plans is also predicated on the future growth prospects of collective bargaining. Collective bargaining is an inherent feature of the multiemployer plan model. Collective bargaining, however, has been declining in the United States since the early 1950s. Currently, union membership, a proxy for collective bargaining coverage, accounts for less than 9 percent of the private sector labor force. Union membership accounted for about 19 percent of the civilian workforce in 1980 and about 27 percent in 1953. Pension experts have suggested a variety of challenges faced by today's defined benefit pension plans, including multiemployer plans. These include the continued general shift away from DB plans to defined contribution plans and the increased longevity of the U.S. population, which translates into a lengthier and more costly retirement. In addition, the continued escalation of employer health insurance costs has placed pressure on the compensation costs of employers, including pensions. Employers have tended to move away from DB plans and toward DC plans since the mid-1980s. The number of PBGC-insured defined benefit plans declined from 97,683 in 1980 to 31,135 in 2002. (See fig. 7.) The number of defined contribution plans sponsored by private employers nearly doubled, from 340,805 in 1980 to 673,626 in 1998. Along with this continuing trend toward sponsoring DC plans, there has also been a shift in the mix of plans in which private sector workers participate. Labor reports that the percentage of private sector workers who participated in a primary DB plan decreased from 38 percent in 1980 to 21 percent by 1998, while the percentage of such workers who participated in a primary DC plan increased from 8 to 27 percent during this same period. Moreover, these same data show that, by 1998, the majority of active participants (workers participating in their employer's plan) were in DC plans, whereas nearly 20 years earlier the majority of participants were in DB plans. Experts have suggested a variety of explanations for this shift, including the greater risk borne by employers with DB plans, greater administrative costs and more onerous regulatory requirements, and employees' preference for DC plans, which they more easily understand. These experts have also noted considerable employee demand for plans that state benefits in the form of an account balance and emphasize portability of benefits, such as 401(k) type defined contribution pension plans. The increased life expectancy of workers also has important implications for defined benefit plan funding, including multiemployer plan funding. The average life expectancy of males at birth increased from 66.6 years in 1960 to 74.3 years in 2000, while female life expectancy at birth rose 6.6 years, from 73.1 to 79.7, over the same period. As general life expectancy has increased in the United States, there has also been an increase in the number of years spent in retirement. PBGC has noted that improvements in life expectancy have extended the average amount of time spent by workers in retirement from 11.5 years in 1950 to 18 years for the average male worker as of 2003.
This increased duration of retirement has placed pressure on employers with defined benefit plans to increase their contributions to match the increase in benefit liabilities. The problem can be further exacerbated for those multiemployer plans with a shrinking pool of active workers because plan contributions are generally paid on a per-work-hour basis, and thus employers may have to increase contributions for each hour worked by the remaining active participants to fund any liability increase. Increasing health insurance costs are another factor affecting the long-term prospects of pensions, including multiemployer pensions. Recent increases in employer-provided health insurance costs account for a rising share of total compensation, increasing pressure on employers' ability to maintain wages and other benefits, including pensions. Bureau of Labor Statistics data show that the cost of employer-provided health insurance has risen steadily in recent years, from 5.4 percent of total compensation in 1999 to 6.5 percent as of the third quarter of 2003. A private survey of employers found that employer-sponsored health insurance costs rose about 14 percent between the spring of 2002 and the spring of 2003, the third consecutive year of double-digit increases and the highest premium increase since 1990. Plan administrators and employer and union representatives we talked with identified the rising cost of employer-provided health insurance as a key problem facing plans, as employers are increasingly forced to choose between maintaining current levels of pension benefits and maintaining current levels of medical benefits. Although available evidence suggests that multiemployer plans are not experiencing anywhere near the magnitude of the problems that have recently afflicted single-employer plans, there is cause for concern. Most significant is PBGC's estimate of $100 billion in unfunded multiemployer plan liabilities, which is being borne collectively by employer sponsors and plan participants. PBGC does not face the same level of exposure from this liability at this time as it would under the single-employer program. This is because, as PBGC officials have noted, the current regulatory framework governing multiemployer plans redistributes financial risk toward employers and workers and away from the government and potentially the taxpayer. Employers face withdrawal and other liabilities that can be significant, while workers, should their plan become insolvent, face benefit reductions down to guarantee levels that are well below those provided by PBGC's single-employer program. Together, these features not only limit the exposure of PBGC and the taxpayer but also create important incentives for all interested parties to resolve difficult financial situations that could otherwise result in plan termination. However, the declines in interest rates and equity markets and the weak economic conditions of the early 2000s have increased the financial stress on both individual multiemployer plans and the multiemployer framework generally. Proposals to address this stress should be carefully designed and considered for their longer-term consequences. For example, proposals to shift plan liabilities to PBGC by making it easier for employers to exit multiemployer plans could help a few employers or participants but would erode the existing incentives that encourage interested parties to independently face up to their financial challenges.
In particular, placing additional liabilities on PBGC could ultimately have serious consequences for the taxpayer, given that, with only about $25 million in annual premium income, a trust fund of less than $1 billion, and a current deficit of $261 million, PBGC's multiemployer program has very limited resources to handle a major plan insolvency that could run into billions of dollars. The current congressional efforts to provide funding relief are, at least in part, a response to the difficult conditions experienced by many plans in recent years. However, these efforts are also occurring in the context of the broader, long-term decline in private sector defined benefit plans, including multiemployer plans, and the attendant rise of defined contribution plans, with their emphasis on greater individual responsibility for providing for a secure retirement. Such a transition could lead to greater individual control and reward for prudent investment and planning. However, if managed poorly, it could lead to adverse distributional effects for some workers and retirees, including a greater risk of a poverty-level income in retirement. Under this transition view, the more fundamental issues concern how to minimize the potentially serious negative effects of the transition while balancing risks and costs for employers, workers and retirees, and the public. These important policy concerns make Congress's current focus on pension reform both timely and appropriate. We provided a draft of this report to Labor, Treasury, and PBGC. The agencies provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Labor, the Secretary of the Treasury, and the Executive Director of the Pension Benefit Guaranty Corporation; appropriate congressional committees; and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-5932. Other major contributors include Joseph Applebaum, Orin B. Atwater, Susan Bernstein, Kenneth J. Bombara, Tim Fairbanks, Charles Jeszeck, Gene Kuehneman, Raun Lazier, and Roger J. Thomas.

Multiemployer defined benefit pension plans, which are created by collective bargaining agreements covering more than one employer and generally operated under the joint trusteeship of labor and management, provide coverage to over 9.7 million of the 44 million participants insured by the Pension Benefit Guaranty Corporation (PBGC). The recent termination of several large single-employer plans (plans sponsored by individual firms) has led to millions of dollars in benefit losses for thousands of workers and left PBGC, their public insurer, with an $11.2 billion deficit as of September 30, 2003. The serious difficulties experienced by these single-employer plans have prompted questions about the health of multiemployer plans. This report provides the following information on multiemployer pension plans: (1) trends in funding and worker participation, (2) PBGC's role regarding the plans' financial solvency, and (3) potential challenges to the plans' long-term prospects. Following 2 decades of relative financial stability, multiemployer plans as a group appear to have suffered recent and significant funding losses, while long-term declines in participation and new plan formation continue unabated.
At the close of the 1990s, the majority of multiemployer plans reported assets exceeding 90 percent of total liabilities. Recently, however, stock market declines, coupled with low interest rates and poor economic conditions, appear to have reduced assets and increased liabilities for many plans. PBGC reported an accumulated net deficit of $261 million for its multiemployer program in 2003, the first since 1981. Meanwhile, since 1980, the number of plans has declined from over 2,200 to fewer than 1,700, and there has been a long-term decline in the total number of active workers. PBGC monitors those multiemployer plans that may, in PBGC's view, present a risk of financial insolvency. PBGC also provides technical and financial assistance to troubled plans and guarantees a minimum level of benefits to participants in insolvent plans. PBGC annually reviews the financial condition of plans to determine its potential insurance liability. Although the agency does not trustee the administration of insolvent multiemployer plans as it does with single-employer plans, it does offer them technical assistance and loans. PBGC loans have been rare: since 1980, only 33 plans have received loans, totaling $167 million. Several factors pose challenges to the long-term prospects of the multiemployer system. Some are inherent to the multiemployer regulatory framework, such as the greater perceived financial risk and reduced flexibility for employers compared with other plan designs, and suggest that fewer employers will find such plans attractive. Also, the long-term decline of collective bargaining results in fewer new participants to expand existing plans or create new ones. Other factors threaten all defined benefit plans, including multiemployer plans: the growing trend among employers to choose defined contribution plans; the increasing life expectancy of workers, which raises the cost of plans; and continuing increases in employer health insurance costs, which compete with pensions for employer funding.
Currently located within the Department of the Treasury, the CDFI Fund was authorized in 1994 and has received appropriations totaling $225 million through fiscal year 1998. The 1995 Rescissions Act limited the Fund to 10 full-time-equivalent staff for fiscal years 1995 and 1996, but for fiscal year 1998, the Fund has a ceiling of 35 full-time staff. As of May 1998, the Fund had 27 full-time and 2 part-time staff. The Fund's overall performance is subject to the general guidance set forth in the Community Development Banking and Financial Institutions Act of 1994 (CDFI Act), which established the Fund. The Fund is also subject to the Results Act and the Office of Management and Budget's (OMB) implementing guidance. The Results Act seeks to improve the management, effectiveness, and efficiency of federal programs by establishing a system for agencies to set performance goals and measure results. Under the act, federal agencies must develop a strategic plan that covers a period of at least 5 years and includes a mission statement, long-term general goals, and strategies for reaching those goals. Agencies must report annually on the extent to which they are meeting their annual performance goals and identify the actions needed to reach or modify the goals they have not met. The Fund completed its final plan in September 1997 and is currently considering revisions to that plan. While the assistance agreements that the Fund negotiated with awardees in the CDFI program satisfy the CDFI Act's requirements for performance measurement, they include more measures of activity (what the awardees will do) than of accomplishment (how the awardees' activities will affect distressed communities) and do not always include measures for key aspects of goals. In addition, baseline information that was available to the Fund seldom appears in the Fund's performance measurement schedule. A more comprehensive performance measurement system would provide better indicators for monitoring and evaluating the program's results. The CDFI Fund's progress in developing performance goals and measures for awardees in the CDFI program is mixed. On the one hand, the Fund has entered into assistance agreements with most of the 1996 awardees. As the CDFI Act requires, these assistance agreements include performance measures that (1) the Fund negotiated with the awardees and (2) are generally based on the awardees' business plans. On the other hand, the Fund's performance goals and measures fall somewhat short of the standards for performance measures established in the Results Act. Although awardees' assistance agreements are not subject to the Results Act, the act establishes performance measurement standards for the federal government, including the CDFI Fund. In the absence of specific guidance on performance measures in the CDFI Act, we drew on Results Act guidance for discussion purposes. The assistance agreements called for under the CDFI Act require awardees to comply with multiple provisions, including the accomplishment of agreed-upon levels of performance by the final evaluation date, typically 5 years in the future. As of January 1998, the Fund had entered into assistance agreements with 26 of the 31 awardees for 1996.
We found, on the basis of our six case studies, that the Fund had negotiated performance goals that met the statutory requirements and established goals for awardees that matched the Fund's intended purpose, extensively involved the awardees in crafting their planned performance, and produced a flexible schedule for designing goals and measures. According to Results Act guidance, both activity measures, such as the number of loans made, and accomplishment measures, such as the number of new low-income homeowners, are useful measures. However, the act regards accomplishment measures as more effective indicators of a program's results because such measures identify the outcome of the activities performed. Our survey of CDFIs nationwide, including the 1996 awardees, and our review of six case study awardees' business plans showed that CDFIs use both types of measures to assess their progress toward meeting their goals. Yet our review of the 1996 awardees' assistance agreements revealed a far greater use of activity measures. As a result, the assistance agreements focus primarily on what the awardees will do, rather than on how their activities will affect the distressed communities. According to most of the case study awardees, their use of accomplishment measures was limited by concerns about isolating and measuring the results of community development efforts, as well as by concerns about the Fund's possible imposition of sanctions for not meeting performance benchmarks subject to factors outside their control. According to Results Act guidance, goals and measures should be clear. We found that the goals and measures had varying degrees of clarity. For instance, most goals and measures were related; however, in some agreements, the measures did not address all key aspects of the goals. Finally, under Results Act guidance, clarity in performance measurement is also best achieved through the use of specific units, well-defined terms, and baseline and target values and dates. While the measures in the agreements included most of these elements, they generally lacked baseline values and dates. Fund officials told us that they used baseline values and dates in negotiating the performance measures, but this information did not appear in the assistance agreements themselves. Therefore, without information contained in awardees' files, it is difficult to determine the level of increase or contribution an investment is intended to achieve. Refining the awardees' goals and measures to meet Results Act guidance will facilitate the Fund's assessment of the awardees' progress over time. The Fund is taking steps to avoid some of the initial shortcomings in future agreements and is seeking to enhance its expertise and staffing. Although the Fund has developed reporting requirements for awardees to collect information for monitoring their performance, it lacks documented postaward monitoring procedures for assessing their compliance with their assistance agreements, determining the need for corrective actions, and verifying the accuracy of the information collected. In addition, the Fund has not yet established procedures for evaluating the impact of awardees' activities. The effectiveness of the Fund's monitoring and evaluation systems will depend, in large part, on the quality of the information being collected through the required reports and on the Fund's assessment of awardees' compliance and of the impact of awardees' activities.
Primarily because of statutorily imposed staffing restrictions in fiscal years 1995 and 1996 and subsequent departmental hiring restrictions, the Fund has had a limited number of staff to develop and implement its monitoring and evaluation systems. In fiscal year 1998, it began to hire management and professional staff to develop monitoring and evaluation policies and procedures. The Fund has established quarterly and annual reporting requirements for awardees in their assistance agreements. Each awardee is to describe its progress toward its performance goals, demonstrate its financial soundness, and maintain appropriate financial information. However, according to an independent audit recently completed by KPMG Peat Marwick, the Fund lacks formal, documented postaward monitoring procedures to guide Fund staff in their oversight of awardees' activities. In addition, Fund officials indicated that they had not yet established a system to verify information submitted by awardees through the reporting processes. Fund staff told us that they had not developed postaward monitoring procedures because of the CDFI program's initial staffing limits. Now that additional staff are in place, they have begun to focus their attention on monitoring issues, including those identified by KPMG Peat Marwick. The CDFI statute also specifies that the Fund is to annually evaluate and report on the activities carried out by the Fund and the awardees. According to the Conference Report for the statute, the annual reports are to analyze the leveraging of private assistance with federal funds and determine the impact of spending resources on the program's investment areas, targeted populations, and qualified distressed communities. To date, the Fund has published two annual reports, the second of which contains an estimate of the private funding leveraged by the CDFI funding. This estimate is based on discussions with CDFIs and CDFI trade association representatives, not on financial data collected from the awardees. In part because it has been only 16 months since the Fund made its first investment in a CDFI, information on performance in the CDFI program is not yet available for a comprehensive evaluation of the program's impact of the kind the Conference Report envisions. The two annual reports include anecdotes about individuals served by awardees and general descriptions of awardees' financial services and initiatives, but they do not evaluate the impact of the program on its investment areas, targeted populations, and qualified distressed communities. Satisfying this requirement will entail substantial research and analysis, as well as expertise in evaluation and time for the program's results to unfold. Fund officials have acknowledged that their evaluation efforts must be enhanced, and they have planned or taken actions toward improvement. For instance, the Fund has developed preliminary program evaluation options, begun hiring staff to conduct or supervise the research and evaluations, and revised the assistance agreements for the 1997 awardees to require that they annually submit a report to assist the Fund in evaluating the program's impact. However, because the Fund has not yet finished hiring its research and evaluation staff, it has not reached a final decision on what information it will require from the awardees to evaluate the program's impact.
The Fund also has to determine how it will integrate the results of awardees' reported performance measurement or recent findings from related research into its evaluation plans. As is to be expected, reports of accomplishments in the CDFI program are limited and preliminary. Because most CDFIs signed their assistance agreements from March 1997 through October 1997, the Fund has just begun to receive the required quarterly reports, and neither the Fund nor we have verified the information in them. Through February 1998, the Fund had received 41 quarterly reports from 19 CDFIs, including community development banks, community development credit unions, nonprofit loan funds, microenterprise loan funds, and community development venture capital funds. The different types of CDFIs support a variety of activities, whose results will be measured against different types of performance measures. Given the variety of performance measures for the different types of CDFIs, it is difficult to summarize the performance reported by the 19 CDFIs. To illustrate cumulative activity in the program to date, we compiled the data reported for the two most common measures—the total number of loans for both general and specific purposes and the total dollar value of these loans. According to these data, the 19 CDFIs made over 1,300 loans totaling about $52 million. In addition, the CDFIs reported providing consumer counseling and technical training to 480 individuals or businesses. In the BEA program, as of January 1998, about 58 percent of the banks had completed the activities for which they received the awards, and the Fund had disbursed almost 80 percent of the $13.1 million awarded in fiscal year 1996. Despite this level of activity, the impact of the program on banks' investments in distressed communities is difficult to assess. Our case studies of five awardees and interviews with Fund officials indicate that although the BEA awards encouraged some banks to increase their investments, other regulatory or economic incentives were equally or more important for other banks. In addition, more complete data on some banks' investments are needed to ensure that the increases in investments in distressed areas rewarded by the BEA program are not being offset by decreases in other investments in these distressed areas. Furthermore, the Fund cannot be assured that the banks' increased investments remain in place because it does not require banks to report any material changes in these investments. Although the CDFI statute does not require awardees to reinvest their awards in community development, most banks that received BEA awards in 1996 have told the Fund that they have done so, thereby furthering the BEA program's objectives, according to the Fund. Our analysis indicated that the impact of the BEA award varied at our five case study banks. One bank reported that it would not have made an investment in a CDFI without the prospect of receiving an award from the Fund. In addition, a CDFI Fund official told us that some CDFIs marketed the prospect of receiving a BEA award as an incentive for banks to invest in them. We found, however, that the prospect of an award did not influence other banks' investment activity. For example, two banks received awards totaling over $324,000 for increased investments they had made or agreed to make before the fiscal year 1996 awards were made. Banks have multiple incentives for investing in CDFIs and distressed areas.
Therefore, it is difficult to isolate the impact of the BEA award from the effects of other incentives. According to our five case study banks, regulatory incentives, such as the need to comply with the Community Reinvestment Act (CRA), often motivated the banks' investments in CDFIs and distressed communities, as did economic considerations. One bank said that such investments lay the groundwork for developing new markets, while other banks said that the investments help them maintain market share in areas targeted by the BEA program and compete with other banks in these areas. Two banks cited improved community relations as reasons for their investments. Some banks indicated that the BEA award provides a limited incentive, especially since it is relatively small and comes after a bank has already made at least an initial investment. According to Fund officials, a small portion of the 1996 awardees do not maintain the geographic data needed to determine whether any new investments in distressed areas are coming at the expense of other investments—particularly agricultural, consumer, and small business loans—in such areas. Concerned about the validity of the net increases in investments in distressed areas reported by awardees, the Fund required the 1996 awardees that did not maintain such data to certify that, to the best of their knowledge, they had not decreased investments in distressed areas that were not linked to their BEA award. While most banks maintain the data needed to track their investments by census tract and can thus link their investments with distressed areas, a few do not do so for all types of investments. The Fund does not require awardees to notify it of material changes in their investments after awards have been made. Therefore, it does not know how long investments made under the program remain in place. We found, for example, that a CDFI in which one of our case study banks had invested was dissolved several months after the bank received a BEA award. The CDFI later repaid a portion of the bank's total investment. Because the Fund does not require banks to report their postaward activity, it was not aware of this situation until we brought it to the attention of Fund officials. After hearing of the situation, a Fund official contacted the awardee and learned that the awardee plans to reinvest the funds in another CDFI. Even though this case has been resolved, Fund officials do not have a mechanism for determining whether investments made under the program remain in place. The CDFI statute does not require awardees to reinvest their awards in community development; however, most of the 1996 awardees have told the Fund, and we found through our case studies, that many of them are reinvesting at least a portion of their awards in community development. Reinvestment in community development is consistent with the goals of the BEA program. The CDFI Fund has more work to do before its strategic plan can fulfill the requirements of the Results Act. Though the plan covers the six basic elements required by the Results Act, these elements are generally not as specific, clear, and well linked as the act prescribes. However, the Fund is not unique in struggling to develop its strategic plan. We have found that federal agencies generally require sustained effort to develop the dynamic strategic planning processes envisioned by the Results Act.
The difficulties that the Fund has encountered—in setting clear and specific strategic and performance goals, coordinating crosscutting programs, and ensuring the capacity to gather and use performance and cost data—have confronted many other federal agencies as well. Under the Results Act, an agency's strategic plan must contain (1) a comprehensive mission statement; (2) agencywide strategic goals and objectives for all major functions and operations; (3) strategies, skills, and technologies and the various resources needed to achieve the goals and objectives; (4) a relationship between the strategic goals and objectives and the annual performance goals; (5) an identification of key factors, external to the agency and beyond its control, that could significantly affect the achievement of the strategic goals and objectives; and (6) a description of how program evaluations were used to establish or revise strategic goals and objectives and a schedule for future program evaluations. OMB has provided agencies with additional guidance on developing their strategic plans. In its strategic plan, the Fund states that its mission is "to promote economic revitalization and community development through investment in and assistance to community development financial institutions (CDFIs) and through encouraging insured depository institutions to increase lending, financial services and technical assistance within distressed communities and to invest in CDFIs." Overall, the Fund's mission statement generally meets the requirements established in the Results Act by explicitly referring to the Fund's statutory objectives and indicating how these objectives are to be achieved through two core programs. Each agency's strategic plan is to set out strategic goals and objectives that delineate the agency's approach to carrying out its mission. The Fund's strategic plan contains 5 goals and 13 objectives, with each objective clearly related to a specific goal. However, OMB's guidance suggests that strategic goals and objectives be stated in a manner that allows a future assessment to determine whether they were or are being achieved. Because none of the 5 goals (for example, to strengthen and expand the national network of CDFIs) or 13 objectives (for example, to increase the number of organizations in training programs) in the strategic plan includes baseline dates and values, deadlines, and targets, the Fund's goals and objectives do not meet this criterion. The act also requires that an agency's strategic plan describe how the agency's goals and objectives are to be achieved. Results Act guidance suggests that this description address the skills and technologies, as well as the human, capital, information, and other resources, needed to achieve strategic goals and objectives. The Fund's plan shows mixed results in meeting these requirements. On the positive side, it clearly lists strategies for accomplishing each goal and objective—establishing better linkages than the strategic plans of agencies that simply listed objectives and strategies in groups. On the other hand, the strategies themselves consist entirely of one-line statements. Because they generally lack detail, most are too vague or general to permit an assessment of whether their accomplishment will help achieve the plan's strategic goals and objectives.
For example, it is unclear how the strategy of "emphasizing high quality standards in implementing the CDFI program" will specifically address the objective of "strengthening and expanding the national network of CDFIs." The Fund's strategic plan lists 22 performance goals, which are clearly linked to specific strategic goals. However, the performance goals, like the Fund's strategic goals and objectives, generally lack sufficient specificity, as well as baseline and end values. These details would make the performance goals more tangible and measurable. For example, one performance goal is to "increase the number of applicants in the BEA program." This goal would be more useful if it specified the baseline number of applicants and projected an increase over a specified period of time. Also, some performance goals are stated more as strategies than as desired results. For example, it is not readily apparent how the performance goal of proposing legislative improvements to the BEA program will support the related strategic goal of encouraging investments in CDFIs by insured depository institutions. The Fund's strategic plan only partially meets the requirement of the Results Act and of OMB's guidance that it describe key factors external to the Fund and beyond its control that could significantly affect the achievement of its objectives. While the plan briefly discusses external factors that could materially affect the Fund's performance, such as "national and regional economic trends," these factors are not linked to specific strategic goals or objectives. The Results Act defines program evaluations as assessments, through objective measurement and objective analysis, of the manner and extent to which federal programs achieve intended objectives. Although the Fund's plan does discuss various evaluation options, it does not discuss the role of program evaluations in either setting or measuring progress against all strategic goals. Also, the list of evaluation options does not describe the general scope or methodology for the evaluations, identify the key issues to be addressed, or indicate when the evaluations will occur. Our review of the Fund's strategic plan also identified other areas that could be improved. For instance, OMB's guidance on the Results Act directs that federal programs contributing to the same or similar outcomes should be coordinated to ensure that their goals are consistent and their efforts mutually reinforcing. The Fund's strategic plan does not explicitly address the relationship of the Fund's activities to similar activities in other agencies or indicate whether or how the Fund coordinated with other agencies in developing its strategic plan. Also, the Fund's capacity to provide reliable information on the achievement of its strategic objectives is at this point somewhat unclear. Specifically, the Fund has not developed its strategic plan sufficiently to identify the types and sources of data needed to evaluate its progress in achieving its strategic objectives. Moreover, according to a study prepared by KPMG Peat Marwick, the Fund has yet to set up a formal system, including procedures, to evaluate, continuously monitor, and improve the effectiveness of the management controls associated with the Fund's programs. Consistent with the Results Act, the Fund is refining its plan by taking steps that, according to a key Fund official in charge of revising the plan, will address the shortcomings that we and the Department of the Treasury have identified.
According to this official, the revised strategic plan, which the Fund expects to complete by August 1998, proposes to incorporate:

changes to the plan's strategic goals, including the elimination of the two that are organizational rather than strategic;

a new format for presenting goals and objectives that links benchmarks and planned evaluations to each goal, along with key external factors that could affect the Fund's progress toward that goal;

a budget structure that aligns the program's activities with sources and uses of funds to better track the resources required to implement the program's goals and objectives;

a performance goal that measures the ability of the Fund to leverage its resources with those of the private sector; and

an identification and description of crosscutting organizations and programs that duplicate or complement the CDFI Fund's programs.

In closing, Madam Chair, our preliminary review has identified several opportunities for the Fund to improve the effectiveness of the CDFI and BEA programs and of its strategic planning effort. In our view, these opportunities exist, in part, because the Fund is new and is experiencing the typical growing pains associated with setting up an agency—particularly one that has the relatively complex and long-term mission of promoting economic revitalization and community development in low-income communities. In addition, staffing limitations have delayed the development of monitoring and evaluation systems. Recently, however, the Fund has hired several senior staff—including a director; two deputy directors, one of whom also serves as the chief financial officer; an awards manager; a financial manager; and program managers—and is reportedly close to hiring an evaluations director. While it is too early to assess the impact of filling these positions, the new managers have initiated actions to improve the programs and the strategic plan. In our final report, we expect to make recommendations to further improve the operations of the CDFI Fund and its programs. Madam Chair, this concludes our testimony. We would be pleased to respond to any questions that you or Members of the Committee may have at this time.
In the past several years, rail has increasingly been used to ship crude oil, a flammable liquid. Although crude oil shipments by rail have declined in the past year, from January 2010 to October 2014 shipments increased more than 30-fold, peaking at almost 36 million barrels, or about 51,000 rail carloads, in October 2014 (see fig. 1). This increase is due to pipeline capacity constraints in certain areas of the country, including the Bakken shale region in North Dakota. As shipments of crude oil have increased, the number of rail accidents involving crude oil has also increased, even as rail accidents involving all hazardous materials have declined. From 2006 to 2010, there were 19 rail incidents involving crude oil, 4 of which were designated by PHMSA as serious rail incidents, compared with 399 crude oil rail incidents from 2011 to 2015, 21 of which were designated by PHMSA as serious. Serious accidents can involve derailments, collisions, and releases of materials. Depending on the commodities a train is carrying, releases can involve not only crude oil, which is flammable, but also a variety of chemicals, such as sodium hydroxide, which can irritate the eyes and burn the skin.

The railroad industry is dominated by the seven largest railroads, known as Class I railroads, which transport the majority of rail freight, including hazardous materials, across a network of 200,000 miles of track, mostly in rural areas. In addition, numerous Class II and hundreds of Class III railroads have essential roles in moving freight, typically linking rural communities to the larger railroad network. According to DOT, about 18 percent of all Class I carload freight originates or terminates on a Class II or Class III railroad, often called “first mile” and “last mile” movements.

Federal, state, and local entities all play a role in emergency preparedness for rail accidents involving crude oil and other hazardous materials. For example, DOT oversees compliance with safety regulations applicable to the rail transportation of hazardous materials. Specifically, PHMSA, through its Office of Hazardous Materials Safety, issues regulations that apply to shippers and railroads transporting hazardous materials such as crude oil. PHMSA also provides grants to states to fund training for local responders. As mentioned previously, DOT issued an Emergency Order in May 2014 that requires railroads planning to operate trains transporting 1 million gallons or more of Bakken crude oil to notify states of the expected movements along the routes. The Emergency Order focused on ensuring that state and local emergency responders know the frequency and number of Bakken crude-oil trains moving through their jurisdictions. FRA provides regulatory oversight for both passenger and freight rail, issuing and enforcing safety regulations. FRA enforces its own regulations and the HMR through inspections and audits conducted by FRA officials and, in some states, state partners.

In addition, state officials, local emergency planners, and the railroads also play specific roles in planning and preparing for emergencies. Specifically, under EPCRA, each state is required to establish a SERC that acts as the emergency-planning focal point and that is responsible for designating emergency-planning districts and appointing LEPCs for each district. The SERC supervises the activities of the LEPCs.
LEPC membership includes representatives from police, fire, civil defense, health, transportation (including rail), and environmental agencies, among others. These entities plan, gather, and share information about emergency preparedness and arrange for emergency responders’ training. Local emergency-planning agencies prepare local emergency plans that include emergency response plans, training requirements for emergency responders (e.g., police officers, firefighters, and emergency medical technicians), and other vital hazardous materials response information. Local emergency-planning agencies may also prepare commodity flow studies. Most U.S. firefighters are volunteers.

Under DOT’s Emergency Order, the railroads support emergency responders by providing SERCs with information about expected movements of trains carrying Bakken crude oil. In addition, since 1996, railroads that ship oil in containers exceeding 3,500 gallons must prepare response plans documenting that they have trained personnel, placed equipment, and established procedures to respond to an oil spill. PHMSA, in coordination with FRA, has recently proposed expanding the oil-spill response-planning requirements to require railroads to obtain FRA approval for more detailed, comprehensive oil-spill response plans for high-hazard flammable trains carrying petroleum oil.

Recently enacted legislation may change the responsibilities of some stakeholders. In December 2015, President Obama signed into law the FAST Act, which requires DOT to codify and expand the information-sharing requirement established by the existing Emergency Order’s notification requirements for trains carrying 1 million gallons or more of Bakken crude oil. In accordance with the FAST Act, in July 2016 PHMSA proposed regulations (the same as mentioned above) to require railroads to share information on a monthly basis about high-hazard flammable trains’ operations with state (i.e., SERC) and tribal emergency response commissions. “High-hazard flammable trains” are those that transport (1) 20 or more tank cars loaded with a Class 3 flammable liquid in a continuous block or (2) a total of 35 or more tank cars carrying such materials. The FAST Act also directs DOT to require railroads to provide, through the applicable fusion center, emergency responders with real-time access to information about a train’s hazardous materials shipments in the case of a train accident.

While planning is conducted by a range of participants, local emergency responders and railroad train crews are typically first on the scene in the immediate aftermath of this type of accident. Firefighters, police, and emergency medical technicians assess and secure the scene, as described in figure 2. For example, local and sometimes regional officials may be responsible for advising the public on taking shelter-in-place actions or conducting evacuations of affected populations. In addition, assuming they are not affected by an accident, railroad train crews are expected to provide local emergency responders with information about the type and quantity of hazardous materials on the train, the position of the cars carrying those materials, and emergency contact information (referred to collectively as hazardous materials documentation), as described in figure 2. Under the HMR, railroads are required to provide notice of certain hazardous materials accidents to the National Response Center.
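To make the two-part high-hazard flammable train definition above concrete, the following is a minimal sketch of how the test could be applied to a train’s consist. The ConsistEntry fields and function names are illustrative assumptions for this discussion, not an actual railroad or DOT data format.

```python
# Minimal sketch of the two-part "high-hazard flammable train" (HHFT) test
# described above: 20 or more tank cars of Class 3 flammable liquid in a
# continuous block, or 35 or more such cars anywhere in the train.
from dataclasses import dataclass

@dataclass
class ConsistEntry:
    position: int           # car's current position in the train
    contents: str           # commodity description (e.g., "petroleum crude oil")
    hazard_class: str       # DOT hazard class; "3" denotes flammable liquid
    emergency_contact: str  # shipper's 24-hour emergency phone number

def is_high_hazard_flammable(consist):
    # Walk the cars in order, tracking the longest continuous run of
    # flammable-liquid cars and the total count of such cars.
    flags = [car.hazard_class == "3"
             for car in sorted(consist, key=lambda c: c.position)]
    total = sum(flags)
    longest_run = run = 0
    for is_flammable in flags:
        run = run + 1 if is_flammable else 0
        longest_run = max(longest_run, run)
    return longest_run >= 20 or total >= 35

# Hypothetical example: a 24-car continuous block of crude triggers the test.
crude_block = [ConsistEntry(i, "petroleum crude oil", "3", "800-555-0100")
               for i in range(1, 25)]
buffer_car = [ConsistEntry(25, "sand", "none", "n/a")]
print(is_high_hazard_flammable(crude_block + buffer_car))  # True
```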
After the initial response, local emergency responders and railroads, as well as state and federal agencies, have various responsibilities for mitigating and investigating an accident (see fig. 2 above).

Mitigation: According to AAR, railroads have hazardous materials accident management personnel and employ hazardous materials response contractors and environmental consultants that provide spill response tools and equipment. Federal and state environmental agencies are responsible for assisting this effort and for monitoring and evaluating the environmental damage. States are required to notify EPA and the U.S. Coast Guard, which may send representatives to the accident scene to assist in or direct response activities resulting from a hazardous materials release or discharge.

Investigation: FRA monitors train accidents and investigates their causes and compliance with existing safety laws and regulations. At its discretion, the National Transportation Safety Board investigates some railroad accidents and issues safety recommendations aimed at preventing future accidents.

Training is a key aspect of planning and preparing for emergency response and mitigation. Federal regulations require that firefighters and other responders receive hazardous-materials emergency response training. When such training is provided, federal statutes regarding occupational health and safety indicate that it must be based on training standards set by a recognized standard-setting organization, such as the National Fire Protection Association (NFPA). NFPA standards detail the specific knowledge that trainees must have to be considered competent to provide varying levels of response to hazardous materials accidents, as we describe later in this report. Additionally, DOT officials stated that FEMA’s Emergency Management Institute serves as the national focal point for the development and delivery of emergency management training to enhance the capabilities of state, local, and tribal government officials, volunteer organizations, FEMA’s disaster workforce, federal agencies, and the public and private sectors to minimize the impact of disasters and emergencies.

Local emergency planners from almost all of the counties we contacted reported that their emergency responders participated in training, exercises, or drills (“training”) to prepare for responding to rail hazardous materials incidents. Specifically, emergency planners from 21 of 22 counties (13 urban and 8 rural) indicated that emergency responders had participated in rail hazardous materials classroom or hands-on training, and 20 reported that their emergency responders had participated in rail hazardous materials exercises or drills. About half of the emergency planners (13 of 22) reported that their emergency responders had received rail hazardous-materials training by independent study or webinar. Hazardous materials training may be provided by state or local emergency management agencies, colleges and associations with specialized programs, chemical producers, and railroads, among others. As discussed later, PHMSA provides grants to states to fund training for local responders. Also, as noted earlier, FEMA’s Emergency Management Institute serves as a national focal point for emergency management training. Most local emergency planners reported that the rail hazardous-materials training was useful for helping their emergency responders prepare for and respond to rail hazardous materials incidents.
Local emergency planners from 18 counties (9 urban and 9 rural) reported that such training was very useful, 4 reported it was somewhat useful, and 3 reported they were uncertain about its usefulness. Some emergency planners told us that training involving “hands on” experience was particularly helpful because it facilitated direct interaction with devices and props that emergency responders would not normally encounter except in a real-world incident. Other emergency planners told us that training provides a way for emergency responders to interact face to face with some of the stakeholders they might encounter in an incident, such as railroad hazardous materials experts and personnel from other fire departments. Such interaction can facilitate relationships among stakeholders by increasing familiarity and building trust. For example, one local emergency planner told us that the first responders from his county attended a railroad-sponsored hazardous materials exercise that was coincidentally held a week before a hazardous materials train derailment. The emergency planner noted that when the accident happened, responders were already familiar with the railroad and stakeholders, a familiarity that facilitated cooperation and trust.

Local emergency planners reported on the percentage of their first responders who received various levels of hazardous-materials response training, from basic to more advanced. Generally, as discussed below, a greater percentage of these first responders received basic training than received more advanced levels of training, according to the questionnaire responses we received. As described previously, federal requirements call for firefighters and other responders to receive hazardous-materials emergency response training, and when such training is provided, it must adhere to federal requirements and be based on training standards set by a recognized standards-setting organization, such as the NFPA. NFPA’s standards identify the types of training needed to achieve professional competence at four levels, from “awareness,” the most basic level, to “incident commander,” the most advanced (see table 1). Local emergency planners from most counties (20 of 22) reported that more than 60 percent of their first responders were trained at the awareness level, which enables first responders to recognize that they are dealing with a hazardous materials incident and call for trained response personnel. However, emergency planners from only 6 of 23 counties reported that more than 60 percent of their first responders were trained at the operations level, which allows first responders to take defensive action in the event of a hazardous materials release. Emergency planners from even fewer counties reported emergency responders trained at the technician and incident commander levels, a finding that is to be expected given that these advanced levels of competency require additional hours of training and certification (see fig. 3).

Local emergency planners in selected urban and rural counties reported that various obstacles impede their emergency responders’ participation in training activities that would help prepare them to take defensive action in a rail accident involving hazardous materials. Factors such as dedicating time for training, taking unpaid time off work, and being able to get away from regular duties may discourage participation in training (see fig. 4).
Local emergency planners from 23 counties reported that the leading factor discouraging participation in rail hazardous-materials training was the time commitment required to attend a training activity. As one emergency planner put it, even though the state developed a variety of rail hazardous materials training and exercises for a county’s emergency responders, the ability to make use of the training opportunities ultimately rests on emergency responders’ being able to attend without neglecting other professional and personal responsibilities. Local emergency planners from rural counties with a largely volunteer (i.e., unpaid) firefighter workforce reported the dilemma of responders’ having to take unpaid leave from their primary workplaces to attend training. One emergency planner told us that some training providers recognize this limitation and try to schedule training for periods when volunteers may be more available, such as evenings and weekends. However, even when training is offered during weekends or non-work hours, emergency planners told us that it can be difficult to get participants because of family commitments and other responsibilities. Along the same lines, FRA officials told us that communities can be unwilling to send employees to training because they do not want the employees to be away from their professional duties, even though the cost of the training itself may be covered by other entities, such as a railroad, and be free to the employer.

Local emergency planners reported that another obstacle to participating in training is backfill, the need for a replacement worker to cover the shift of the person attending training. As indicated above, 12 of 23 emergency planners reported that emergency responders sometimes are not able to get off their regular duties to attend training because, for example, a replacement cannot be found. Emergency planners reported that backfill can be difficult and expensive. According to one emergency planner from an urban county, most fire departments operate with the bare minimum workforce, so sending anyone away to training has a big impact on the budget because the county may need to pay existing staff overtime to work an additional shift. One described backfill as cost prohibitive, noting that replacing absent workers can cost up to three times as much as the training itself. Emergency planners also told us that a consequence of such obstacles is that fire departments are not able to train their entire force at one point in time and that their responders have varying levels of training. Whether the workforce is composed of volunteers or career firefighters, planners told us that sending their entire force to training is cost prohibitive and otherwise impractical. One emergency planner illustrated this by stating that in a recent railroad scenario-based exercise, only one shift participated.

Most emergency planners viewed their emergency responders as prepared to take defensive actions, such as evacuating affected populations, sheltering in place, and setting up an incident command post, in the event of a rail hazardous materials accident. Emergency planners from 22 of 25 counties reported their emergency responders as very (9 counties) or somewhat (13 counties) prepared to take such actions. Emergency planners told us that there were differences in preparedness levels within their counties, stating that some jurisdictions place a higher priority on preparedness than others.
One emergency planner drew a distinction between urban and rural locations, explaining that urban areas may have a heightened awareness of hazardous materials moving by rail because of the steady presence of trains on their tracks. Other planners attributed a greater level of preparedness to a heightened awareness of rail accidents in recent years and to increased training. On a related issue, local emergency planners from all selected counties reported having mutual aid agreements (pre-established agreements in which first responders call for assistance from other fire departments) in place. Such agreements can increase a county’s preparedness because they increase the resources (and in some instances the expertise) available for immediate response.

Local emergency planners from most of the selected counties reported that railroads and SERCs have provided them with a variety of information for planning and preparing for hazardous materials accidents and that this information is useful. The types of information provided include the following:

Railroads’ emergency-response-planning guides: Emergency planners from two-thirds of the counties (16 of 24) reported that railroads operating in their jurisdictions provided them a copy (or copies, if provided by multiple railroads) of their emergency-response-planning guides. (Emergency planners from 8 counties reported that they had not been provided guides.) Railroads’ emergency-response-planning guides may include information about critical railroad contacts and railroad incident response guidelines. Emergency planners from most of the counties that received guides described them as useful or very useful for preparing for a potential rail accident because they contained information about what the railroads’ response activities would be. One planner told us that a guide typically includes information that can be readily incorporated into a local entity’s own hazardous-materials response plan. Emergency planners from three counties told us about deficiencies with the guides; one noted that the information was not particularly useful because the plan referenced the national rail network rather than the locality and focused more on pollution response than on life safety planning. Another told us that the plans had information that was redacted or included such strict nondisclosure statements as to prevent incorporating it into local emergency-planning documents.

Hazardous materials information: Local emergency planners from most counties (22 of 24) reported having been provided information about hazardous materials transported through their areas, including a few who told us their SERCs provided them with information about Bakken crude-oil shipments. In addition, most (14 of 20) indicated having no difficulty getting information. (As discussed below, emergency planners from six counties indicated they had difficulty getting information.) According to DOT, besides obtaining information about planned Bakken crude-oil shipments through the May 2014 Emergency Order, bona fide emergency responders are able to obtain information on other types of hazardous materials moving through their communities by requesting the information from the railroad. Emergency planners from 11 counties reported that railroads provided such information upon request, 6 reported that railroads voluntarily provided them with such information, and some reported they received information from SERCs and other entities, such as the U.S. Coast Guard or the Army Corps of Engineers.
Some emergency planners reported receiving information from multiple sources. Emergency planners from 17 counties reported finding the information somewhat to very useful, and 4 found it not useful, for planning and preparing for potential accidents involving hazardous materials transported by rail. One local emergency planner told us that having this information in advance of an accident helped in putting the appropriate response plans in place when a derailment involving ethanol and propane occurred. Some local emergency planners described incorporating the hazardous materials information into plans and guidance, such as emergency response plans, commodity flow studies, and county hazardous-materials plans. One said the information was used to fine-tune the hazardous materials team’s training exercise by focusing on the use of fire suppression foam, since the information indicated an uptick in the volume of crude oil shipments. In addition, some emergency planners reported discussing the information with local emergency responders in order to increase their awareness of hazardous materials shipments.

Emergency planners from some counties (6 of 20) reported difficulty getting information about hazardous materials shipments from the railroads or SERCs. In describing the behaviors that impeded information sharing, one emergency planner told us that the railroad was reluctant to release information because it considered the information security sensitive. Another noted that the formal process for requesting information from railroads gets bogged down, perhaps because it is administered by a different office than the personnel with whom informal relationships were developed.

With respect to DOT’s Emergency Order related to railroads transporting Bakken crude oil, emergency planners from 9 of 25 counties told us that the Emergency Order improved their access to information or had other positive outcomes, while planners from 11 of 25 counties told us that the Emergency Order had little or no impact on their ability to access hazardous materials information. Emergency planners from five counties told us that the effect of the Emergency Order was unclear. Of the emergency planners who told us that the Emergency Order improved access to information from the railroads, one told us that the railroad provided not only information about Bakken crude shipments but also information about the 10 most shipped commodities. Other positive outcomes related to providing a broader view of hazmat shipments and improving situational awareness. Emergency planners reporting little or no impact explained that in some situations they had adequate access to information prior to the Emergency Order. A couple of planners told us that the Bakken crude information is very generic and provides only generalities that they could ascertain just as readily by observing the train traffic. One emergency planner told us that the information is already accessible because at least one state posts the Bakken crude-oil information on its website for the public to access and review.

Local emergency planners use the hazardous materials information described above, provided by railroads and other entities, to develop commodity flow studies. As mentioned previously, these studies describe the types and amounts of hazardous materials transported through a specified geographic area and the modes of transportation. Ten emergency planners told us that the information provided by commodity flow studies was particularly useful as a comprehensive reference guide.
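At its core, a commodity flow study is an aggregation of shipment records over a geographic area. The following is a minimal sketch of that aggregation for a single county, assuming a simplified record layout; the fields and figures are illustrative, not the actual Waybill Sample schema or any county’s data.

```python
# Minimal sketch of the aggregation behind a commodity flow study: summing
# annual shipment volumes by DOT hazard class for one county.
from collections import defaultdict

# Hypothetical shipment records for one county (illustrative values only).
shipments = [
    {"hazard_class": "3", "commodity": "petroleum crude oil", "tons": 9800},
    {"hazard_class": "8", "commodity": "sodium hydroxide solution", "tons": 1200},
    {"hazard_class": "3", "commodity": "ethanol", "tons": 2400},
]

tons_by_class = defaultdict(float)
for s in shipments:
    tons_by_class[s["hazard_class"]] += s["tons"]

for hazard_class, tons in sorted(tons_by_class.items()):
    print(f"Class {hazard_class}: {tons:,.0f} tons/year")
```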
According to one emergency planner, a commodity flow study helps responders focus on the right response based on the potential hazard. A recent transportation study found that local emergency planners use this information as part of their all-hazards-planning process, in which understanding the risks posed by transporting hazardous materials through a community is a key component.

Local emergency planners in the 24 counties who responded to this question told us that, once a rail accident involving hazardous materials has occurred, the most important information for first responders is the type and volume of the hazardous materials involved, information that is found in the train’s hazardous materials documentation. In addition, emergency planners from five counties reported it was useful to know the order of the train cars. Train crews are required to make train documentation showing the current position of each rail car in the train immediately available to first responders in the event of an accident. As discussed in more detail later, AAR has developed a smartphone application, and railroads are beginning to make it available to assist responders in identifying train cars’ contents. Nearly all emergency planners in urban counties (13 of 14) and about half of the planners in rural counties (6 of 11) reported being familiar with this application. Emergency planners from 15 counties reported that their responders have access to the AskRail application, and 12 found it to be very useful. However, some emergency planners described concerns about the adequacy of cellular connectivity. Emergency planners in one urban county told us they preferred a hard copy of a train’s hazardous materials documentation because of the difficulty of reading such detailed information on a cell phone.

Although DOT does not require it, all seven Class I railroads we surveyed reported that they have provided training to local emergency responders in the past 5 years related to emergency preparedness for and response to rail accidents involving hazardous materials. Class I railroads reported directly providing or funding a variety of training, including classroom and hands-on training delivered at designated training sites and brought to responders’ locations, as well as scenario-based discussions and full-scale emergency preparedness exercises. For example, one Class I railroad told us that it provides training to fire departments using props such as tank cars. Five Class I railroads reported targeting training resources to communities based on factors such as the type of hazardous materials being transported in their area, the volume of such materials, and the train routes. Six Class I railroads told us that awareness- and operations-level training accounted for 50 to 80 percent of the training they offer. Railroads told us that they advertise their training to emergency planners and responders in a variety of ways, including (1) directly to local fire departments, LEPCs, and county emergency management agencies; (2) to emergency responder organizations, conferences, and training organizations; and (3) to state agencies, including SERCs, emergency management agencies, and states’ fire marshal offices.
All seven Class I railroads reported providing at least part of their training through third-party organizations, such as Transportation Community Awareness and Emergency Response (TRANSCAER) or the Security and Emergency Response Training Center (SERTC), which are industry recognized and known for providing hazardous materials planning and preparedness training. Class I railroads reported having significantly increased their hazardous materials training, in terms of dollars spent, since 2011. According to data provided by five Class I railroads, their combined total spending on training increased from about $1.5 million in 2011 to $4.6 million in 2015, an increase of more than 200 percent (see table 2 below). AAR officials attributed the increase in spending in part to a response to a call to action by the Secretary of Transportation in January 2014, after which Class I railroads provided $5 million to develop additional curriculum specifically on emergency response to crude oil derailments and to train first responders. In addition, AAR officials told us that Class I railroads significantly increased their hazardous materials training efforts over the last several years in part because of the tremendous increase in the volume of Bakken crude oil shipped by rail and high-profile accidents, such as the one in Lac-Mégantic, Quebec. Relatedly, railroads reported having trained more responders since 2011. The seven Class I railroads reported training more than 40,000 first responders and other emergency officials in 2015, an increase of over 80 percent from 2011 (see table 3 below; the arithmetic behind these percentages is sketched after this section).

Class II and III railroads can also provide training on preparing for and responding to hazardous materials rail accidents. Of the four Class II and III railroads we surveyed, two reported providing training to local emergency responders. According to officials from the American Short Line and Regional Railroad Association (ASLRRA), its member railroads reach out to local emergency responders and have offered training on hazardous materials in response to recent accidents. However, according to ASLRRA officials, Class II and III railroads generally have fewer resources than Class I railroads to provide training to emergency responders and others. The officials told us that frequently the most effective way for Class II and III railroads to provide training to local emergency responders is to collaborate with Class I railroads. For example, an official from one short-line holding company (i.e., a company with a controlling interest in multiple Class II and III railroads) we interviewed told us that one of its railroads recently partnered with a Class I railroad to gain access to tank car training equipment so that responders along its line could undergo training on how to respond to an accident involving a tank car.

All Class I railroads reported complying with the Emergency Order by reporting the Bakken crude-oil shipment information to the relevant SERCs. The Class I railroads also told us that they maintain accurate hazardous materials documents, an HMR requirement, with most doing so through paper records and new electronic systems. As discussed later, FRA officials told us that the agency contacted railroads to confirm that they shared the information about Bakken crude-oil shipments and did not find evidence that any railroad failed to comply.
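As a quick check of the percentages reported above, the sketch below recomputes them from the stated figures. The implied 2011 responder count is an inference from the stated 80 percent increase, not a figure reported by the railroads.

```python
# Quick arithmetic check of the reported training increases.
spend_2011, spend_2015 = 1.5e6, 4.6e6
pct_spend = (spend_2015 - spend_2011) / spend_2011 * 100
print(f"Spending increase: {pct_spend:.0f}%")  # ~207%, i.e., "more than 200 percent"

trained_2015 = 40_000
implied_2011 = trained_2015 / 1.8              # if 2015 reflects an 80% increase over 2011
print(f"Implied 2011 responders trained: ~{implied_2011:,.0f}")  # ~22,000 (inference)
```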
In addition to responding to these DOT requirements, Class I and selected Class II and III railroads described taking the following actions in the area of information sharing to support emergency responders’ preparedness for and response to rail accidents involving hazardous materials:

Providing additional information about hazardous materials: All seven Class I railroads reported that they also provide additional information, upon request from bona fide emergency planners and responders, on hazardous materials shipped by rail through their communities. Two of the smaller railroads (one Class II and one Class III) reported that they provide information on hazardous materials. This practice follows industry-accepted operating procedures under which railroads provide information on hazardous commodities and work to improve community awareness, emergency planning, and incident response to rail hazardous-materials accidents. The seven Class I railroads reported receiving between 17 and 225 such requests from LEPCs in 2015. Two of the four smaller railroads (one Class II and one Class III) reported receiving one to five requests in 2015. While five of the Class I railroads reported an increase in requests over the last 3 years, and three reported an increase of 100 percent or greater, the number of requests appears low relative to the number of counties and communities through which these materials are transported. For example, in 2015, six of the railroads received requests from fewer than a third of the counties where they own track, with the seventh railroad receiving requests from just over a third. While the Class I railroads reported that they actively advertise the availability of the hazardous materials information to bona fide entities, AAR officials agreed that the number of requests was low but did not know why.

Making emergency response information available electronically: All seven Class I railroads reported that they make information on a train’s contents available electronically to emergency responders via smartphone applications, such as AAR’s AskRail application or their own mobile applications. AAR officials said that AskRail, the system used by all seven Class I railroads, offers access to emergency response information similar to that on a train’s hazardous materials documents. AAR officials reported that from the initial rollout in October 2014 through March 2016, 13,000 responders had been invited to download the AskRail application and 6,500 had done so. However, as discussed previously, not all responders have access to this application. According to AAR, future iterations of the application could include making it available on other platforms, such as web-based devices. The four Class II and III railroads reported that they do not provide electronic information on train contents; one Class II and one Class III railroad reported that they had not received requests to do so.

Conducting community outreach: Six of seven Class I railroads told us about various methods of contacting emergency-planning and response agencies regarding preparedness for a rail accident, such as by letter, Internet, or telephone; by providing an emergency-response-planning guide; or through presentations, training and training materials, or LEPC meetings.
Five railroads told us that they contact communities along their lines, covering between 10 percent and 100 percent of the communities along their respective lines each year; another said it does not have enough resources under its current program to contact all communities but is developing a digital program to improve contact. Furthermore, all seven Class I railroads reported engaging in relationship-building activities with communities, such as participating in LEPC meetings or meetings with local and state officials and in training events such as classes and tabletop exercises. One Class II and one Class III railroad reported that they contact emergency responders to discuss their preparedness for a rail accident, and another (a Class II) indicated it planned to begin an outreach program this year. The fourth railroad (a Class III) did not conduct outreach except by making its emergency response guide available upon request.

Developing emergency-response-planning guides: All seven Class I railroads reported that they had developed emergency-response-planning guides that include information for communities. However, most of the seven railroads restrict access to their guides or do not produce them in written form. Only one of the Class I railroads reported sending the guide to all communities along its lines without being asked to do so, and it provided a means to order the guide on its website. Two other railroads told us that they made their guides available on their websites, while three railroads told us that they shared them at training events, and one said the guide was also available through other communications. One railroad said it provides this information orally but does not share it in written form. Two of the four Class II and III railroads (one Class II and one Class III) reported having emergency-planning guides that responders must request, and the other two (one Class II and one Class III) did not have guides. As we discussed above, local emergency planners from most of the counties we contacted found these guides, when provided, useful in planning and preparing for hazardous materials accidents.

DOT has taken multiple actions to improve emergency preparedness for rail accidents involving hazardous materials, some of them recent and focused specifically on rail-transported Bakken crude oil. For example, DOT enforces the requirement that train crews maintain accurate hazardous materials documentation detailing shipments’ contents and containing other information that is critical for emergency responders in an accident. In addition, as previously discussed, DOT issued an Emergency Order in May 2014 requiring railroads expecting to carry large volumes of Bakken crude oil to notify officials in states along the routes. DOT’s contact with SERCs has expanded in recent years as DOT fulfills its regulatory role of ensuring that railroads follow the Emergency Order. DOT expects to expand the information-sharing requirements further with proposed regulations (consistent with the FAST Act) covering all high-hazard flammable train operations, not just trains carrying crude oil from Bakken sources, steps that will affect its oversight of railroads’ actions moving forward. Although DOT is working to finalize these regulations, it is not clear that the information that railroads are currently required to share with SERCs on shipments of Bakken crude oil is consistently reaching local first responders.
PHMSA has produced a variety of training and other materials for emergency planners and responders, most of them in the past 2 years, intended to improve emergency preparedness for rail accidents involving hazardous materials in general and Bakken crude oil in particular (see table 4). Although we did not ask local emergency planners from selected counties about their specific experiences with the TRIPR training or the crude-oil reference sheet, we did inquire about their views on the usefulness of the ERG. Local emergency planners from all of the counties we contacted who provided a response reported that the ERG was useful, with all but one (23 of 24) finding it very useful. Local emergency planners from a few counties also told us that the guide is particularly useful for the initial response and contains information that is not commonly known but is critical for responding to hazardous materials accidents.

PHMSA administers the Hazardous Materials Grant Program, which distributes a series of grants that support state and local emergency-response planning and training activities related to the transportation of hazardous materials for all modes, including rail. Established by the Hazardous Materials Transportation Uniform Safety Act of 1990, this program is funded by registration fees collected from shippers and carriers that transport hazardous materials and consists of three types of grants, the largest of which are Hazardous Materials Emergency Preparedness (HMEP) planning and training grants. In fiscal year 2015, PHMSA awarded $19.9 million in these grants to states, territories, and Native American tribes, allowing recipients to design and implement planning and training programs according to need. For example, SERCs can use HMEP funds to conduct commodity flow studies, support training exercises, send responders to hazardous materials conferences, or provide grants to LEPCs for similar activities. The Hazardous Materials Grant Program also includes (1) Hazardous Materials Instructor Training grants ($3.3 million in fiscal year 2015) to train instructors who are then able to train hazardous materials employees in their area and (2) Supplemental Public Sector Training grants ($927,000 in fiscal year 2015), which support trainer instruction for hazardous materials response educators.

Some state and local emergency planners have reported that HMEP grants, which can be used in a variety of ways as previously discussed, have helped them to improve their emergency preparedness. State emergency planners we interviewed stated that they have used a portion of their HMEP grant allocations for statewide activities in addition to making funds available to LEPCs or tribal organizations. For example, in 2015, DOT reported that one state emergency planner said funds were recently used to conduct a statewide study of hazardous materials traveling through the state’s communities in 2014, while another state used its funds to train 3,100 emergency responders in hazardous materials preparedness. Local emergency planners from 8 of 24 counties reported that they have received HMEP funding from their SERCs in the past few years and that the funds improved their ability to plan, prepare for, and respond to rail accidents involving hazardous materials. PHMSA and FRA have also recently introduced other grant programs that support state and local emergency preparedness efforts (see table 5).
As previously discussed, DOT has required that crews operating trains carrying hazardous materials maintain information on the nature and location of those materials that can be shared with responders at the scene, should an accident occur. As railroads continue to ship large quantities of crude oil and other hazardous materials across the nation, emergency responders need to know the location and nature of the hazardous materials present at the scene of an accident so that they can properly equip themselves to protect themselves and their communities. As discussed earlier, local emergency planners from the 24 counties who provided a response told us that the type and quantity of hazardous materials are the most important pieces of information for first responders at the scene of an accident.

The HMR requires that a train crew carry hazardous materials documentation including both “shipping papers” (information about the hazardous materials being carried and the shipper’s emergency contact information) and the train’s “consist” (a document detailing the current position and contents of the cars carrying hazardous materials). According to officials from AAR, this documentation is the official record of the order of a train’s cars and its contents, and train crews update this copy as the train’s order or contents change along its route. DOT officials explained that the HMR also requires train crews to provide their hazardous materials documentation to local emergency responders in the event of an accident to help them identify the location of hazardous materials on a train and inform appropriate defensive response actions (e.g., wearing proper protective equipment and establishing a safe perimeter). FRA conducts inspections to determine railroads’ compliance with safety regulations, including those related to hazardous materials documentation. FRA officials told us that as part of FRA’s routine inspections of railroads’ compliance with safety regulations, inspectors collect information on whether train crews possess the required hazardous materials documentation and whether the documentation is accurate. If the documentation is inaccurate or missing, FRA may cite the railroad for failing to comply with the requirements by recording a defect or issuing a violation and assessing fines.

As previously discussed, in the wake of railroad accidents involving crude oil, DOT issued the Emergency Order in 2014 to improve information sharing between railroads and emergency planners by requiring railroads to provide advance notification of Bakken crude oil train movements through states. Prior to the Emergency Order, railroads could provide information about the types of hazardous materials moving through local communities to bona fide emergency response authorities upon request, as described previously. Thus, unless they requested information, state and local emergency planners may not have known whether, or how many, trains carrying large quantities of Bakken crude oil were moving through their jurisdictions until after an accident was reported.
However, the Emergency Order has since required each railroad transporting 1 million gallons or more of Bakken crude oil in a single train to provide the following information in writing to the SERC for each state in which it operates such a train:

a reasonable estimate of the number of high-hazard flammable trains expected to travel through each county within the state per week;

the routes of these high-hazard flammable trains;

a description of the materials shipped;

all applicable emergency-response information; and

a point of contact at the railroad.

Furthermore, the Emergency Order requires railroads to update this information whenever their reported volumes vary by more than 25 percent (this notification logic is sketched after this section), and it calls on SERCs, the entities that the Emergency Order indicates are best situated to do so, to convey the information about shipments of Bakken crude oil to LEPCs in affected counties. This basic information was meant to inform local emergency responders about the presence of trains carrying Bakken crude oil.

By issuing the Emergency Order, DOT required railroads to take action to assist emergency responders in preparing for rail accidents involving crude oil by providing information important for local emergency preparedness. Specifically, the Emergency Order explained that it is essential that local responders be well informed about the presence of trains carrying Bakken crude oil because they are typically the first to arrive on an accident scene. Similarly, it stated that local responders should be as well informed as possible prior to an accident involving a train carrying Bakken crude oil and that, without the requirement established by the Emergency Order, local emergency responders may not know to prepare for potential accidents involving these trains. Furthermore, as previously discussed, our review found that local emergency planners find this type of information useful. Specifically, local emergency planners from 17 of 21 counties reported that information about hazardous materials shipments was useful for planning and preparing for accidents. For example, local emergency planners told us that the information most useful for planning and preparing for a potential rail accident involving hazardous materials includes the type of hazardous material (including hazard classification and toxicity level), the quantity of hazardous material, the route of the shipments, and the railroad emergency contact’s name and phone number.

However, the extent to which this information actually reaches local emergency planners is not clear. FRA officials said the agency contacted railroads and SERCs to confirm that railroads had shared the required information within 30 days of the Emergency Order’s issuance in 2014, and again in 2015, and did not find any evidence that any railroad had failed to comply. However, FRA officials told us that they did not collect information about whether the SERCs distributed the information to local planners. In 2015, PHMSA gathered some limited information from states about their efforts to distribute the information to local emergency planners and determined that some states did not share the information. Specifically, PHMSA participated in discussions with state officials from 48 states and the District of Columbia as part of an EPA-led initiative in January 2015. Among the topics discussed was the extent to which state agencies passed the Bakken crude oil shipment information to local communities.
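As a rough illustration of the thresholds above: a typical crude oil tank car holds roughly 30,000 gallons (an assumption made for this sketch, not a figure from the Emergency Order), so the 1-million-gallon trigger corresponds to a train of about 34 tank cars, in line with the 35-car high-hazard flammable train definition discussed earlier. The sketch below applies the coverage test and the 25 percent update trigger; the structures and names are illustrative, not DOT’s or any railroad’s.

```python
# Sketch of the Emergency Order's notification logic: a railroad estimates
# weekly Bakken crude volume per county and must update the SERC when the
# reported volume varies by more than 25 percent.
GALLONS_PER_TANK_CAR = 30_000     # rough crude tank car capacity (assumption)
EO_THRESHOLD_GALLONS = 1_000_000  # Emergency Order applies at 1M gallons per train

def train_covered_by_eo(bakken_tank_cars: int) -> bool:
    # A train is covered when its total Bakken crude volume meets the threshold.
    return bakken_tank_cars * GALLONS_PER_TANK_CAR >= EO_THRESHOLD_GALLONS

def serc_update_required(reported_trains_per_week: int,
                         current_trains_per_week: int) -> bool:
    # An update is due when actual volume varies from the last report by >25%.
    return (abs(current_trains_per_week - reported_trains_per_week)
            > 0.25 * reported_trains_per_week)

print(train_covered_by_eo(34))      # True: 34 cars * 30,000 gal = 1,020,000 gallons
print(serc_update_required(8, 11))  # True: 11 vs. 8 trains is a 37.5% change
```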
According to PHMSA officials, although states indicated that they shared the railroad-provided information with local communities, some states indicated the information was heavily redacted. PHMSA officials also learned that some states did not provide any information to local communities because of varying interpretations about whether and how much information could be made publicly available. The extent to which this occurred is unclear, however, because neither PHMSA nor FRA took steps to systematically collect additional information from SERCs about whether they disseminated the information required by the Emergency Order.

This lack of clarity notwithstanding, in July 2016 PHMSA proposed expanding the information-sharing requirement so that railroads would have to provide advance notification for all high-hazard flammable trains and would continue to make the SERC the information focal point, consistent with the FAST Act. PHMSA’s proposed rulemaking effectively codifies the Emergency Order but broadens it by requiring railroads to provide monthly reports to SERCs on high-hazard flammable trains, which carry other flammable liquids besides Bakken crude oil. The impending broadening of the Emergency Order is likely to expand its applicability beyond the states through which Bakken crude oil is transported by rail and to affect more railroads.

The Standards for Internal Control in the Federal Government state that agencies should design and implement control activities so that they are aligned with their objectives and should review control activities after any significant changes to determine that the changes are designed and implemented appropriately. As FRA is responsible for enforcing PHMSA’s regulations in the rail mode, the proposed new reporting requirement would represent an expansion of FRA’s activities and oversight; therefore, it will be important to determine whether the activities are properly implemented. Given the proposed expansion of the Emergency Order, this may be an opportune time for DOT to consider how it can determine whether the information-sharing requirements reach their intended audience. Although increasing the number of trains to which this requirement applies may increase the administrative burden on railroads (and, to a lesser extent, on FRA), the agency is currently unaware whether SERCs have disseminated the railroad-provided information, which can facilitate preparedness, to local emergency planners as intended. However, PHMSA’s January 2015 conversations with state officials demonstrated a shared interest in enhancing preparedness for rail accidents and a willingness to share information with federal partners.

Without an understanding of the extent to which the required railroad-provided information has been received by local emergency planners, it is unclear whether requiring railroads to share this information has the potential to improve emergency preparedness for a rail accident. Furthermore, without information on how consistently SERCs share information, it is possible that some LEPCs will not receive information on, and will be unaware of, large shipments of flammable materials, information that would help them improve their preparedness for a potential accident. Finally, DOT has implemented a requirement without ensuring that the information reaches the intended audience: the emergency planners in local communities.

DOT has recently taken actions to enhance emergency preparedness in light of recent accidents involving Bakken crude oil.
For example, DOT issued the Emergency Order to increase awareness of large shipments of Bakken crude oil. However, the extent to which local emergency planners in affected communities have received the information about these shipments is unclear because DOT has not taken steps to determine whether SERCs provided the information to local emergency planners, who could in turn use it in preparing for potential rail accidents involving hazardous materials. Without knowing whether the states provided the information, it is unclear whether requiring railroads to share it has improved emergency preparedness for such accidents. DOT has proposed regulations codifying the Emergency Order in accordance with the FAST Act. This proposal would expand the requirement for railroads to share information on planned large shipments of other hazardous materials with affected SERCs. However, without a process for understanding whether SERCs are providing the information to local planning entities, DOT cannot be assured that the information will ultimately reach the communities where it is needed or would be useful in preparing local responders for rail accidents involving selected hazardous materials. Furthermore, the agency may be missing an opportunity to optimize this requirement in a way that best meets the needs of emergency responders. Given that the expanded information-sharing requirement will likely cover more railroads and hazardous materials shipments, monitoring whether emergency planners receive this information will become even more important in the future.

To continue the agency’s efforts to improve state and local emergency preparedness for rail accidents involving hazardous materials, we recommend that the Secretary of Transportation, after the rulemaking is finalized, develop a process for regularly collecting information from SERCs on the distribution of the railroad-provided hazardous-materials-shipping information to local planning entities.

We provided a draft of this product to DOT, the Department of Homeland Security, EPA, NTSB, and the Surface Transportation Board for their review and comment. We received written comments from DOT, which are reprinted in appendix II. DOT concurred with our recommendation to develop a process for regularly collecting information from SERCs on the distribution of railroad-provided hazardous-materials-shipping information to local planning entities. DOT also highlighted recent actions it has taken to improve the ability of communities to prepare for and respond to accidents involving the transportation of crude oil and other hazardous materials, including the publication of proposed regulations in July 2016 that would require railroads to share information on high-hazard flammable train operations with SERCs and tribal emergency response commissions in accordance with the FAST Act. DOT also plans to issue regulations, likewise in accordance with the FAST Act, requiring Class I railroads that transport hazardous materials to generate and share real-time electronic consist information with applicable fusion centers. Finally, in October 2016, DOT awarded $20.4 million in HMEP grants to states, territories, and Native American tribes to enhance their ability to respond to hazardous materials incidents. The Department of Homeland Security and EPA provided technical comments, which we incorporated in the report as appropriate.
NTSB and the Surface Transportation Board reviewed our report but did not provide comments. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, the Secretary of Homeland Security, the Administrator of the Environmental Protection Agency, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our objectives were to examine: (1) the factors selected local emergency planners report as affecting their preparedness for rail accidents involving hazardous materials; (2) the actions that Class I and selected other railroads report taking to support local emergency planners' preparedness for rail accidents involving hazardous materials; and (3) the actions that the Department of Transportation (DOT) has taken to support state and local emergency planners' preparedness for rail accidents involving hazardous materials and additional actions, if any, that DOT could take. The scope of the work was limited to emergency planning and preparedness for rail incidents involving hazardous materials and did not focus on other phases of emergency response to an accident, such as mitigation and investigation, which are largely the responsibility of other federal agencies, such as the Federal Emergency Management Agency (FEMA) within the Department of Homeland Security, the Environmental Protection Agency (EPA), and the U.S. Coast Guard. To inform all of our objectives, we reviewed relevant literature, including journal articles and reports about rail accidents involving hazardous materials and the response to such accidents, as well as prior GAO reports on the transportation of hazardous materials by rail. We also reviewed relevant laws, such as the Fixing America's Surface Transportation (FAST) Act of 2015 and the Emergency Planning and Community Right-to-Know Act of 1986 (EPCRA), and regulations, such as the Hazardous Materials Regulations, to determine requirements for DOT and railroads related to transporting hazardous materials by rail, as well as requirements for federal agencies, states, and localities regarding emergency planning and reporting related to hazardous materials. In addition, we interviewed officials from the Federal Railroad Administration (FRA) and the Pipeline and Hazardous Materials Safety Administration (PHMSA) within DOT; FEMA; EPA; and the National Transportation Safety Board to inform our understanding of the roles and responsibilities of federal, state, and local stakeholders in emergency preparedness and response to rail accidents involving hazardous materials. To identify the views of selected local emergency planners on factors that affect their preparedness for rail accidents involving hazardous materials, we conducted structured interviews with, and provided a questionnaire to, a nonprobability sample of 25 local emergency planners, primarily from local emergency planning committees (LEPCs) affiliated with county agencies—the groups responsible for developing and implementing emergency response plans and managing preparedness activities.
We developed a questionnaire and structured interview guide based on our analysis of information gathered from interviews with stakeholders—such as Class I railroads, the National Volunteer Fire Council, a state emergency response commission (SERC), and two LEPCs, among others—who were knowledgeable about emergency preparedness and hazardous materials rail accidents. We reviewed the National Fire Protection Association training standards for first responders to hazardous materials accidents to understand the levels of hazardous materials training for responders. After developing and pre-testing a questionnaire and structured interview guide with local emergency planners and responders in 2 counties, we administered the questionnaires and conducted structured interviews with a nonprobability sample of local emergency planners, primarily from LEPCs, and conducted an analysis of open-ended questions provided by these officials in response to the questionnaire and structured interview questions. For some open-ended questions, we conducted a more detailed content analysis, and for other questions, we summarized the results or provided examples of responses. We selected LEPCs as the primary contacts for our questionnaire and interviews because (1) they are responsible for implementing requirements established in EPCRA, including the development of emergency response plans that identify transportation routes of extremely hazardous substances; (2) they include representatives from local emergency responders; and (3) the DOT Emergency Order required railroads to report Bakken crude oil information to SERCs on the premise that SERCs are best positioned to convey the information to LEPCs in affected counties. To arrive at samples of local emergency-planning officials, we used Geographical Information System software to analyze all rail lines across the 48 contiguous states, the location of county lines, and the estimated amount (in tons) and routes of hazardous materials transported in 2013, based on the Surface Transportation Board's Carload Waybill Sample. (The 2013 Waybill Sample was the most recent at the time of our review.) To determine the reliability of the data, we reviewed the documentation provided by the Surface Transportation Board on how the sample was taken and performed electronic testing to ensure that we received the complete file. Because the waybill data contain only origins, destinations, and selected transfer points, we used the TRAGIS routing model developed by the Oak Ridge National Laboratory to estimate the rail routes for hazardous materials. We determined that the data were reliable for the purposes of identifying counties with high volumes of hazardous materials and crude oil transported by rail. Using this analysis, we identified the 10 counties with the highest volumes of crude oil and other hazardous materials transported by rail within each of the five PHMSA regions in four categories: (1) urban counties with carloads of crude oil, (2) rural counties with carloads of crude oil, (3) urban counties with carloads of hazardous materials other than crude oil, and (4) rural counties with carloads of hazardous materials other than crude oil. From this list, we identified 30 counties (three urban and three rural in each of the five PHMSA regions) to interview and were able to contact local emergency planning officials from LEPCs in 25 of the 30 counties. (Five counties did not respond to our requests for interviews.)
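At its core, the county-selection step is a ranking problem: within each PHMSA region and category, order counties by estimated volume and keep the top entries. The sketch below illustrates that logic in Python under stated assumptions: the table, column names, and figures are invented for illustration, whereas the actual analysis derived county volumes using GIS software and TRAGIS route estimates applied to the confidential Carload Waybill Sample.

```python
import pandas as pd

# Hypothetical county-level volumes, standing in for the routed waybill data.
volumes = pd.DataFrame({
    "county":            ["A", "B", "C", "D", "E", "F"],
    "phmsa_region":      ["Eastern"] * 3 + ["Southwest"] * 3,
    "urban":             [True, False, True, False, True, False],
    "crude_tons":        [120_000, 45_000, 90_000, 10_000, 75_000, 60_000],
    "other_hazmat_tons": [30_000, 80_000, 5_000, 70_000, 95_000, 20_000],
})

def top_counties(df: pd.DataFrame, volume_col: str, urban: bool, n: int = 10) -> pd.DataFrame:
    """Return the n highest-volume counties per PHMSA region for one category."""
    # Sort once by volume, then take the leading rows within each region.
    subset = df[df["urban"] == urban].sort_values(volume_col, ascending=False)
    return subset.groupby("phmsa_region", sort=False).head(n)

# For example, the top urban counties by crude oil volume in each region:
print(top_counties(volumes, "crude_tons", urban=True))
```

Repeating the call for each of the four category and urbanicity combinations, and taking the leading counties from each list, reproduces in miniature the 30-county sampling frame described above.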
We received completed questionnaires from 24 of the 25 counties and interviewed local emergency planners from all 25 counties (at least two urban and at least two rural counties in all but one PHMSA region), for a total of 25 local emergency planners across 17 states. Because our work was based on a nonprobability sample of counties, the information we obtained and present in this report should not be regarded as an exhaustive list of factors local emergency planners may consider as affecting preparedness for rail incidents involving hazardous materials. Similarly, the information and perspectives that we obtained from these local emergency planners are not generalizable to other local planners or counties. We identified the 17 SERCs to interview based on the location of the four urban and four rural counties that had the highest volumes of hazardous materials in each of the five PHMSA regions we selected. Finally, we interviewed representatives from associations representing local emergency responders, such as the International Association of Fire Chiefs, the National Volunteer Fire Council, and the International Association of Fire Fighters, about factors affecting preparedness. To understand the actions Class I and selected other railroads report having taken to support local emergency planners' preparedness for rail accidents involving hazardous materials, we developed a questionnaire and administered it to all seven Class I railroads and to six smaller regional and short line railroads, known as Class II and Class III railroads. We interviewed five of the seven Class I railroads (selected in no particular order, based on which interviews could be scheduled soonest) to develop an understanding of the types of training and other resources railroads provide to local emergency responders. We developed the questionnaire based on our analysis of information gathered from interviews with stakeholders—such as five of the seven Class I railroads, the Association of American Railroads (AAR), the American Short Line and Regional Railroad Association, a SERC, two LEPCs, and FRA, among others—who were knowledgeable about hazardous materials information sharing, emergency preparedness and response training, community awareness efforts, and railroad response to hazardous materials accidents. The Class II and Class III railroads were selected using the Surface Transportation Board's 2013 Carload Waybill Sample to identify railroads that operate in counties where we conducted structured interviews with local emergency planners and to obtain variation between railroads operating in urban and rural counties. We sent our questionnaire to two Class II railroads and four Class III railroads and received responses from both Class II railroads and two of the Class III railroads. Because our work was based on a nonprobability sample of Class II and Class III railroads, the results of the questionnaires with the smaller railroads cannot be generalized to the entire population of Class II and Class III railroads. We also reviewed AAR's Recommended Railroad Operating Practices for Transportation of Hazardous Materials and railroad documents such as emergency planning guides, examples of reports railroads sent to emergency planners, responder requests for information about hazardous commodities transported in their communities, and training and outreach material provided to local emergency responders, such as the Transportation Community Awareness and Emergency Response training program.
We also interviewed officials from AAR, the American Short Line and Regional Railroad Association, and the International Association of Sheet Metal, Air, Rail, and Transportation Workers about railroad requirements and actions for emergency preparedness and response to hazardous materials accidents, such as the development of electronic applications that emergency responders can use to identify commodities in the event of a hazardous materials accident. We also reviewed information on railroad websites, which are used to provide information to communities about hazardous materials and railroad response to accidents. To understand DOT's actions to support preparedness for rail accidents involving hazardous materials and additional actions, if any, it could take, we reviewed pertinent FRA and PHMSA documents related to these actions, including training materials and guidance, grant program documentation, and documentation on efforts to oversee the implementation of requirements for railroads to share information on train contents and movements related to the May 2014 Emergency Restriction/Prohibition Order (Emergency Order). We also reviewed comments the agency received related to the Emergency Order as part of its Hazardous Materials: Enhanced Tank Car Standards and Operational Controls for High-Hazard Flammable Trains rulemaking. We compared the agency's efforts to oversee implementation of information-sharing requirements for railroads with pertinent Standards for Internal Control in the Federal Government and identified criteria about how agencies should review internal control activities to ensure they are implemented properly, particularly after a program change. In addition, we analyzed information obtained from interviews with local emergency planners and with emergency planners from SERCs on how they used the information provided by railroads, including information provided in response to the Emergency Order, and on their views on the usefulness of the information to support preparedness and response efforts. We also reviewed PHMSA guidance on Hazardous Materials Emergency Preparedness (HMEP) planning and training grants, Hazardous Materials Instructor Training grants, and Supplemental Public Sector Training grants. We analyzed information obtained from interviews with SERC emergency planners on how they used PHMSA's HMEP grants. These stakeholders were chosen because their states coincided with the geographic locations of the local emergency planners we selected, as discussed above. We also interviewed PHMSA and FRA officials about recent DOT actions to support emergency preparedness and response to rail hazardous materials accidents, such as the development of the crude oil reference sheet, the Transportation Rail Incident Preparedness and Response training program, and the Assistance for Local Emergency Response Training and Community Safety Grants programs. We conducted this performance audit from June 2015 through November 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
In addition to the individual named above, Nancy Lueke (Assistant Director), Chris Keisling, Gail Marnik (Analyst in Charge), David Hooper, John Mingus, Joshua Ormond, Jaclyn Nelson, Amy Rosewarne, Kelly Rubin, Jim Russell, and Chad Williams made key contributions to this report. | Recent rail accidents involving hazardous materials, such as crude oil, have raised questions about local emergency responders' ability to take protective actions in the aftermath of such accidents. Along with FRA, PHMSA is responsible for ensuring the safe transportation of hazardous materials by rail through issuing and enforcing railroad- and shipper-safety regulations. GAO was asked to review efforts that enhance preparedness for hazardous materials rail accidents. This report examines: (1) the factors selected local emergency planners report affect preparedness; (2) the actions selected railroads have taken to support preparedness; and (3) the actions DOT has taken to support emergency planners. GAO reviewed laws and regulations and surveyed (1) emergency planners representing 25 counties and 17 states with the highest volumes of hazardous materials rail shipments and (2) all seven Class I railroads and four smaller railroads selected because they operate in the counties where GAO surveyed local emergency planners. Emergency planners from most of the 25 selected counties in 17 states that GAO surveyed reported that training for responders and information about rail shipments of hazardous materials affect preparedness. Emergency planners from almost all of the selected counties reported that a majority of the emergency response personnel, such as fire fighters, who arrive first at an accident receive basic training that would enable them to take initial protective actions, including recognizing hazardous materials and calling for assistance in the event of a rail accident involving crude oil and other hazardous materials. Emergency planners from most counties reported that training related to rail hazardous materials was useful in preparing for accidents. Emergency planners reported that some factors present obstacles to responders' receiving training, such as having to take time away from professional duties to attend training. Emergency planners from most counties reported that railroads in their jurisdictions have provided them with information about hazardous material shipments and that this information is useful in preparing for potential accidents. All seven of the largest railroads (called Class I railroads) and some of the four smaller railroads that GAO surveyed reported providing training and information about hazardous materials to local emergency responders and planners in recent years. The Class I railroads reported training through a variety of means, including locally delivered training exercises or off-site instruction at industry-recognized training centers. In addition, railroads reported providing information about hazardous material shipments to state and local emergency planners in part due to a May 2014 Department of Transportation (DOT) Emergency Order requiring notification of state emergency-planning agencies about shipments of crude oil from North Dakota and Montana, where the Bakken shale deposit is located. This information was intended to reach local emergency responders so that they could better prepare for rail accidents involving crude oil.
The Pipeline and Hazardous Materials Safety Administration (PHMSA) and the Federal Railroad Administration (FRA) within DOT have taken multiple actions to support emergency preparedness for rail incidents involving hazardous materials; some actions focused specifically on trains carrying Bakken crude oil. For example, PHMSA developed a web-based training curriculum on how to prepare for hazardous materials incidents, and FRA determined whether railroads provided information about Bakken crude oil shipments to states. However, PHMSA learned that some states did not provide the information about Bakken crude oil shipments to local emergency planners, as called for in the Emergency Order. Recently enacted legislation expands FRA's oversight of railroads' actions moving forward; for example, railroads will be required to notify states of large shipments of other hazardous materials. However, FRA and PHMSA have not taken steps to understand whether the shipment information railroads are required to share with states is consistently disseminated to local emergency planners. Therefore, the extent to which DOT's information-sharing requirements have the potential to improve local preparedness for rail accidents involving hazardous materials is unclear. GAO recommends that DOT develop a process for regularly collecting information from state emergency-planning agencies about their distribution of railroad-provided hazardous materials shipping information to local emergency planning entities. DOT concurred with the recommendation.
We have reported in the past on acquisition management at several components of DHS. We have also assessed the department's overall acquisition management efforts. A common theme in these reports is DHS's struggle, from the outset, to provide adequate procurement support to its mission components and to provide departmentwide oversight of its acquisition function. Of the 22 components that initially joined DHS from other agencies, only 7 came with their own procurement support. An eighth office, the Office of Procurement Operations, was created anew to provide support to a variety of DHS entities—but not until January 2004, almost a year after the department was created. DHS has established a goal of aligning procurement staffing levels with contract spending at its various components by the last quarter of fiscal year 2009. DHS has also set a goal of integrating the acquisition function more broadly across the department; however, that goal has not been accomplished. In March 2005, we identified key factors impeding accomplishment of the department's objective, including limitations of a 2004 management directive and lack of departmentwide oversight of component acquisition organizations. We also identified potential gaps in the department's knowledge-based approach for reviewing its major, complex investments. On a related issue, a number of systemic acquisition challenges we have identified at the Department of Defense could apply equally to DHS. In October 2004, the Secretary of DHS signed a management directive entitled "Acquisition Line of Business Integration and Management," the department's principal guidance for leading, governing, integrating, and managing the acquisition function. It directs managers from each component organization to commit resources to training, development, and certification of acquisition professionals. It also highlights the Chief Procurement Officer's broad authority, including management, administration, and oversight of departmentwide acquisition. However, we have reported that the directive may not achieve its goal of creating an integrated acquisition organization because it creates unclear working relationships between the Chief Procurement Officer and the heads of DHS's principal components. For example, the Chief Procurement Officer and the director of Immigration and Customs Enforcement share responsibility for recruiting and selecting key acquisition officials, preparing performance ratings for the top manager of the contracting office, and providing appropriate resources to support procurement initiatives. The policy leaves unclear how these responsibilities will be implemented and what enforcement authority the Chief Procurement Officer has to ensure that initiatives are carried out. Further, the directive does not apply to the Coast Guard or the Secret Service, two entities that are required by the Homeland Security Act of 2002 to be maintained as distinct entities within DHS. According to the directive, the Coast Guard and Secret Service are exempted by statute. We are not aware of any explicit statutory exemption that would prevent the application of this directive. Nothing in the document would reasonably appear to threaten the status of these entities as distinct entities within the department or otherwise impair their ability to perform statutory missions. DHS's General Counsel has agreed, telling us that the applicability of the directive is a policy matter, not a legal one.
Excluding certain components from complying with management directives regarding the acquisition function hampers efforts to integrate the acquisition organization. The Coast Guard, for example, is one of the largest organizations within DHS. We have reported that DHS's principal organizations are, to a large extent, still functioning much as they did in pre-merger days with regard to acquisition-related functions. Embedded within the seven procurement organizations that came to DHS were, for the most part, the same contracting staffs that joined the department from their former agencies. In addition, the Chief Procurement Officer, who is held accountable for departmentwide management and oversight of the acquisition function, lacks the enforcement authority and has limited resources to ensure compliance with acquisition policies and processes. As of August 2006, according to DHS officials, only five staff were assigned to departmentwide oversight responsibilities. The officials told us that, because their small staff faces the competing demands of providing departmentwide oversight and providing acquisition support for urgent needs at the component level, they have focused their efforts on procurement execution rather than oversight. Our prior work shows that in a highly functioning acquisition organization, the chief procurement officer is in a position to oversee compliance by implementing strong oversight mechanisms. Adequate oversight of acquisition activities across DHS is imperative, in light of the department's mission and the problems that have been reported by us and by inspectors general at some of the large components within the department. Some DHS organizations have large, complex, and high-cost acquisition programs—such as the Coast Guard's Deepwater program—that need to be closely managed. DHS's investment review process involves several different levels of review, depending on the dollar threshold and risk level of the program. Deepwater, for example, has been designated as a level 1 investment, meaning that it is subject to review at the highest levels within the department. We reported in 2005 that DHS's framework for reviewing its major investments adopts several best practices drawn from the lessons learned of leading commercial companies and successful federal programs that, if applied consistently, could improve its ability to reduce risk and meet cost and delivery targets. One of these best practices is a knowledge-based approach in which managers hold reviews at key decision points in order to reduce risk before investing resources in the next phase of a program's development. For example, DHS's investment review policy encourages program managers to demonstrate a product's design with critical design reviews prior to a production decision. However, based on our extensive body of work on this knowledge-based approach, we have found that additional program reviews could be incorporated into the process as internal controls to better position DHS to make well-informed decisions on its major, complex investments. For example, DHS does not require a review to ensure that an investment's design performs as expected before investing in a prototype. We also reported that DHS review processes permitted low-rate initial production to be well underway before a mandatory review gave the go-ahead to proceed to production. A review prior to initiating low-rate initial production was not mandatory; rather, it was held at the discretion of the Investment Review Board (IRB).
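To make the knowledge-based review concept concrete, the sketch below models an investment as a sequence of decision gates, each of which must be passed before resources are committed to the next phase. The gate names and ordering are illustrative assumptions for this sketch only, not DHS's or GAO's actual review criteria.

```python
# Illustrative knowledge-based gate model; the gate names are assumptions,
# not DHS's actual investment review criteria.
GATES = [
    "design_review",        # design shown to perform as expected
    "prototype_decision",   # prototype funded only with design knowledge in hand
    "lrip_review",          # review before low-rate initial production begins
    "production_decision",  # full production go-ahead
]

def may_enter(gate: str, passed: set[str]) -> bool:
    """An investment may enter a gate only if every earlier gate was passed."""
    earlier = GATES[:GATES.index(gate)]
    return all(g in passed for g in earlier)

# A program that skipped the low-rate initial production review cannot
# legitimately reach the production decision under this model.
print(may_enter("production_decision", {"design_review", "prototype_decision"}))  # False
print(may_enter("production_decision",
                {"design_review", "prototype_decision", "lrip_review"}))           # True
```

The point of such internal controls is simply that skipping a gate, or making it discretionary, lets a program advance without the knowledge the gate was meant to confirm.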
Our best practices work shows that successful investments reduce risk by ensuring that high levels of knowledge are achieved at these key points of development. We have found that investments that were not reviewed at the appropriate points faced problems—such as redesign—that resulted in cost increases and schedule delays. It is not clear how the Deepwater acquisition has been influenced by the department's investment review process. According to a DHS official, an IRB review of the Deepwater acquisition program baseline, scheduled for January 2007, was postponed. In its Performance and Accountability Report for fiscal year 2006, DHS stated that it had improved its process for investment reviews by providing greater clarity on DHS policies and procedures. It acknowledged that developing and maintaining the capability needed to achieve DHS missions requires a robust investment program. DHS also stated that its components are now required to report on the status of major investments on a quarterly basis and to submit information to ensure that investments are staying within established baselines for cost, schedule, and performance. The report said that the department would identify and introduce acquisition best practices into the investment review process by the first quarter of fiscal year 2008. We have identified a series of systemic acquisition challenges for complex, developmental systems, based mostly on our reviews of Department of Defense programs. In principle, many may apply equally to DHS as it moves forward with its major, complex investments. Some of these challenges include the following: Program requirements are often set at unrealistic levels, then changed frequently as recognition sets in that they cannot be achieved. As a result, too much time passes, threats may change, and/or members of the user and acquisition communities may simply change their minds. The resulting program instability causes cost escalation, schedule delays, fewer quantities, and reduced contractor accountability. Program decisions to move into design and production are made without adequate standards or knowledge. Contracts, especially service contracts, often do not have measures in place at the outset to control costs and facilitate accountability. Contracts typically do not accurately reflect the complexity of projects or appropriately allocate risk between the contractors and the taxpayers. The acquisition workforce faces serious challenges (e.g., size, skills, knowledge, and succession planning). Incentive and award fees are often paid based on contractor attitudes and efforts rather than positive results, such as cost, quality, and schedule. Inadequate government oversight results in little to no accountability for recurring and systemic problems. The Deepwater program is the Coast Guard's major effort to replace or modernize its aircraft and vessels. It has been in development for a number of years. Between 1998 and 2001, three industry teams competed to identify and provide the assets needed to transform the Coast Guard. In 2001, we described the Deepwater project as "risky" due to the unique, untried acquisition strategy for a project of this magnitude within the Coast Guard.
Rather than using the traditional approach of replacing classes of ships or aircraft through a series of individual acquisitions, the Coast Guard chose to use a system-of-systems acquisition strategy that would replace its deteriorating assets with a single, integrated package of aircraft, vessels, and unmanned aerial vehicles, to be linked through systems that provide C4ISR and supporting logistics. In June 2002, the Coast Guard awarded the Deepwater contract to Integrated Coast Guard Systems (ICGS). ICGS—a business entity jointly owned by Northrop Grumman and Lockheed Martin—is a system integrator, responsible for designing, constructing, deploying, supporting, and integrating the Deepwater assets to meet Coast Guard requirements. The management approach of using a system integrator has been used on other government programs that require system-of-systems integration, such as the Army's Future Combat System, a networked family of weapons and other systems. This type of business arrangement gives the contractor extensive involvement in requirements development, design, and source selection of major system and subsystem subcontractors. Government agencies have turned to the system integrator approach when they believe they do not have the in-house capability to design, develop, and manage complex acquisitions. Giving contractors more control and influence over the government's acquisitions in a system integrator role creates a potential risk that program decisions and products could be influenced by the financial interest of the contractor (which is accountable to its shareholders), and that interest may not match the government's primary interest of maximizing its return on taxpayer dollars. The system integrator arrangement creates an inherent risk, as the contractor is given more discretion to make certain program decisions. Along with this greater discretion comes the need for more government oversight and an even greater need to develop well-defined outcomes at the outset. The proper role of contractors in providing services to the government is currently the topic of some debate. I believe there is a need to focus greater attention on what types of functions and activities should be contracted out and which ones should not. There is also a need to review and reconsider the current independence and conflict-of-interest rules relating to contractors. Finally, there is a need to identify the factors that prompt the government to use contractors in circumstances where the proper choice might be the use of civil servants or military personnel. Possible factors could include inadequate force structure; outdated or inadequate hiring policies, classification, and compensation approaches; and inadequate numbers of full-time equivalent slots. The Deepwater program has also been designated as a performance-based acquisition. When buying services, federal agencies are currently required to employ this concept to the maximum extent feasible, structuring acquisitions around the results to be achieved rather than the manner in which the work is to be performed. That is, the government specifies the outcome it requires while leaving it to the contractor to propose how it will achieve that outcome. Performance-based contracts for services are required to include a performance work statement; measurable performance standards (i.e., in terms of quality, timeliness, quantity, etc.)
and the method of assessing contractor performance against these standards; and performance incentives, where appropriate. If performance-based acquisitions are not appropriately planned and structured, there is an increased risk that the government may receive products or services that exceed cost estimates, are delivered late, or are of unacceptable quality. In 2001, we reported that the Deepwater project faced risks, including the ability to control costs in the contract's later years; ensuring that procedures and personnel were in place for managing and overseeing the contractor; and minimizing potential problems with developing unproven technology. We noted that the risks could be mitigated to varying degrees, but not without management attention. Our assessment of the Deepwater program in 2004 found that the Coast Guard had not effectively managed the program or overseen the system integrator. We reported last year that the Coast Guard had revised its Deepwater implementation plan to reflect additional homeland security responsibilities as a result of the September 11, 2001, terrorist attacks. The revised plan increased overall program costs from the original estimate of $17 billion to $24 billion. Overall, the acquisition schedule was lengthened by 5 years, with the final assets now scheduled for delivery in 2027. The concerns we reported in 2004 and in subsequent assessments in 2005 and 2006 have centered on three main areas: program management, contractor accountability, and cost control through competition. While we recognize that the Coast Guard has taken steps to address our findings and recommendations, aspects of the Deepwater program will require continued attention, such as the risk involved in the system-of-systems approach with the contractor acting as overall integrator. A project of this magnitude will likely continue to experience other problems as more becomes known. In 2004, we reported that, more than a year and a half into the Deepwater contract, the key components needed to manage the program and oversee the system integrator had not been effectively implemented. For example, integrated product teams, composed of government and contractor employees, are the Coast Guard's primary tool for managing the program and overseeing the contractor. We found that the teams had not been effective due to changing membership, understaffing, insufficient training, lack of authority for decision making, and inadequate communication among members. Although some efforts have been made to improve the effectiveness of the integrated product teams, we have found that the needed changes are not yet sufficiently in place. In 2005, we reported that decision making was to a large extent stove-piped and that some teams lacked adequate authority to make decisions within their realm of responsibility. One source of difficulty for some team members has been that each of the two major subcontractors has used its own management systems and processes to manage different segments of the program. We noted that decisions on air assets were made by Lockheed Martin, while decisions regarding surface assets were made by Northrop Grumman. This approach can lessen the likelihood that a system-of-systems outcome will be achieved if decisions affecting the entire program are made without the full consultation of all parties involved.
In 2006, we reported that Coast Guard officials believed collaboration among the subcontractors to be problematic and that ICGS wielded little influence to compel decisions among them. For example, when dealing with proposed design changes to assets under construction, ICGS submitted the changes as two separate proposals from the two subcontractors rather than coordinating them into one coherent plan. According to Coast Guard performance monitors, this approach complicates the government's review of design changes because the two proposals often carried overlapping work items, thereby forcing the Coast Guard to act as the system integrator in those situations. In addition, we reported in 2004 that the Coast Guard had not adequately communicated to its operational personnel decisions on how new and old assets would be integrated and how maintenance responsibilities would be divided between government and contractor personnel. We also found that the Coast Guard had not adequately staffed its program management function. Despite some actions taken to more fully staff the Deepwater program, we reported that, as of January 2005, shortfalls remained. While 244 positions were assigned to the program, only 206 were filled, resulting in a 16 percent vacancy rate. In 2004, we found that the Coast Guard had not developed quantifiable metrics to hold the system integrator accountable for its ongoing performance and that the process by which the Coast Guard assessed performance after the first year of the contract lacked rigor. For example, the first annual award fee determination was based largely on unsupported calculations. Despite documented problems in schedule, performance, cost control, and contract administration throughout the first year, the program executive officer awarded the contractor an overall rating of 87 percent, which fell in the "very good" range. This rating resulted in an award fee of $4.0 million out of a maximum of $4.6 million. We also reported in 2004 that the Coast Guard had not begun to measure the system integrator's performance on the three overarching goals of the Deepwater program—maximizing operational effectiveness, minimizing total ownership costs, and satisfying the customers. Coast Guard officials told us that metrics for measuring these objectives had not been finalized and that they therefore could not accurately assess the contractor's performance against the goals. At the time, however, the Coast Guard had no time frame for accomplishing this measurement. In 2004, we reported that, although competition among subcontractors was a key vehicle for controlling costs, the Coast Guard had neither measured the extent of competition among the suppliers of Deepwater assets nor held the system integrator accountable for taking steps to achieve competition. As the two major subcontractors to ICGS, Lockheed Martin and Northrop Grumman have sole responsibility for determining whether to provide the Deepwater assets themselves or to hold competitions—decisions commonly referred to as "make or buy." We noted that the Coast Guard's hands-off approach to make-or-buy decisions and its failure to assess the extent of competition raised questions about whether the government would be able to control Deepwater program costs. Failure to control costs can result in waste of taxpayer dollars. Along with several of my colleagues in the accountability community, I have developed a definition of waste.
As we see it, waste involves the taxpayers in the aggregate not receiving reasonable value for money in connection with any government-funded activities, due to an inappropriate act or omission by players with control over or access to government resources (e.g., executive, judicial, or legislative branch employees, contractors, grantees, or other recipients). Importantly, waste involves a transgression that is less than fraud and abuse, and most waste does not involve a violation of law. Rather, waste relates primarily to mismanagement, inappropriate actions, or inadequate oversight. We made 11 recommendations in 2004 in the areas of management and oversight, contractor accountability, and cost control through competition. In April 2006, we reported that the Coast Guard had implemented five of them. Actions had been taken to revise the Deepwater human capital plan; develop measurable award fee criteria; implement a more rigorous method of obtaining input from Coast Guard monitors on the contractor's performance; include in the contractor's performance measures actions taken to improve the integrated product teams' effectiveness; and require the contractor to notify the Coast Guard of subcontracts over $10 million that were awarded to the two major subcontractors. The Coast Guard had begun to address five other recommendations by initiating actions to establish charters and training for integrated product teams; improve communications with field personnel regarding the transition to Deepwater assets; devise a time frame for measuring the contractor's progress toward establishing criteria to determine when to adjust the project baseline; and develop a plan to hold the contractor accountable for ensuring adequate competition among suppliers. Based on our work, we determined that these recommendations had not been fully implemented. The Coast Guard disagreed with and declined to implement one of our recommendations: to establish a baseline to determine whether the system-of-systems acquisition approach is costing the government more than the traditional asset replacement approach. While we stand behind our original recommendation, the Coast Guard maintains that the cost of implementing it would be excessive. In addition to the overall management issues discussed above, there have been problems with the design and performance of specific Deepwater assets. For example, in February 2006, the Coast Guard suspended design work on the Fast Response Cutter (FRC) due to design risks such as excessive weight and horsepower requirements. The FRC was intended as a long-term replacement for the legacy 110-foot patrol boats. Coast Guard engineers raised concerns about the viability of the FRC design (which involved building the FRC's hull, decks, and bulkheads out of composite materials rather than steel) beginning in January 2005. In February 2006, the Coast Guard suspended FRC design work after an independent design review by third-party consultants demonstrated, among other things, that the FRC would be far heavier and less efficient than a typical patrol boat of similar length, in part because it would need four engines to meet Coast Guard speed requirements. In moving forward with the FRC acquisition, the Coast Guard will end up with two classes of FRCs. The first class of FRCs to be built would be based on a design adapted from a patrol boat already on the market, to expedite delivery.
The Coast Guard would then pursue development of a follow-on class that would be completely redesigned to address the problems in the original FRC design plans. Coast Guard officials now estimate that the first FRC delivery will slip to fiscal year 2009, at the earliest, rather than 2007 as outlined in the 2005 Revised Deepwater Implementation Plan. In addition to the problems with the FRC design, problems have also been discovered with the long-term structural integrity of the National Security Cutter's (NSC) design, which could have operational and financial impacts for the Coast Guard. The Commandant of the Coast Guard recently stated that internal reviews by Coast Guard engineers, as well as by independent analysts, have concluded that the NSC as designed will need structural reinforcement to meet its expected 30-year service life. In addition, a recent report by the DHS Inspector General indicated that the NSC design will not achieve a 30-year service life based on an operating profile of 230 days underway per year in General Atlantic and North Pacific sea conditions and added that Coast Guard technical experts believe the NSC's design deficiencies will lead to increased maintenance costs and reduced service life. In an effort to address the structural deficiencies of the NSC, the Commandant has stated that the Coast Guard is taking a two-pronged approach. First, the Coast Guard is working with the contractors to enhance the structural integrity of hulls three through eight, which have not yet been constructed. Second, after determining that the NSC's structural deficiencies are not related to the safe operation of the vessel in the near term, the Coast Guard has decided to address the deficiencies of hulls one and two as part of depot-level maintenance planned for several years after they are delivered. The Commandant stated that he decided to delay the repairs to the first two NSC hulls in an effort to prevent further cost increases or delays in construction and delivery. Further, the Deepwater program's conversion of the legacy 110-foot patrol boats to 123-foot patrol boats has also encountered performance problems. The Coast Guard had originally intended to convert all 49 of its 110-foot patrol boats into 123-foot patrol boats in order to increase the patrol boats' annual operational hours. The conversion program was also intended to add capability to the patrol boats, such as enhanced C4ISR capabilities and stern launch and recovery capability for a small boat. However, the converted 123-foot patrol boats began to display deck cracking and hull buckling and developed shaft alignment problems, and the Coast Guard elected to stop the conversion process at eight hulls upon determining that the converted patrol boats would not meet their expanded post-9/11 operational requirements. The design and performance problems described above have clear operational consequences for the Coast Guard. In the case of the 123-foot patrol boats, the hull performance problems led the Coast Guard to suspend all normal operations of the eight converted 123-foot patrol boats effective November 30, 2006. The Commandant of the Coast Guard has stated that having reliable, safe cutters is "paramount" to executing missions such as search and rescue and migrant interdiction. The Coast Guard is exploring options to address operational gaps resulting from the suspension of the 123-foot patrol boat operations.
In regard to the suspension of FRC design work, as of our June 2006 report, Coast Guard officials had not yet determined how changes in the design and delivery date for the FRC would affect the operations of the overall system-of-systems approach. However, because the deliveries of Deepwater assets are interdependent within this acquisition approach, schedule slippages and uncertainties associated with potential changes in the design and capabilities of the new assets have increased the risk that the Coast Guard may not meet its expanded homeland security performance requirements within given budget parameters and milestone dates. Given the size of DHS and the scope of its acquisitions, we are continuing to assess the department's acquisition oversight process and procedures in ongoing work. For example, we are currently reviewing DHS's use of contractors to provide management and professional services, including the roles they are performing and how their performance is overseen. In addition, the conference report to the Department of Homeland Security Appropriations Act for Fiscal Year 2007 directed DHS's Chief Procurement Officer to develop a procurement oversight plan identifying necessary oversight resources and how improvements in the department's performance of its procurement functions will be achieved. We have been directed to review the plan and provide our observations to congressional committees. We are also reviewing the department's use of performance-based acquisitions. We will also continue to review Deepwater implementation and contract oversight. We are currently reviewing aspects of the Deepwater program for the House and Senate Appropriations Committees' Subcommittees on Homeland Security. Our objectives are to review (1) the status of the development and delivery of the major aviation and maritime assets that comprise the Coast Guard's Deepwater program; (2) the history of the contract, design, fielding, and grounding of the converted 123-foot patrol boats and the operational adjustments the Coast Guard is making to account for the removal from service of the 123-foot patrol boats; and (3) the status of the Coast Guard's implementation of our 2004 recommendations on Deepwater contract management for improving Deepwater program management, holding the prime contractor accountable for meeting key program goals, and facilitating cost control through competition. We will share our results with those committees in April of this year. Due to the complexity of its organization, DHS is likely to continue to face challenges in unifying the acquisition functions of its components and overseeing their acquisitions—particularly those involving large and complex investments. Although the Coast Guard has taken actions to improve its management of the Deepwater program and oversight of the system integrator, problems continue to emerge as the program is implemented. DHS and the Coast Guard face the challenge of effectively managing this program to obtain desired outcomes while making decisions that are in the best interest of the taxpayer. Given its experience with Deepwater, the department would be wise to apply lessons learned to its other major, complex acquisitions, particularly those involving a system integrator. Mr. Chairman, that concludes my statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For information about this testimony, contact Steve Caldwell at (202) 512-9610 or John Hutton at (202) 512-7773.
Other individuals making key contributions to this testimony include Michele Mackin, Christopher Conrad, and Adam Couvillion. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | In January 2003, GAO designated the Department of Homeland Security's (DHS) implementation and transformation as high risk because of the size and complexity of the effort and the existing challenges faced by the components being merged into the department. The success of the effort to integrate numerous agencies and organizations into one cabinet-level department rests in large part on DHS's ability to effectively acquire the wide range of goods and services it needs to achieve its mission of protecting the nation from terrorism. DHS is undertaking a number of large, complex investments as the federal government increasingly relies on contractors for roles and missions previously performed by government employees. One of the department's largest investments, the Deepwater program, now estimated to cost $24 billion, is the Coast Guard's major effort to replace or modernize its aircraft and vessels. Rather than using a traditional acquisition approach, the Coast Guard is using a system integrator to design, construct, deploy, support, and integrate the Deepwater assets. In this testimony, the Comptroller General discussed (1) the overarching challenges DHS faces in establishing an effective acquisition organization, (2) GAO's prior work on Coast Guard and contractor management of the Deepwater program, and (3) the status of GAO's ongoing reviews. GAO has reported in the past on acquisition management at several components of DHS and has assessed the department's overall acquisition management and oversight efforts. A common theme in these reports is DHS's struggle, from the outset, to provide adequate support to its mission components in acquiring goods and services and to provide departmentwide oversight of its acquisition function. DHS has a stated goal of integrating the acquisition function more broadly across the department. GAO has reported that this goal has not yet been accomplished and has identified key impediments to achieving it. A management directive intended to integrate the acquisition line of business did not provide the Chief Procurement Officer with the enforcement authority needed in practice, and it does not pertain to all component agencies. Also, the procurement organizations within the department remained somewhat autonomous, and centralized acquisition oversight had not been implemented. While DHS's review process for major investments adopts some best practices, key decision-making reviews at certain points are not required. Investments that are not reviewed at the appropriate points can face a range of problems, such as redesign, resulting in significant cost increases and schedule delays. The Coast Guard's Deepwater program illustrates problems that can occur when effective program management and contractor oversight are not in place. In 2001, GAO described the Deepwater project as "risky" due to the unique, untried acquisition strategy for a project of this magnitude within the Coast Guard: a system-of-systems approach with the contractor as the integrator.
In 2004, GAO reported that, well into the contract's second year, key components needed to manage the program and oversee the system integrator's performance had not been effectively implemented. For example, integrated product teams, composed of government and contractor employees, are the Coast Guard's primary tool for managing the program and overseeing the contractor. GAO found that the teams had not been effective due to changing membership, understaffing, insufficient training, lack of authority for decision making, and inadequate communication among members. GAO also reported that, despite documented problems in schedule, performance, cost control, and contract administration throughout the first year of the Deepwater contract, the contractor had received a rating of 87 percent, which fell in the "very good" range and resulted in an award fee of $4.0 million. GAO's more recent work found that, while the Coast Guard had taken steps to address some of the problems, concerns remained about program management and contractor oversight. In addition to these overall management issues, there have been problems with the design and performance of specific Deepwater assets. Given the size of DHS and the scope of its acquisitions, GAO is continuing to assess the department's acquisition oversight process and procedures in ongoing work. GAO is also currently reviewing the status of the Deepwater program's implementation and contractor oversight.
Medicaid operates as a joint federal-state program to finance health care coverage for certain categories of low-income individuals, over 11 million of whom are elderly or disabled. In total, Medicaid cost almost $258 billion in fiscal year 2002, and the Congressional Budget Office (CBO) projects that spending will double between fiscal years 2003 and 2012. Today, Medicaid ranks as the third largest mandatory spending program in the federal budget and represents the largest source of federal funds to the states, accounting for 41 percent of all federal outlays for grants to states and local governments in fiscal year 2001. In terms of overall state expenditures, outlays for Medicaid rank second only to elementary and secondary education, accounting for an estimated 15 percent of general fund expenditures in state fiscal year 2002. Within broad federal guidelines, states have considerable flexibility in how they administer their Medicaid programs. The federal statute requires state programs to cover certain services and populations, such as nursing home services for qualifying elderly individuals and for disabled individuals aged 21 and over. Each state determines what medical services to cover, establishes eligibility requirements, sets provider payment rates, and develops its own administrative structure. As a result, Medicaid essentially operates as 56 separate programs: 1 in each of the 50 states, the District of Columbia, Puerto Rico, and each of the U.S. territories. Nursing homes care for people with a wide range of clinical conditions and provide a variety of services, including basic custodial care, medical social services, skilled nursing care, and rehabilitative therapies. Medicaid is the single largest funding source for nursing home services, providing about one-half of total expenditures for these services in 2003. Medicaid supports the care of an even larger share of nursing home residents, paying at least in part for the services provided to approximately two in three residents nationwide. Federal requirements regarding states' methods for reimbursing nursing homes for the services they provide to Medicaid residents have changed over time. A 1972 amendment to the Social Security Act required that states reimburse nursing homes on a reasonable cost-related basis. Under this requirement, states developed methods to identify nursing homes' reasonable costs and to set rates based on these costs, both of which were subject to federal verification and approval. Nursing home providers filed a number of federal lawsuits contesting the adequacy of states' payment rates. In 1980, Congress passed legislation, commonly referred to as the Boren Amendment, which provided that Medicaid payment rates for nursing homes had to be "reasonable and adequate to meet the costs which must be incurred by efficiently and economically operated facilities." The Boren Amendment also transferred responsibility for verifying that rates complied with these standards from the federal government to the states; however, it did not grant states unlimited discretion in developing payment rates. The 1980 Conference Report that accompanied the Boren Amendment stated that rates should not be developed "solely on the basis of budgetary appropriations" and required states to submit annual assurances to the Secretary of Health and Human Services that rates complied with the Boren Amendment's standards.
The Conference Report also clarified that while the Boren Amendment was intended to give states discretion to develop the methods and standards on which payment rates would be based, the federal government retained final authority in approving states’ rates. During the roughly 17 years following the enactment of the Boren Amendment, providers in many states filed suits alleging that Medicaid payment rates were not sufficient and therefore violated federal requirements that rates be reasonable and adequate to cover the costs of efficiently and economically operated nursing homes. In 1990, the Supreme Court found that the amendment imposed a binding obligation on states to adopt reasonable and adequate payment rates and held that providers could sue to enforce this obligation and challenge Medicaid payment rates in federal court. After this decision, nursing home providers continued to rely on the courts to review payment rates they considered insufficient and verify that these rates complied with federal payment standards. The Balanced Budget Act of 1997 (BBA) repealed the Boren Amendment, providing states with increased flexibility to develop approaches to pay nursing homes that participate in Medicaid. States are no longer required to submit annual rate findings to the federal government but instead must develop and implement a public process for determining rates, which requires that states publish all proposed and final rates—including their methodologies and justifications—and ensure that providers, beneficiaries, and their representatives are given reasonable opportunity to review and comment on rates. Additionally, states must continue to ensure that payments are consistent with efficiency, economy, and quality of care standards. In 2003, states faced their third consecutive year of fiscal pressure, with revenue collections again falling short of planned expenditures. A June 2003 survey conducted by the National Association of State Budget Officers (NASBO) and the National Governors Association (NGA) found that 30 states collected less revenue in fiscal year 2003 than they planned for in their budgets, with sales tax collections 2.5 percent lower than originally budgeted and personal and corporate income tax collections 8.6 percent and 8.3 percent lower than expected, respectively. According to an April 2003 survey conducted by the National Conference of State Legislatures (NCSL), 39 states and the District of Columbia faced budget shortfalls at some point during fiscal year 2003, totaling over $29 billion. At the same time that states have experienced shortfalls in their expected revenue collections, they have also experienced significant growth in Medicaid expenditures. According to CMS, the state and local share of Medicaid spending grew almost 14 percent in fiscal year 2002 and is projected to grow almost 10 percent in 2003. In their June 2003 survey, NASBO and NGA reported that 25 states experienced Medicaid budget shortfalls in state fiscal year 2002, and 28 states reported these shortfalls in 2003. Fiscal pressures have compelled states to confront difficult choices, especially because 49 states and the District of Columbia are required to balance their budgets. Recognizing that the Medicaid program represents a large component of many states’ budgets, virtually all states have implemented or planned new cost-containment measures in order to control Medicaid spending growth in 2003, according to another recent state survey. 
For example, 45 states reported that they planned to reduce spending on prescription drugs, which is an optional benefit, during fiscal year 2003. In addition, benefit reductions, such as limits for vision care and dental services, and changes to eligibility requirements, such as a lowered income threshold for Medicaid program eligibility, were additional cost-containment measures used or proposed by states. In May 2003, Congress passed the Jobs and Growth Tax Relief Reconciliation Act, which included $20 billion in fiscal relief to state and local governments. Of these funds, $10 billion is earmarked for Medicaid, providing temporary enhancements to the federal share of Medicaid funding through June 2004 to help states maintain Medicaid services and eligibility. The remaining $10 billion in fiscal relief is divided among the states based on population and can be used to assist states in providing government services. Recognizing the importance of spending Medicaid dollars effectively, the 19 states we reviewed have designed methods to develop nursing home payment rates that include incentives for homes to deliver care efficiently, operate economically, and concentrate resources on direct resident care. While nursing home payment rates in most of these states are related to individual homes’ costs of delivering needed services, most states also limit payment for certain types of costs and many provide additional payments for direct resident care. Most of these states also regularly adjust rates to reflect changes in homes’ costs or in the care needs of the residents that homes serve. Table 1 provides an overview of various payment features used by the 19 states we reviewed as of September 2003. These features will be discussed below in greater detail. Because states pursue different strategies to meet their various objectives, methods to determine rates differ considerably among states. However, over half of the states we reviewed include at least five such features in their payment methods, with states most commonly using payment ceilings and annual rate updates. All 19 states we reviewed base the per diem, or daily, rate they pay to nursing homes on costs, as reported in cost reports. While 4 states—California, Massachusetts, Oregon, and Texas—use the average or median costs of all homes to pay the same, flat rate, with some adjustments, to all homes or homes within a specified group, the remaining 15 states compute a rate for each home based on the individual home’s costs. States that pay home-specific rates attempt to make more effective use of their resources for nursing homes. They avoid paying lower-cost homes rates significantly in excess of their costs, which can occur when rates are based on the average or median costs across homes. In addition, by not making such excess payments to lower-cost homes, states with home-specific rates can use the same overall budget to pay more of the higher-cost homes rates that are closer to their costs. States design their payment methods to encourage nursing homes to deliver care efficiently and economically. For example, all 19 states develop their payment rates prospectively, or prior to the time during which the rates apply, using historical cost reports. Prospective rates encourage nursing homes to operate efficiently and incur only necessary costs. Homes that deliver care for less than the payment amount profit; conversely, providers experience losses if costs are higher than the payment rate. 
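To make these mechanics concrete, the following sketch illustrates, in simplified form, how a prospective, cost-based per diem rate might be computed under a ceiling tied to the median of homes' reported costs. It is a minimal illustration only: the figures, the 110 percent ceiling, and the 3 percent inflation factor are hypothetical assumptions for this sketch and do not represent any particular state's method.

# Minimal sketch of a prospective, cost-based per diem rate. All figures,
# the 110 percent ceiling, and the inflation factor are hypothetical.
import statistics

def prospective_rate(reported_cost: float, ceiling: float, inflation: float) -> float:
    """Pay the lower of a home's reported daily cost or the ceiling,
    trended forward to the rate year by an inflation factor."""
    return min(reported_cost, ceiling) * inflation

daily_costs = [95.0, 102.0, 110.0, 118.0, 130.0]  # hypothetical cost-report data
ceiling = 1.10 * statistics.median(daily_costs)   # ceiling at 110% of the median

for cost in daily_costs:
    rate = prospective_rate(cost, ceiling, inflation=1.03)
    # Homes below the prospective rate keep the difference;
    # homes with costs above the ceiling absorb the shortfall.
    print(f"reported cost {cost:6.2f} -> per diem rate {rate:6.2f} ({rate - cost:+.2f} per day)")

Because the rate is fixed before the rate year begins, a home's spending decisions during the year change its margin rather than its payment, which is the incentive the states describe.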
Seven of the states we reviewed use explicit efficiency incentives to further encourage homes to minimize spending by providing them with additional payment if they keep their spending below a certain amount. For example, Connecticut nursing homes with indirect care or administrative costs below the median of all homes’ costs in these categories have up to 25 percent of this difference incorporated into their per diem rates. (See app. II for more detail on how states develop nursing home payment rates.) To further encourage homes to operate efficiently, the 15 of the 19 states that pay home-specific rates place ceilings, or limits, on the costs that are reflected in their nursing home payment rates. The ceiling is typically based on a percentage of the median costs, or a certain percentile of costs, for all homes in the state or within a category of homes, and individual homes’ rates are typically determined by the lower of their own costs or the ceiling. These ceilings encourage homes to control spending as they will not be reimbursed for costs that exceed these ceilings. Since the majority of homes have demonstrated that they can provide care at costs below the ceiling, states may regard costs above the ceiling as excessive. (In the four states that generally pay a flat rate to all homes or to all homes in a group, the flat rate also promotes efficiency, since homes with costs below the rate are able to retain the difference.) Some states set separate ceilings for peer groups—homes that have similar labor markets and associated wage costs or homes of comparable size (i.e., homes with a large or small number of beds) that should operate at similar levels of efficiency. For example, since costs per day may vary by geographic location—such as urban versus rural areas—establishing peer groups by location allows states to set higher ceilings for homes in the more costly areas. Peer groups may be unnecessary in states with ceilings that are set well above the median costs and where most homes have costs below the ceilings, or in states where wages vary little across areas. In addition to imposing ceilings, many states use other mechanisms to limit the costs that they recognize when determining homes’ per diem rates. While in some cases these mechanisms may also encourage efficiency, in other cases they may result in fewer homes receiving their full costs than what the ceiling levels indicate. For example, regardless of increasing nursing home costs, Colorado limits the annual increase in administrative costs it recognizes to 6 percent, while South Dakota allows no more than an 8 percent annual increase in overall payment rates. In addition, although Rhode Island and North Dakota rebase their per diem rates regularly, they do not rebase cost-center ceilings as frequently. For example, Rhode Island inflates cost-center ceilings annually instead of rebasing them, and North Dakota rebases ceilings every 3 years on average, inflating them during the interim years. (See app. II for descriptions of additional limits states place on nursing home payments.) Despite the various ways states encourage nursing home efficiency, industry representatives and industry-sponsored studies nonetheless raise concerns that Medicaid payments do not cover the full costs of all nursing homes. For example, a 2002 industry-sponsored study reported that nursing home costs for Medicaid-covered residents in 2000 exceeded Medicaid payment rates an average of $10 per resident day in the 37 states included in the study. 
In addition, industry representatives in 7 of the states we reviewed expressed concern that state payment methods do not adequately account for increases in certain costs, such as liability insurance or direct resident care staff wages and benefits. However, by incorporating certain features, such as ceilings, into their nursing home payment methods, states have intentionally designed their payment methods so that not all homes receive their full costs and so that lower-cost homes, which are more likely to be efficient and economical, have payment rates nearer to their costs. Through the design of their payment methods, states generally seek to encourage nursing home spending on direct resident care. All 19 states we reviewed divide nursing home costs into categories, or cost centers, with common categories being direct resident care, indirect care, administrative, and capital (see table 2). By varying their payment policies for each category, most states seek to target more of their funds to direct resident care. How states establish ceilings or efficiency incentives for each cost center may encourage nursing homes to spend more money on direct resident care than on other areas. In nine of the states we reviewed that pay home-specific rates, the direct resident care ceiling is higher than the administrative ceiling, thus allowing a higher proportion of homes to have their payments based on their total direct resident care costs than is the case for their administrative costs. For example, for all homes within each peer group in Connecticut, the direct resident care ceiling is set at 135 percent of the median direct resident care costs while the administrative ceiling is set at 100 percent of the median administrative costs. In addition, five of the seven states with efficiency incentives that reward homes for spending less do not apply them to direct resident care costs, thereby minimizing the incentive for homes to restrict spending in this area. Further, nine of the states we reviewed used add-on payments to reimburse wages or other expenses for staff who provide direct resident care or to promote the provision of high-quality direct resident care. For example, in 2000, Massachusetts began providing an add-on payment to nursing homes for certified nursing assistants (CNA), who assist residents with activities such as bathing and eating. This add-on is based on CNA salaries and Medicaid nursing home utilization. Because homes often use add-on payments to increase their spending on direct resident care, these payments may lead to higher costs on homes’ cost reports and therefore could result in higher future per diem rates. To reflect changes in nursing homes’ costs, 17 of the 19 states we reviewed regularly calculate new payment rates or adjust existing rates for inflation. To rebase, or calculate new rates, states generally use costs as reported in nursing homes’ most recent cost reports that reflect inflation or other cost changes such as those due to more expensive technologies, a different staff mix, or changing direct resident care needs. Nine of the 19 states we reviewed rebase rates annually, and 8 states rebase homes’ rates every 2 to 4 years. The 2 remaining states, however, rebase infrequently, if ever; Illinois has only rebased rates once in the past 9 years, and New York has not fully rebased homes’ rates since 1986. 
Most states we reviewed also apply a standard inflation factor, such as the Consumer Price Index (CPI) or the skilled nursing facility (SNF) market basket index, to adjust rates during years they do not rebase or to reflect inflation between the midpoint of the cost report year and the midpoint of the year when the rates will be paid, a period that generally ranges from 18 to 36 months. However, Illinois has not consistently updated rates for inflation during non-rebase years since 1994, and Iowa’s new nursing home payment method, which was fully implemented on July 1, 2003, does not have a provision for adjusting rates during non-rebase years. In addition, rather than using a standard inflation factor, Connecticut and Illinois use legislatively determined amounts to update rates when they do not rebase. These amounts vary from year to year and are influenced by budget availability. Instead of paying rates that are based on the costs required to care for a nursing home’s residents during the cost reporting period, 12 of the 19 states we reviewed use case-mix systems to tie payment to the costs associated with a home’s current resident care needs. Using a variety of methods, states classify homes’ residents by the level of care they require and adjust payment rates to reflect the costs associated with treating current residents with different levels of need. While the rate adjustment occurs with varying frequency, most states adjust rates for case-mix two to four times a year. Adjusting rates for case-mix may encourage homes to accept residents who require more expensive care, and it also provides states with a tool to compare homes’ costs more appropriately and to avoid penalizing homes that have higher costs due to a more costly mix of residents. In addition, case-mix-adjusted rates particularly help target payments in states that otherwise pay the same, flat rate. Three of the four flat-rate states we reviewed make case-mix adjustments to the rates so payments more closely approximate the costs likely incurred by individual homes for treating residents. Recent state fiscal pressures have not resulted in widespread reductions in Medicaid payment rates to nursing homes in most states we reviewed, although all of these states modified how they pay nursing homes from fiscal years 1998 through 2004. While in some cases modifications to payment methods have clearly increased or decreased payment rates, in other instances the effect of these modifications on payment rates for individual homes is mixed. Further, in nearly three-quarters of the states we reviewed, nursing home per diem rates grew, on average, by an amount that exceeded the SNF market basket index for state fiscal years 2001 through 2003, similar to the years immediately following the repeal of the Boren Amendment. To avoid making significant changes to nursing homes’ payment rates, many states reported that they relied on existing resources, such as budget stabilization funds and tax increases, to generate additional funding. Other factors have also influenced the nature and extent of states’ changes to nursing home payment rates. Even with recent temporary federal fiscal relief, however, officials in some states suggest that nursing home payment reductions are possible in the future. Over the past several years, the states we reviewed have faced increasing budget pressures, and all reported experiencing fiscal pressure in fiscal year 2003. These budget pressures followed consecutive years of significant economic growth in many states. 
For example, through state fiscal year 2000, Connecticut experienced 10 years of budget surpluses; however, in state fiscal year 2001 the surpluses ended, and the state’s deficit was over $800 million. Also, in 2001, Massachusetts began experiencing increased fiscal pressures mainly because of decreased tax revenues and lower capital gains. Irrespective of shifting fiscal pressures experienced by these states, their modifications to nursing home payment methods have not resulted in widespread payment reductions to nursing homes from fiscal years 1998 through 2004. During this time, all 19 states we reviewed either modified components of their payment methods, such as changing cost-center ceilings or implementing case-mix systems, or created new payment methods, as was the case in Arkansas and Iowa. However, the extent to which states changed specific features of their payment methods generally remained constant during this time, with varying effects on payment rates to individual homes within states. (See app. III for a list of selected state changes.) In addition, despite each of the 19 states experiencing recent fiscal pressure, only 4 states—Illinois, Massachusetts, Michigan, and Texas—explicitly cut the per diem rates paid to all nursing homes at some point during state fiscal years 1998 through 2004, and the rate reduction was for less than 1 year in 2 of these states. For example, for the 3-month period of March through May 2003, Massachusetts reduced payment rates to nursing homes by approximately 2.5 percent, but increased payment rates in June 2003 by about 6.3 percent. Similarly, Michigan reduced nursing home rates from January through September 2002 by approximately 1 percent. With the start of Michigan’s fiscal year 2003 (October 1, 2002), this reduction was lifted; however, facing budgetary constraints, the state again reduced nursing home payment rates from March 2003 through September 2003 by roughly 1.85 percent. While reductions in per diem rates were temporary in these 2 states, the reductions in per diem rates in Illinois and Texas were in place for longer periods. Illinois, for example, implemented an across-the-board 5.9 percent cut to existing rates for all Medicaid providers, including nursing homes, in July 2002, and froze payment rates at this reduced level for fiscal year 2004, which began on July 1, 2003. Similarly, in its 2004/2005 biennial budget, which began September 1, 2003, Texas reduced payment rates to Medicaid providers, with nursing home per diem rates being reduced by 1.75 percent from their fiscal year 2003 levels. In addition to these four states, Oregon froze Medicaid payment rates to nursing homes in fiscal year 2003 at fiscal year 2002 rates and extended this freeze at the beginning of fiscal year 2004. Beginning on July 1, 2003, Connecticut froze Medicaid payment rates to nursing homes at January 2003 levels and also reduced the level of payment increases granted to other Medicaid long-term care providers. The effect of states’ other modifications on payment methods varies. While some changes have obvious positive or negative effects on payment rates, the effect of other changes on payments to individual nursing homes is mixed. For example, New Jersey’s decreased ceiling for administrative and indirect care costs—from 105 to 100 percent of the median costs for all homes—and Michigan’s elimination of add-on payments for quality incentives and direct resident care staff wages likely lowered payment rates to some extent for some nursing homes. 
Conversely, payment to some nursing homes in New York and Vermont increased because of recently implemented add-on payments for direct resident care staff wages. Effects of other changes on nursing home payments, such as Colorado’s implementation of a case-mix system in 2000 or the addition of two counties to California’s Bay Area peer group in 2002, could either increase or decrease payment rates depending on the home. Although the effect that changes to payment methods have on rates for individual nursing homes may be mixed, average per diem rates in the states we reviewed generally have kept pace with increasing nursing home costs as measured by the SNF market basket index from state fiscal years 1998 through 2003. As figure 1 shows, from state fiscal years 2001 through 2003—a period during which all 19 states we reviewed were experiencing increased fiscal pressures—the average annual percentage change in states’ average per diem rates in 14 of the 19 states exceeded the SNF market basket index. This trend is similar to what occurred to rates during the years immediately following the repeal of the Boren Amendment—1998 through 2000—when states’ fiscal conditions were generally much more positive. In that earlier period, the average annual percentage change in states’ average per diem rates met or exceeded the SNF market basket index in 14 of these states, although the states that fell below the SNF market basket index differed somewhat between the two periods. From state fiscal years 2001 through 2003, the average annual change in per diem rates fell below the SNF market basket index in five states—California, Connecticut, Illinois, Massachusetts, and New York. The factors that contributed to per diem rates falling below this index varied among these states. For example, Illinois’ rate reduction in fiscal year 2003 of almost 6 percent contributed to the average rate change falling below the SNF market basket index. In addition, the lack of regular rebasing likely contributed to lower per diem rates in Illinois and New York. Illinois rebased rates only once from fiscal years 1994 through 2001, and as previously noted, New York has not fully rebased rates since 1986. In addition, industry officials in some states told us that the inflation factor used to update rates in non-rebase years is insufficient to meet nursing homes’ changing costs. For example, industry officials in New York said that the inflation factor the state uses to update homes’ rates annually, the CPI, does not reflect increasing health care costs. In addition, Connecticut—which rebases rates at least once every 2 to 4 years—uses a legislatively set inflation factor to increase rates in non-rebase years, which for the past several years has been limited to approximately 2 percent. Industry and Medicaid officials contend that this legislated amount, which has consistently fallen below the SNF market basket index, does not correspond with increases in actual nursing home costs. To help balance their budgets, states we reviewed have relied on alternative funding sources—including budget stabilization and tobacco settlement funds—and have enhanced revenue by increasing taxes (see table 3). Sixteen of the 19 states we reviewed reported using alternative funding sources, such as tobacco settlement funds, budget stabilization funds, cigarette tax increases, and Medicaid trust funds, to deal with their states’ budgetary pressures. Most commonly, states relied on tobacco settlement funds to ease fiscal pressures. 
While many of the states we reviewed have employed alternative funding sources or cigarette tax increases, not all the states relied on these funds to cope with their budget situations. For instance, all 19 states received tobacco settlement funds, yet only 12 used these funds from 1998 through 2003 to respond to fiscal pressures. To help fund Medicaid nursing home payments in particular, several states rely on nursing home provider taxes, and in light of recent fiscal pressures, an increasing number of states have recently adopted or proposed these taxes in an effort to fund nursing home payments or to avert service reductions. Of the 19 states we reviewed, 8 currently have provider taxes for nursing homes, with at least 4 of these states implementing the tax since 2001, when fiscal pressures began increasing in many states. In addition, 5 of the states reviewed currently have proposals to adopt a provider tax on nursing homes pending CMS approval (see table 4). Of all types of providers, nursing homes were most commonly subject to new provider taxes in state fiscal years 2003 and 2004, according to a recent survey of all 50 states and the District of Columbia. Officials in some states told us that they have avoided making substantial reductions to nursing home payment rates because of other factors. For example, state legislative or regulatory action is typically required to change nursing home payment methods, and garnering sufficient support for such changes—especially for rate reductions—is often difficult. In addition, the nursing home industry has actively worked to avoid decreases in payment rates in several states. For example, industry officials in Alabama, Iowa, and Texas cited campaigns that they considered successful in various ways, such as preventing rate reductions or encouraging rate increases. Specifically, nursing home industry officials in Iowa said that two proposed nursing home rate cuts were defeated in part because of their opposition. Also, industry officials in Texas said that through their efforts, nursing homes were able to obtain rate increases for fiscal year 2002. Although the extent of states’ continued fiscal pressure is unknown, states expect their poor fiscal situations to continue through fiscal year 2004. According to an April 2003 NCSL study, 28 states and the District of Columbia expected budget shortfalls totaling over $53 billion in fiscal year 2004. These budget gaps may be difficult to fill as many states reported that they have depleted or nearly depleted their alternative funding sources. Over half of the states we reviewed that used budget stabilization funds, and 3 of the 12 states that used tobacco settlement funds, reported having depleted or nearly depleted these sources. Some states we reviewed reported their plans to confront continuing budget pressures in fiscal year 2004. As previously noted, at least six of these states reduced or froze their nursing home payment rates at some point during the past 2 fiscal years. In addition, these and other states have recently undertaken or are currently considering actions to reduce future nursing home payment rates. For example, California rebased nursing home rates for the 2004 rate year, which began on August 1, 2003, but has already frozen 2005 payment rates at current levels. Similarly, in August 2003, Connecticut froze per diem rates at their January 2003 levels through December 2004. 
Even with recent temporary federal fiscal relief, officials in some states suggest that nursing home payment reductions are possible in the future. For example, a Michigan state official indicated that reductions in 2004 per diem rates are probable because the legislative appropriation is likely insufficient to rebase rates. We provided a draft of this report to the Medicaid Director in each of the 19 study states for technical review. All states generally agreed with our characterization of their respective nursing home payment methods and, when necessary, provided clarifying or technical comments, which we incorporated as appropriate. In addition, we obtained oral comments on a draft of this report from representatives of two nursing home associations, the American Health Care Association (AHCA) and the American Association of Homes and Services for the Aging (AAHSA). We have modified the report, as appropriate, in response to their technical comments. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Administrator of CMS and appropriate congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-7118. An additional contact and other staff members who made contributions to this report are listed in appendix IV. To examine Medicaid nursing home payment methods and rates, we selected 20 states for our review. The 20 states included the following: 10 states (1 from each of the 10 Centers for Medicare & Medicaid Services (CMS) regions) with the largest decline or smallest growth in revenue from 2000 through 2002 within their regions, based on data in the November 2002 fiscal survey of states conducted by the National Association of State Budget Officers (NASBO) and the National Governors Association (NGA); 5 states with the largest population based on 2000 Census data; and 5 states with the highest number of Medicaid nursing home residents per capita, as indicated by the most recent data in CMS’s Online Survey Certification and Reporting (OSCAR) database (see table 5). Nationwide, these 20 states represented approximately 62 percent of Medicaid nursing home expenditures in fiscal year 2001 and 59 percent of Medicaid nursing home residents in fiscal year 2000, according to the most recently available CMS data. In each of the 20 states, we interviewed officials from the Medicaid and budget offices. From these officials, we obtained information about nursing home payment methods (including changes) for state fiscal years 1998 through 2004 and per diem rates for state fiscal years 1998 through 2003. In addition, to gain a broader understanding of Medicaid nursing home payments, we interviewed representatives from the offices of the American Health Care Association (AHCA) and/or the American Association of Homes and Services for the Aging (AAHSA) in each of the 20 states. We also interviewed national representatives of AHCA and AAHSA and consultants and experts in the field of Medicaid nursing home payment. Because Arizona’s Medicaid program is predominantly a managed care system, the state determines payment rates for only 5 percent of the nursing home population. 
Therefore, this report excludes Arizona and presents our findings from analyses of the other 19 states. To examine the extent to which states base nursing home payment rates on homes’ costs, we reviewed documentation, including some state laws and regulations. Relying on these documents as well as our interviews with state officials, we also identified key features of payment methods, such as whether rates are home-specific and how frequently states update or rebase the rates they pay nursing homes. In addition, we summarized the extent to which states’ payment methods incorporate features such as peer grouping, cost-center ceilings, and case-mix adjustment systems. To determine how state fiscal pressures have affected Medicaid programs with regard to nursing home payment rates and methods, we collected per diem rates from state fiscal years 1998 through 2003, fiscal year 2003 being the most current year for which per diem rates were available, and information about changes made to nursing home payment methods from state fiscal years 1998 through 2004. We used the per diem rate data to compare the average annual percentage change in states’ average nursing home payment rates from state fiscal years 1998 through 2003 to the corresponding years’ change in the skilled nursing facility (SNF) market basket index. The SNF market basket index, which is developed and updated annually by Global Insights, Inc., is used by CMS to reflect changes in the prices of goods and services included in the Medicare SNF prospective payment system. States typically provided us with their average Medicaid nursing home per diem rates weighted by resident days; however, in a few instances we had to use a state’s home-specific rates and resident days to calculate the weighted average per diem rate. For 2003, an average per diem rate was not available in Michigan, and projected per diem rates were provided by Arkansas and Pennsylvania. We encountered limitations with data provided by two other states. For example, North Dakota law generally prohibits nursing homes from charging private-pay residents more than the Medicaid rate; however, rates provided to us by the state were based on total resident days, which include payments for 3 to 5 percent of residents whose care is paid at typically higher Medicare rates. Therefore, the rates provided to us may be slightly higher than the average Medicaid rate. Conversely, the rates provided by Pennsylvania may be slightly lower than the actual average nursing home Medicaid rate because they include nursing home residents’ temporary hospital stays, which account for approximately 1 percent of total resident days and for which homes only receive one-third of the per diem rate. Finally, we reviewed information compiled by NASBO, NGA, and NCSL related to states’ fiscal outlook and possible future reductions in the Medicaid program, including reductions affecting nursing homes. States use many of the same features within their payment methods. We describe below certain features of the payment methods used in the states we reviewed: peer groups, cost-center ceilings, efficiency incentives, case-mix systems, and occupancy standards. Ten states we reviewed classify homes into peer groups, or categories based on characteristics such as size or location, and typically set separate cost-center ceilings for each peer group. The states we reviewed most commonly categorize nursing homes by geographic region or home type. However, how states use peer groups varies (see table 6). 
For example, some states, such as New Jersey, use peer groups within all cost centers, while other states, such as Alabama, only group homes in one cost center. Further, states differ in the number and type of peer grouping categories they use. For example, Illinois’s peer grouping uses seven geographic regions in all cost centers; Connecticut bases its peer grouping on two geographic regions and two home types in the direct resident care cost center; and Florida’s peer grouping is based on three geographic regions and two home sizes in both the direct resident care and administrative cost centers. To limit the maximum amount states pay for costs within a given cost center, ceilings are typically set at a percentage of median costs, or a certain percentile of costs, for all nursing homes in a state or a subset of nursing homes with similar characteristics in states that pay home-specific rates. Homes in these states generally receive rates based on the lower of their actual costs or the ceiling. While most states we reviewed divide their operating costs into three centers (direct resident care, indirect care, and administration), plus a center for capital costs, the number of cost centers in the states we reviewed ranges from two in Oregon to seven in Rhode Island. In addition, states differ in how they categorize costs. For example, 8 states combine indirect care and administrative costs into a single cost center, and Pennsylvania’s direct resident care center includes medical supplies, which are considered indirect costs in Connecticut and Rhode Island. Table 7 describes ceilings for operating costs in the 15 states that pay home-specific rates, and table 8 describes how the remaining 4 states—California, Massachusetts, Oregon, and Texas—develop their flat rates, which serve as a type of ceiling, for all nursing homes in the state. Seven states we reviewed include efficiency incentives in their payment methods, which typically allow nursing homes with costs below a predetermined amount (generally the cost-center ceiling or the median costs) in one or more cost centers to have a portion of the difference incorporated into their per diem rates (see table 9). For example, Connecticut uses efficiency incentives in both its indirect care and administrative cost centers. In the indirect care center, nursing homes with costs below the median have 25 percent of the difference between their costs and the median costs added to their per diem rates. The following hypothetical example demonstrates how this efficiency incentive generally would work. If a home’s costs were $20 per day in the indirect care cost center, and the median indirect care costs for all homes were $24 per day, then the home has costs that are $4 below the median and would have 25 percent of the difference between its costs and the median, or $1, added to its rate. Each of the seven states applies efficiency incentives differently. Case-mix systems categorize residents into groups based on the level of care they need and adjust payment rates to homes accordingly. Twelve of the 19 states we reviewed use case-mix systems, although the type of system and the number of case-mix categories vary widely. 
While 5 states have designed their own systems to measure case-mix, the remaining 7 states rely on some variation of the Resource Utilization Group (RUG) Patient Classification System, which is also used to determine the acuity level of nursing home residents in the Medicare program. The 7 states that use various versions of the RUG Patient Classification System place residents in 16 to 44 resident classification groups. In contrast, Oregon places residents into one of two groups, basic or complex care. The case-mix classification system used by each state is shown in table 10. By applying an occupancy standard, states reduce the per diem rates paid to nursing homes with occupancy below the state-established minimum levels. Of the 19 states reviewed, 17 use occupancy standards, which vary from 75 percent in Arkansas to 98 percent in Rhode Island, to determine nursing home payment rates. The following hypothetical example demonstrates how a state may apply an occupancy standard. A state applies an occupancy standard of 85 percent in the indirect care cost center, but a nursing home has a 75 percent occupancy level (along with annual costs of $200,000 in the indirect care cost center and 36 beds). Using the home’s actual occupancy, its payment rate for the indirect care cost center would be $20.29 ($200,000 ÷ (36 beds × 365 days × 0.75 occupancy)), whereas adjusting the home’s payment in the indirect care cost center for the state’s occupancy standard results in a lower rate of $17.91 ($200,000 ÷ (36 beds × 365 days × 0.85 occupancy)). The extent to which states apply occupancy standards varies. Three of the states we reviewed—Alabama, Arkansas, and Iowa—apply the occupancy standard to only one cost center, and 7 others—Connecticut, Florida, Massachusetts, Michigan, New York, Rhode Island, and South Dakota—apply the occupancy standard to all cost centers (see table 11). Officials in the states we reviewed identified changes to payment rates or to the methods their respective Medicaid programs use to determine nursing home payment rates from state fiscal years 1998 through 2004 (see table 12). While some changes have obvious positive or negative effects on payment rates, the effect of other changes can be mixed. For example, while Colorado’s elimination of its quality incentive add-on payment likely lowers payment to some nursing homes, payment to some nursing homes in Vermont increased because of recently implemented add-on payments for direct resident care staff wages. Other changes, such as California adding two counties to the Bay Area peer group in 2002, are likely to affect rates in both directions for different homes. In addition to changes to how they paid nursing homes, two states—Arkansas and Iowa—designed and implemented completely new payment methodologies during this time. For example, Iowa’s prior payment method did not classify homes into peer groups, did not adjust rates for the costs related to homes’ resident care needs, and limited payment to the 70th percentile of all homes’ total costs. Under the state’s new payment method, which was phased in completely in July 2003, homes are classified into peer groups, rates are adjusted for resident care costs using the RUG-III classification system, and a ceiling of 120 percent of median costs for all homes is imposed on payment for direct resident care costs. Christine DeMars, Behn M. Kelly, Sari B. Shuman, Margaret Smith, and Christi Turner made key contributions to this report. 
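The hypothetical efficiency-incentive and occupancy-standard calculations described above reduce to simple arithmetic. The following sketch reproduces both examples; the figures are the report's own, but the functions themselves are illustrative and do not represent any state's actual formula.

# Sketch reproducing the report's two hypothetical calculations.

def efficiency_incentive(home_cost: float, median_cost: float, share: float = 0.25) -> float:
    """Connecticut-style incentive: a home with costs below the median has a
    share (here 25 percent) of the difference added to its per diem rate."""
    return share * max(median_cost - home_cost, 0.0)

# $20/day costs against a $24/day median yield a $1.00 add-on.
print(efficiency_incentive(home_cost=20.0, median_cost=24.0))  # 1.0

def per_diem(annual_costs: float, beds: int, occupancy: float) -> float:
    """Per diem rate spread over the bed days implied by an occupancy level."""
    return annual_costs / (beds * 365 * occupancy)

# At the home's actual 75 percent occupancy: $200,000 / (36 * 365 * 0.75) = $20.29.
print(round(per_diem(200_000, beds=36, occupancy=0.75), 2))  # 20.29
# Applying the state's 85 percent occupancy standard lowers the rate to $17.91.
print(round(per_diem(200_000, beds=36, occupancy=0.85), 2))  # 17.91

Dividing annual costs by standardized rather than actual bed days is what penalizes homes operating below the state's occupancy minimum.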
Nursing Home Quality: Prevalence of Serious Problems, While Declining, Reinforces Importance of Enhanced Oversight. GAO-03-561. Washington, D.C.: July 15, 2003. Medicaid Formula: Differences in Funding Ability among States Often Are Widened. GAO-03-620. Washington, D.C.: July 10, 2003. Nursing Homes: Quality of Care More Related to Staffing than Spending. GAO-02-431R. Washington, D.C.: June 13, 2002. Medicaid: HCFA Reversed Its Position and Approved Additional State Financing Schemes. GAO-02-147. Washington, D.C.: October 30, 2001. Nursing Workforce: Multiple Factors Create Nurse Recruitment and Retention Problems. GAO-01-912T. Washington, D.C.: June 27, 2001. Nursing Workforce: Recruitment and Retention of Nurses and Nurse Aides Is a Growing Concern. GAO-01-750T. Washington, D.C.: May 17, 2001. Long-Term Care: Baby Boom Generation Increases Challenge of Financing Needed Services. GAO-01-563T. Washington, D.C.: March 27, 2001. Medicaid: State Financing Schemes Again Drive Up Federal Payments. GAO/T-HEHS-00-193. Washington, D.C.: September 6, 2000. Medicaid Formula: Effects of Proposed Formula on Federal Shares of State Spending. GAO/HEHS-99-29R. Washington, D.C.: February 19, 1999. Long-Term Care: Baby Boom Generation Presents Financing Challenges. GAO/T-HEHS-98-107. Washington, D.C.: March 9, 1998. | Almost half of all Americans over the age of 65 will rely on nursing home care at some point in their lives, and two in three nursing home residents have their care covered at least in part by Medicaid. Under Medicaid, states set nursing home payment rates and the federal government reimburses a share of state spending. According to the most recently available data, Medicaid nursing home expenditures exceed $43 billion, and total Medicaid spending for fiscal year 2003 is expected to double by 2012. Such projections of increased Medicaid spending come as most states are confronting their third consecutive year of fiscal pressure. According to the National Association of State Budget Officers (NASBO), in fiscal year 2003, 30 states collected less revenue than they budgeted for, and 37 states reduced enacted budgets by almost $14.5 billion. In light of concerns about the adequacy of nursing home resources, GAO was asked to examine how state Medicaid programs determine nursing home payment rates and whether these payment methods or rates have changed given recent state fiscal pressures. GAO interviewed state and nursing home industry officials in 19 states and obtained documentation about nursing home payment rates and methods, including state methods to determine nursing home per diem rates for fiscal years 1998 through 2004. Recognizing the large share of Medicaid spending that is allocated to nursing homes and the importance of spending their Medicaid dollars effectively, the 19 states GAO reviewed have designed multifaceted approaches to setting nursing home payment rates. All of these states base payment rates on homes' actual costs and most develop rates specific to each home. These payment methods also generally incorporate incentives to achieve certain goals, such as promoting efficiency or encouraging homes to target spending toward resident care. States typically update payment rates regularly to reflect changes in nursing homes' costs due to factors such as inflation or residents' changing care needs. Although each of the 19 states experienced recent fiscal pressure, states' nursing home payment rates have remained largely unaffected. Any future changes, however, remain uncertain. 
During fiscal years 1998 through 2004, only 4 of these states--Illinois, Massachusetts, Michigan, and Texas--cut the per diem rates paid to all nursing homes at some point, and in 2 of these states, the rate reduction was for less than 1 year. Two other states--Connecticut and Oregon--also froze nursing home per diem rates for a portion of this period. In addition, all 19 states modified the methods they use to determine nursing home payment rates during this time, such as changing ceilings on payment rates; however, irrespective of shifting fiscal pressure, the extent to which states changed specific features of their payment methods generally remained constant, with varying effects on payment rates to individual homes within states. Further, in nearly three-quarters of these states, nursing home per diem rates grew, on average, by an amount that exceeded the skilled nursing facility market basket index, the index used by the Centers for Medicare & Medicaid Services to measure changes in the price of nursing home goods and services for Medicare, from fiscal years 1998 through 2003. Many states were able to avoid making significant changes to nursing home payment rates by relying on existing resources, such as tobacco settlement and budget stabilization funds, and increasing revenue by imposing cigarette or nursing home provider taxes. Even with these alternative funding sources and recent temporary federal fiscal relief, however, officials in some states suggest that nursing home payment reductions are possible in the future. GAO received comments on a draft of this report from Medicaid officials in the 19 states reviewed, who generally agreed with the characterization of their respective nursing home payment methods. GAO also received technical comments from representatives of two organizations that represent the nursing home industry. |
NARA’s mission is to ensure “ready access to essential evidence” for the public, the president, the Congress, and the Courts. NARA’s responsibilities stem from the Federal Records Act, which requires each federal agency to make and preserve records that document the organization, functions, policies, decisions, procedures, and essential transactions of the agency and provide the information necessary to protect the legal and financial rights of the government and of persons directly affected by the agency’s activities. Federal records must be managed to ensure that the information that they contain is available when needed. According to NARA, without effective records management, the records needed to document citizens’ rights, actions for which federal officials are responsible, and the historical experience of the nation will be at risk of loss, deterioration, or destruction. Records management is defined as the policies, procedures, guidance, tools and techniques, resources, and training needed to design and maintain reliable and trustworthy records systems. Records must be managed throughout their life cycle: from creation, through maintenance and use, to final disposition. Temporary records—those used in everyday operations but lacking historic value—are ultimately destroyed. Permanent records—those judged to be of historic value—are preserved through archiving. With NARA’s oversight and assistance, each agency is responsible for managing its own records at all phases of the life cycle, with the exception of the archiving of permanent records (which is NARA’s responsibility). NARA’s responsibilities under the act include issuing records management guidance; working with agencies to implement effective controls over the creation, maintenance, and use of records in the conduct of agency business; providing oversight of agencies’ records management programs; and providing storage facilities for certain temporary agency records. The Federal Records Act also authorizes NARA to conduct inspections of agency records and records management programs. NARA works with agencies to identify and inventory records; to appraise their value; and to determine whether they are temporary or permanent, how long the temporary records should be kept, and under what conditions both the temporary and permanent records should be kept. This process is called scheduling. No record may be destroyed unless it has been scheduled. Thus, for temporary records the schedule is of critical importance, because it provides the authority to dispose of the record after a specified time. Records are governed by schedules that are either (1) specific to an agency or (2) general—that is, common to several agencies or across the government. According to NARA, records covered by general records schedules make up about a third of all federal records. For the other two-thirds, NARA and the agencies must agree upon specific records schedules. Once a schedule has been approved, the agency must issue it as a management directive, train employees in its use, apply its provisions to temporary and permanent records, and evaluate the results. While the Federal Records Act covers documentary material regardless of physical form or media, records management and archiving were until recently largely focused on handling paper documents. With the advent of computers, both records management and archiving have had to take into account the creation of records in varieties of electronic formats. 
NARA’s basic guidance for the management of electronic records is in the form of a regulation at 36 CFR Part 1234. This guidance is supplemented by the issuance of periodic NARA bulletins and a records management handbook, Disposition of Federal Records. For electronic records, NARA’s guidance sets forth two basic requirements. First, agencies are required to maintain an inventory of all agency information systems. The inventory should identify (1) the system’s name, (2) its purpose, (3) the agency programs supported by the system, (4) data inputs, sources, and outputs, (5) the information content of databases, and (6) the system’s hardware and software environment. Second, NARA requires agencies to schedule the electronic records maintained in their systems. Agencies must schedule those records either under specific schedules (completed through submission and approval of Standard Form 115, Request for Records Disposition Authority) or pursuant to a general records schedule. NARA relies on this combination of inventory and scheduling requirements to ensure that management of agency electronic records is consistent with the Federal Records Act. NARA has also established a general records schedule for electronic records. General Records Schedule 20 (GRS 20) authorizes the disposal of certain categories of temporary electronic records. It has been revised several times over the years in response to developments in information technology, as well as legal challenges. GRS 20 applies to electronic records created both in computer centers engaged in large-scale data processing and in the office automation environment. GRS 20 authorizes the disposal of certain types of electronic records associated with large database systems (such as inputs, outputs, and processing files), as well as the deletion of the electronic version of records on word processing and electronic mail systems once a recordkeeping copy has been made. Since most agency recordkeeping systems are paper files, GRS 20 essentially authorizes agencies to destroy E-mail and word-processing files once they are printed. (Recall that records not covered by a general records schedule may not be destroyed unless authorized by a records schedule that has been approved by NARA.) GRS 20 does not address many common products of electronic information processing, particularly those that result from the now prevalent distributed, end-user computing environment. For example, although the guidance addresses the disposition of certain types of electronic records associated with large databases, it does not specifically address the disposition of electronic databases created by microcomputer users. In addition, GRS 20 does not address more recent forms of electronic records such as Web pages and portable document format (PDF) files. As the nation’s archivist, NARA accepts for deposit to its archives those records of federal agencies, the Congress, the Architect of the Capitol, and the Supreme Court that are determined to have sufficient historical or other value to warrant their continued preservation by the U.S. government. NARA also accepts papers and other historical materials of the Presidents of the United States, documents from private sources that are appropriate for preservation (including electronic records, motion picture films, still pictures, and sound recordings), and records from agencies whose existence has been terminated. 
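The inventory requirement described above enumerates six elements that an agency must capture for each information system. The following is a minimal sketch of how an agency might structure such an inventory entry; the field names, the example system, and the schedule notation are assumptions made for this sketch, not structures prescribed by 36 CFR Part 1234.

# Minimal sketch of an agency system inventory entry built around the six
# elements NARA's guidance requires. Field names and the example system are
# hypothetical, not prescribed by the regulation.
from dataclasses import dataclass
from typing import List

@dataclass
class SystemInventoryEntry:
    name: str                      # (1) the system's name
    purpose: str                   # (2) its purpose
    programs_supported: List[str]  # (3) agency programs supported by the system
    inputs_sources_outputs: str    # (4) data inputs, sources, and outputs
    database_content: str          # (5) the information content of databases
    environment: str               # (6) hardware and software environment
    schedule_id: str = ""          # e.g., an approved SF 115 disposition
                                   # authority or a general records schedule

entry = SystemInventoryEntry(
    name="Grants Tracking System",  # hypothetical example system
    purpose="Track grant applications and awards",
    programs_supported=["Grants administration"],
    inputs_sources_outputs="Applications in; award letters and reports out",
    database_content="Applicant, award, and payment records",
    environment="Relational database on a departmental server",
    schedule_id="",  # an empty value flags the system as unscheduled
)
print(entry.name, "-", "scheduled" if entry.schedule_id else "UNSCHEDULED")

Pairing each entry with its disposition authority makes unscheduled systems easy to flag, which bears directly on the scheduling gap discussed later in this report.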
To ensure that permanent electronic records are preserved, each agency must transfer electronic records to NARA in accordance with the agency’s records disposition schedule. NARA accepts for archiving electronic records that are in text-based formats, such as databases and certain text-based geographic information system (GIS) files. In addition, NARA accepts E-mail records and attachments, several forms of scanned images of text files, and PDF files. It does not accept Web pages, word processor files, or relational databases. (Although NARA does not as yet accept such files for archiving, they must still be scheduled.) In response to the difficulty of manually managing electronic records, agencies are turning to automated records management applications to help automate electronic records management lifecycle processes. The primary functions of these applications include categorizing and locating records and identifying records that are due for disposition, as well as storing, retrieving, and disposing of electronic records that are maintained in repositories. Also, some applications are beginning to be designed to automatically classify electronic records and assign them to an appropriate records retention and disposition category. The Department of Defense (DOD), which is pioneering the assessment and use of records management applications, has published application standards and established a certification program. DOD standard 5015.2, endorsed by NARA, includes the requirement that records management applications acquired by DOD components after 1999 be certified to meet this standard. NARA is pursuing other interrelated efforts that address records management (including electronic records). Three major initiatives are NARA’s effort on Redesign of Federal Records Management; the Electronic Records Management initiative, one of 25 e-government initiatives sponsored by the Office of Management and Budget (OMB); and the acquisition of an advanced Electronic Records Archives (ERA). In 2000, NARA began a three-stage effort to redesign federal records management. First, in 2001, NARA produced a report based on information on federal records management that it collected and analyzed. Second, it used this report as a starting point to revise the regulations, policies, and processes for managing federal records and to develop a set of strategies to support federal records management. As a result of this analysis, in July 2002 NARA issued a draft proposal for the redesign of federal records management. Third, based on comments received on the proposal, it is developing a redesigned records scheduling, appraisal, and accessioning process, as well as prototype and functional requirements for automated tools for the redesigned process. The redesign is planned as a multiyear process (2003 to 2006), during which NARA intends to address the scheduling and appraisal of federal records in all formats. The overall purpose of the Electronic Records Management (ERM) initiative is to help agencies better manage their electronic records, so that records information can be effectively used to support timely and effective decision making, enhance service delivery, and ensure accountability. The initiative is intended to provide a variety of tools to address immediate and longer term agency needs. NARA is the managing partner agency for the overall ERM initiative. 
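One of the core records management application functions noted above, identifying records that are due for disposition, can be sketched as follows. The retention periods, record data, and evaluation date are hypothetical assumptions for this sketch; certified applications (for example, those meeting DOD standard 5015.2) implement far richer rules.

# Illustrative sketch of a disposition check. Retention rules and record
# data are hypothetical, not drawn from any actual schedule.
from datetime import date, timedelta

# Retention periods (in days) by schedule category -- hypothetical values.
RETENTION = {
    "temporary-3yr": 3 * 365,
    "temporary-6yr": 6 * 365,
    "permanent": None,  # permanent records are transferred to NARA, not destroyed
}

records = [
    {"id": "R-001", "category": "temporary-3yr", "cutoff": date(2000, 12, 31)},
    {"id": "R-002", "category": "permanent",     "cutoff": date(2001, 6, 30)},
    {"id": "R-003", "category": "temporary-6yr", "cutoff": date(1996, 1, 31)},
]

def due_for_disposition(record: dict, today: date) -> bool:
    """A temporary record is due once its retention period has elapsed
    from the cutoff date; permanent records are never flagged."""
    days = RETENTION[record["category"]]
    if days is None:
        return False  # eligible for transfer to NARA instead of destruction
    return today >= record["cutoff"] + timedelta(days=days)

today = date(2003, 10, 1)
for rec in records:
    if due_for_disposition(rec, today):
        print(rec["id"], "is due for disposition under", rec["category"])

The essential point the sketch captures is that no record may be destroyed without a schedule: the disposition decision is driven entirely by the category assigned when the record was scheduled.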
The goals for the advanced ERA system are that it will be able to preserve and provide access to any kind of electronic record, free from dependency on any specific hardware or software, so that the agency can carry out its mission into the future. NARA plans for ERA to be a distributed system, allowing storage and management of massive record collections at a variety of installations, with accessibility provided via the Internet. NARA is planning to build the system in five increments, with the last increment scheduled to be complete in 2010.

The rapid evolution of information technology makes the task of managing and preserving electronic records complex and costly. Part of the challenge of managing electronic records is that they are produced by a mix of information systems, which vary not only by type but by generation of technology: the mainframe, the personal computer, and the Internet. Each generation of technology brought in new systems and capabilities without displacing the older systems. Thus, organizations have to manage and preserve electronic records associated with a wide range of systems, technologies, and formats. These records are stored in specific formats and cannot be read without software and hardware—sometimes the specific types of hardware and software on which they were created. Several factors contribute to the challenge of managing and preserving electronic records:

Massive volumes of electronic data require automated solutions. Electronic records are increasingly being created in volumes that pose a significant technical challenge to our ability to organize them and make them accessible. For example, among the candidates for archiving are military intelligence records comprising more than 1 billion electronic messages, reports, cables, and memorandums, as well as over 50 million electronic court case files. Managing such large volumes is clearly not possible without automation.

Control of electronic records is difficult in a decentralized computing environment. The challenge of managing electronic records significantly increases with the decentralization of the computing environment. In the centralized environment of a mainframe computer, it is easier to identify, assess, and manage electronic records than it is in the decentralized environment of agencies’ office automation systems, where every user is creating electronic files that may constitute a formal record and thus should be preserved.

The complexity of electronic records precludes simple transfer to paper. Electronic records have evolved from simple text-based files to complex digital objects that may contain embedded images (still and moving), drawings, sounds, hyperlinks, or spreadsheets with computational formulas. Some portions of electronic records, such as the content of dynamic Web pages, are created on the fly from databases and exist only during the viewing session. Others, such as E-mail, may contain multiple attachments, and they may be threaded (that is, related E-mail messages are linked into send–reply chains). These records cannot be converted to paper or text formats without the loss of context, functionality, and information.

Obsolescent and aging storage media put electronic records at risk. Storage media are affected by the dual problems of obsolescence and decay. They are fragile, have limited shelf life, and become obsolete in a few years.
For example, few computers today have disk drives that can read information stored on 8- or 5¼-inch diskettes, even if the diskettes themselves remain readable. Electronic records are dependent on evolving software and hardware. Electronic records are created on computers with software ranging from word-processors to E-mail programs. As computer hardware and application software become obsolete, they may leave behind electronic records that cannot be read without the original hardware and software. In June 2002, we reported that NARA had responded to the challenges associated with managing and preserving electronic records. However, most electronic records—including databases of major federal information systems—remained unscheduled, and records of historical value were not being identified and provided to NARA; as a result, they were at risk of loss. A number of factors contributed to this condition: NARA acknowledged that its policies and processes on electronic records had not yet evolved to reflect the modern recordkeeping environment: records created electronically in decentralized processes. Records management programs were generally afforded low priority by federal agencies. A related issue was that agency management had not given priority to acquiring the more sophisticated and expensive information technology required to manage records in an electronic environment. NARA was also not performing systematic inspections of agency records programs. Such inspections are important as a means to evaluate individual agency records management programs, assess governmentwide progress in improving records management, and identify agency implementation issues and areas where guidance needs to be strengthened. We also provided some confirmation of NARA’s findings regarding records scheduling and disposition: our review at four agencies (Commerce, Housing and Urban Development, Veterans Affairs, and State) elicited a collective estimate that less than 10 percent of mission-critical systems were inventoried. As a result, for these four agencies alone, over 800 systems had not been inventoried, and the electronic records maintained in them had not been scheduled. Scheduling the electronic records in a large number of major information systems presents an enormous challenge, particularly since it generally takes NARA, in conjunction with agencies, well over 6 months to approve a new schedule. Failure to inventory systems and schedule records places these records at risk. The absence of inventories and schedules means that NARA and agencies have not examined the contents of these information systems to identify official government records, appraised the value of these records, determined appropriate disposition, and directed and trained employees in how to maintain and when and how to dispose of these records. As a result, temporary records may remain on hard drives and other media long after they are needed or could be moved to less costly forms of storage. In addition, there is increased risk that these records may be deleted prematurely while still needed for fiscal, legal, and administrative purposes. Further, the lack of scheduling presents risks to the preservation of permanent records of historic significance. NARA acknowledged in 2001 that its policies and processes on electronic records had not yet evolved to reflect the modern recordkeeping environment: records created electronically in decentralized processes. 
Despite repeated attempts to clarify its electronic records guidance through a succession of bulletins, the guidance was incomplete and confusing. It did not provide comprehensive disposition instructions for electronic records maintained in many of the common types of formats produced by federal agencies, including Web pages and spreadsheets. To support their missions, many agencies had to maintain such records—often in large volumes—with little guidance from NARA. NARA’s study concluded that records management was not even “on the radar scope” of agency leaders. Further, records officers had little clout and did not appear to have much involvement in or influence on programmatic business processes or the development of information systems designed to support them. New government employees seldom received any formal, initial records management training. One agency told NARA that records management was “number 26 on our list of top 25 priorities.” Further, records management is generally considered a “support” activity. Since support functions are typically seen as the most dispensable in agencies, resources for and focus on these functions are often limited. Also, as NARA’s study noted, federal downsizing may have negatively affected records management and staffing resources in agencies. In our June 2002 report, we recommended that the Archivist of the United States address the priority problem by developing a documented strategy for raising agency senior management awareness of and commitment to records management principles, functions, and programs. Related to the priority issue is the need for appropriate information technology tools to respond to the technical challenge of electronic records management: for electronic records to be managed effectively, agencies require a level of technology that was not necessary for paper-based records management programs. Unless management is focused on records management, priority is not given to acquiring or upgrading the technology required to manage records in an electronic environment. Agencies that do invest in electronic records management systems tend to do so because they value good records management and have a critical need to retrieve information efficiently. In other agencies, despite the growth of electronic media, agency records systems are predominantly in paper format rather than electronic. According to NARA’s study, many agencies were either planning or piloting information technology initiatives to support electronic records management, but their movement to electronic systems is constrained by the level of financial support provided for records management. NARA is responsible, under the Federal Records Act, for conducting inspections or surveys of agency records and records management programs and practices. Its implementing regulations require NARA to select agencies to be inspected (1) on the basis of perceived need by NARA, (2) by specific request by the agency, or (3) on the basis of a compliance monitoring cycle developed by NARA. In all instances, NARA is to determine the scope of the inspection. Such inspections provide not only the means to assess and improve individual agency records management programs but also the opportunity for NARA to determine overall progress in improving agency records management and identify problem areas that need to be addressed in its guidance. 
In 2000, NARA changed its method of performing inspections: rather than performing a small number of comprehensive agency reviews, it instituted an approach that it refers to as “targeted assistance.” NARA decided that its previous approach to inspections was basically flawed, because it could reach only about three agencies per year, and because the inspections were often perceived negatively by agencies, resulting in a list of records management problems that agencies then had to resolve on their own. Under the targeted assistance approach, NARA works with agencies, providing them with guidance, assistance, or training in any area of records management. However, we pointed out in our June 2002 report that this approach, although it may improve records management in the targeted agencies, is not a substitute for systematic inspections and evaluations of federal records programs. Targeted assistance has significant limitations because it is voluntary and, according to NARA, initiated by agency request. Thus, only agencies requesting assistance are evaluated, and the scope and the focus of the assistance are not determined by NARA but by the requesting agency. In light of these limitations, we recommended in June 2002 that the Archivist develop a documented strategy for conducting systematic inspections of agency records management programs to (1) periodically assess agency progress in improving records management programs and (2) evaluate the efficacy of NARA’s governmentwide guidance.

Since June 2002, NARA has taken steps to strengthen its guidance, to address the low priority accorded to records management programs and the associated lack of technology tools, and to revise its approach to inspections as part of a comprehensive strategy for assessing agencies’ management of records. However, NARA’s plans to implement its comprehensive new strategy are not yet complete. Although the strategy describes a reasonably systematic approach that allows NARA to focus its resources appropriately and to use inspections and other interventions to assess and improve federal records management, it does not yet include a description of how NARA will establish an ongoing program.

Since our 2002 report, NARA has taken steps to update its guidance on electronic records management in various areas. For example, although 36 CFR Part 1234, the basic guidance on electronic records, has not been updated to reflect new types of electronic records, NARA has produced a variety of guidance on electronic records. A new General Records Schedule, GRS 24, “Information Technology Operations and Management Records,” was issued on April 28, 2003. In addition, “Records Management Guidance for PKI-Unique Administrative Records,” which was jointly developed by NARA and the Federal Public Key Infrastructure Steering Committee’s Legal/Policy Working Group, was issued on March 14, 2003. As part of its e-government initiative, NARA has just released guidance on evaluating funding proposals for electronic records management systems through capital planning processes. NARA has also supplemented its disposition guidance as a result of the project on transfer of permanent electronic records under its e-government initiative: this guidance covers transferring permanent E-mail records and attachments, several forms of scanned images of text files, and PDF files, and it expanded the methods by which agencies could transfer electronic records to NARA for archiving.
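The sketch below illustrates the kind of pre-transfer check this guidance makes possible: a proposed transfer is split into items in the formats described above as accepted and items that must be deferred (though still scheduled). The format tags and the function are our own illustration, not an actual NARA interface.

```python
# A minimal sketch of pre-transfer validation against the record formats the
# guidance above says NARA accepts; the format tags are our own illustration.
ACCEPTED = {"email", "scanned_image", "pdf", "database_text", "gis_text"}

def validate_transfer(items):
    """Split a proposed transfer into acceptable and deferred items."""
    acceptable, deferred = [], []
    for name, fmt in items:
        (acceptable if fmt in ACCEPTED else deferred).append((name, fmt))
    return acceptable, deferred

ok, hold = validate_transfer([
    ("FY00 correspondence", "email"),
    ("Public site snapshot", "web_page"),  # not yet accepted; must still be scheduled
    ("Scanned case files", "scanned_image"),
])
print("transfer now:", ok)
print("defer:", hold)
```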
NARA is also planning to expand the capability of its current systems for archiving electronic records by accommodating additional electronic record formats and volumes. However, according to NARA, agencies have not yet transferred electronic records in these formats to NARA; these records may not be scheduled or may not yet be eligible for transfer. In addition, as part of the policy analysis in its effort to redesign federal records management, NARA has stated that it plans to identify policies, procedures, regulations, and guidance that would need to be modified in light of the proposed redesign.

In response to our recommendation that it develop a documented strategy for raising agency senior management awareness of records management, NARA devised a strategy intended to raise awareness of the importance of agency records management. The strategy includes two goals: increased senior-level awareness of the importance of records management, particularly electronic records management, across the federal government and in specific agencies, and increased senior-level understanding of how effective records management programs support the business needs of specific agencies and the federal government as a whole. As part of its strategy, NARA identified a number of activities that its senior leaders will conduct, including briefing agency program leaders on the importance of records and information management in general and on specific issues (such as electronic recordkeeping requirements, litigation exposure, and vital records), participating in establishing or closing out certain targeted assistance agreements, and pursuing promotional activities such as making speeches and holding conferences. NARA has also developed an implementation plan, which establishes goals, timeframes, and required resources for fiscal year 2003. For example, the plan contains a goal of conducting six agency briefings by the end of September; three have been completed to date, and a fourth has been scheduled for mid-July. A similar implementation plan for fiscal year 2004 is to be developed by September 1. NARA’s strategy for raising senior agency management awareness appears reasonable, and if carried out effectively could help to mitigate the problem of the low priority given to records management.

Since our June 2002 report, some steps have also been taken to address the lack of technology tools to manage electronic records. In January 2003, NARA recommended that agencies use version 2 of DOD standard 5015.2, which sets forth a set of requirements for records management applications, including that they be able to manage records regardless of their media. The effort to promulgate this standard was part of the electronic information management standards project under the ERM initiative. Under the standard, DOD is to certify records management applications as meeting the standard; as of the end of June 2003, DOD had certified 43 applications. The availability of applications that conform to the standard may be helpful in encouraging agencies to adopt records management systems that address electronic records.

In response to its own mission needs and our recommendations of June 2002 regarding its inspection program, NARA has documented a new strategy for assessing agencies’ management of records. This strategy is described in draft documents that describe NARA’s plans for setting priorities and for conducting inspections and studies.
The new approach is now being piloted with the Department of Homeland Security; the results of the pilot—expected by September 30, 2003—will determine whether it is extended governmentwide. The main features of the draft strategy are as follows:

NARA will evaluate agencies and work processes in terms of risk to records, implications for legal rights and accountability, and the quantity and value of the permanent records; it will focus its resources on high-priority areas. This process of assessing risks and priorities will involve NARA staff with subject-matter and agency expertise, and it will address records management governmentwide.

NARA plans to use a variety of means to address areas identified for attention through its risk and priority assessment. Among the means being considered are targeted assistance, records management studies, and inspections. The strategy indicates that NARA has changed its approach to targeted assistance: rather than using it only when an agency requests assistance, NARA intends to recommend that an agency accept targeted assistance when NARA has identified records management issues at that agency that require attention. In addition, NARA plans to perform studies on records management best practices as a means not only to encourage good records management practices throughout government, but also to recognize agencies whose records management programs have exemplary features. According to the strategy, inspections will be conducted only under exceptional circumstances, when the risk to records is deemed high and after other means (e.g., targeted assistance, training, and so on) have failed to mitigate risks.

NARA intends to focus on the core functions of the federal government, rather than on individual agencies. It will use as its starting point the business areas defined in the Business Reference Model of the Federal Enterprise Architecture. By focusing on the Business Reference Model’s broad activities and work processes, which cut across agency lines, NARA may inspect a single agency or a group of agencies in one line of business.

Although NARA’s strategy appears to be a reasonably systematic approach that allows it to focus its resources appropriately and to use inspections and other interventions to assess and improve federal records management, it is not yet complete. Specifically, the draft strategy does not yet include a description of how NARA will establish an ongoing program. For example, the priority assessment plan does not indicate whether NARA will revise its risk identification process as circumstances warrant, or if this is a one-time occurrence. NARA officials have said that the agency will update its priority and risk assessments periodically, but this is not yet reflected in the plan. Further, the strategy states that the results of studies may be used to improve guidance, but it does not create a similar feedback loop for inspection results. While records management guidance may benefit from the “best practices” identified in studies, inspection results could also identify areas where guidance needs to be clarified, augmented, and strengthened. Finally, no implementation plan or schedule for this new strategy has yet been devised.
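As an aside, the screening step of the draft strategy can be made concrete with a small sketch that scores an agency on the three factors the strategy names and picks an intervention. The weights, thresholds, and escalation rules below are invented for illustration and are not NARA’s actual methodology.

```python
# A minimal sketch of a risk-and-priority screen; weights, thresholds, and
# intervention rules are invented, not NARA's actual methodology.
def priority_score(risk_to_records, legal_accountability, permanent_value):
    """Combine the three factors the strategy names, each rated 1 (low) to 5 (high)."""
    return 0.4 * risk_to_records + 0.3 * legal_accountability + 0.3 * permanent_value

def intervention(score, prior_assistance_failed=False):
    """Escalate from study to assistance to inspection as risk grows."""
    if score >= 4.0 and prior_assistance_failed:
        return "inspection"           # exceptional circumstances only
    if score >= 3.0:
        return "targeted assistance"  # NARA-recommended, not agency-requested
    return "records management study"

s = priority_score(risk_to_records=5, legal_accountability=4, permanent_value=4)
print(round(s, 1), intervention(s), intervention(s, prior_assistance_failed=True))
```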
Without a strategy that provides for establishing an ongoing program that includes a feedback cycle, as well as complete implementation plans that fully reflect that strategy, NARA’s efforts to assess records management programs may not provide it with the information that it needs to improve its guidance and to support its redesign of federal records management. In addition to its efforts to improve records management across the government, NARA is also acquiring ERA as a means to archive all types of electronic records and make them accessible, regardless of changes to hardware and software over time. However, NARA faces significant challenges in acquiring ERA. ERA will be a major information system; NARA has no previous experience in acquiring major information systems. Further, no comparable electronic archive system is now in existence, in terms of either complexity or scale. Finally, technology necessary to address some key requirements of ERA is not commercially available and will have to be developed. In light of these challenges, NARA will face significant difficulties in its ERA acquisition unless it addresses its information technology (IT) organizational capabilities; ERA system acquisition policies, plans, and practices; and its ability to control ERA’s cost and schedule. NARA has indicated that it needs to strengthen its IT organizational capabilities and has been taking steps to do so in three key areas: IT investment management provides a systematic method for agencies to minimize risks while maximizing the return on IT investments. An enterprise architecture provides a description—in useful models, diagrams, and narrative—of the mode of operation for an agency. It provides a perspective on agency operations both for the current environment and for the target environment, as well as a transition plan for sequencing from the current to the target environment. Managed properly, an enterprise architecture can clarify and help optimize the dependencies and relationships among an agency’s business operations and the underlying IT infrastructure and applications that support these operations. Information security is an important consideration for any organization that depends on information systems to carry out its mission. Our study of security management best practices found that leading organizations manage their information security risk through an ongoing cycle of risk management. NARA has made progress in strengthening these capabilities, but these efforts are incomplete. For example, NARA has improved its IT investment management. However, although it is continuing to develop an enterprise architecture, NARA does not plan to complete its target architecture in time to influence the ERA system definition and requirements. In addition, it has completed some elements of an information security program, but several key areas have not yet been addressed (such as individual system security plans), and NARA has not assessed the security risks to its major information systems. In addition, NARA has developed policies, plans, and practices to guide the ERA acquisition, but these do not consistently conform to industry standards and federal acquisition guidance. NARA has chosen to follow Institute of Electrical and Electronics Engineers (IEEE) standards in developing its policies, plans, and practices. 
Examples of these include (1) a concept of operations that describes the characteristics of a proposed system from the users’ viewpoint and provides the framework for all subsequent activities leading to system deployment, (2) an acquisition strategy that establishes how detailed acquisition planning and program execution will be accomplished, and (3) a risk management plan to identify potential problems and adjust the acquisition to mitigate them. However, key policy and planning documents are missing elements that are required by the standards and federal acquisition guidance: for example, the ERA acquisition strategy did not satisfy 15 of 32 content elements required by the relevant IEEE standard. Further, NARA is unable to track the cost and schedule of the ERA project. The ERA schedule does not include all program tasks and lacks a work breakdown structure, which would include detail on the amount of work and resources required to complete each task. Unless NARA can address these issues, the risk is increased that the ERA system will fail to meet user expectations, and that NARA may not have the information required to control the cost of the system or the time it will take to complete it. In light of these risks, our briefing included recommendations to NARA to address the weaknesses in its acquisition policies, plans, and procedures and to improve its ability to adequately track the project’s cost and schedule.
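The sketch below illustrates why a work breakdown structure matters for cost and schedule control: once each task carries a budget and a progress measure, simple earned-value arithmetic exposes variances. The tasks and figures are invented and do not describe the actual ERA program.

```python
# A minimal sketch of work-breakdown-structure rollup and earned-value
# arithmetic; tasks and dollar figures are invented for illustration.
tasks = [
    # (task, budgeted cost in $ millions, fraction complete, actual cost to date)
    ("Define requirements", 2.0, 1.00, 2.6),
    ("Design repository",   4.0, 0.50, 3.0),
    ("Build access portal", 3.0, 0.25, 1.5),
]

budget_at_completion = sum(b for _, b, _, _ in tasks)
earned_value = sum(b * pct for _, b, pct, _ in tasks)   # value of work done
actual_cost = sum(a for _, _, _, a in tasks)
cost_variance = earned_value - actual_cost              # negative means over cost

print(f"BAC ${budget_at_completion:.2f}M  EV ${earned_value:.2f}M  "
      f"AC ${actual_cost:.2f}M  CV ${cost_variance:+.2f}M")
```

Without per-task budgets of this kind, there is no baseline against which overruns or slips can even be measured, which is the gap the missing work breakdown structure leaves.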
The overriding problem in providing an opinion on IRS’ financial statements, reporting on its internal controls, and reporting on its compliance with laws and regulations is that IRS has not yet been able to provide support for major portions of the information presented in its financial statements and, in some cases where it was able to do so, the information was found to be in error. The principal purpose of our financial audits is to attest to the reliability of information presented in the financial statements and to independently verify management’s assertions about the effectiveness of internal controls and whether the agency complied with laws and regulations. When information that underpins the reported financial statements is not available for audit, it sometimes results in the auditor being unable to render an opinion on the financial statements as a whole. This is because the auditor cannot evaluate sufficient evidence as a basis for forming an opinion on whether the information presented in the financial statements is correct, determining whether all significant internal controls through which the information was managed and processed were effective, and testing whether or not the agency, in this case IRS, complied with laws and regulations. This situation was the case for IRS for fiscal year 1995. The following discusses the five material weaknesses we found. Each weakness was identified in IRS’ Federal Managers’ Financial Integrity Act (FMFIA) report for fiscal year 1995.

Revenues, including the related refunds, and accounts receivable are the two key areas in IRS’ efforts to report Custodial financial statements. IRS collects tax receipts, receives tax returns, makes tax refunds to, and corresponds with hundreds of millions of taxpayers each year. IRS also tries to obtain compliance by enforcing the tax laws through its monitoring of accounts receivable. These activities involve processing and tracking billions of paper documents and, in fiscal year 1995, handling a reported $1.4 trillion in tax receipts and a reported $122 billion in tax refunds. Processing this volume of money and paperwork requires substantive coordination among IRS’ more than 600 offices worldwide, approximately 12,000 financial institutions, and 12 Federal Reserve Banks throughout the country.

For fiscal year 1995, IRS made several attempts at extracting taxpayer information from its masterfiles—the only detailed record of taxpayer information IRS maintains—to support the amounts it reported for revenues in its financial statements. However, IRS has not been able to make these amounts agree to the amounts included in its financial management systems and Treasury records. Further, IRS is unable to determine that the correct amounts are transferred to the ultimate recipient of the collected taxes. For fiscal year 1995, the detailed transactions from its masterfile accounts were not provided to us in a timely manner to substantiate the reported amounts, and thus we could not determine the amount of the differences. The core financial management control weaknesses that contribute greatly to these problems are that IRS does not have comprehensive documentation on how its financial management system works and that it has not yet put into place the necessary procedures to routinely reconcile activity in its summary account records with that maintained in its detailed masterfile records of taxpayer accounts.
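The following sketch illustrates the routine reconciliation that is missing: detailed taxpayer transactions are rolled up by account and compared with summary ledger balances, and unexplained differences are flagged for research. The account names and amounts are invented; this is not IRS’ actual system.

```python
# A minimal sketch of detail-to-summary reconciliation; account names and
# amounts are invented for illustration.
from collections import defaultdict

detail = [  # (summary account, amount) from detailed taxpayer records
    ("individual_income", 590.0), ("individual_income", 10.0),
    ("corporate_income", 171.0), ("excise", 54.0),
]
general_ledger = {"individual_income": 600.0, "corporate_income": 171.0, "excise": 57.5}

totals = defaultdict(float)
for account, amount in detail:
    totals[account] += amount

for account, gl_balance in sorted(general_ledger.items()):
    diff = gl_balance - totals[account]
    if abs(diff) > 0.005:  # an unexplained difference to research and resolve
        print(f"{account}: ledger {gl_balance} vs detail {totals[account]} (diff {diff:+.2f})")
```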
This problem is further exacerbated by IRS’ financial management system, which was not designed to support financial statement presentation, and thus significantly hinders IRS’ ability to identify the ultimate recipient of collected taxes. This occurs because the system requires corporate and individual taxpayers to pay multiple taxes at the same time but does not readily identify how the payments apply to the various taxes paid. As a result, IRS is forced to allocate collections to the recipients based on the total tax owed as identified on the related tax return. The tax return is filed at a later date and may not contain sufficient information if the amount of taxes owed on the return does not agree with the amount paid, as is sometimes the case. IRS has developed computer programs to extract the detailed masterfile data from its records but continues to be unable to reconcile the detailed extracted data to the summary accounts.

In an interim effort to prepare reliable financial statement information, IRS is attempting to demonstrate the maximum exposure likely attributable to the unexplained differences and provide the necessary information to fix the identified system flaws. This interim plan involves IRS continuing its efforts to develop detailed comprehensive documentation of its current financial management system. We are monitoring IRS’ efforts closely, providing guidance and recommendations, and reporting at regular intervals to IRS’ senior management on the agency’s progress and actions needed to correct these problems in the short and long term.

As reported since our audit of IRS’ fiscal year 1992 financial statements, IRS cannot ensure that it distributes excise taxes based on collections, as required by law, because it bases these distributions on the amount reported on the tax return, that is, the assessed amount. However, during fiscal year 1995, IRS analyzed excise taxes by specific trust funds to determine if there were significant differences between taxes paid and amounts reported as owed on the return and found that these differences were insignificant. Because IRS completed this analysis after our audit was completed, we were unable to examine and determine the reliability of this information.

For fiscal year 1995, IRS attempted to test a statistical sample of its inventory of open assessments to categorize them between financial accounts receivable and compliance assessments. For all 4 fiscal years in which we have audited IRS’ financial statements, IRS has had difficulty separating, in its masterfile records of taxpayer accounts, its financial accounts receivable from the amounts it has assessed only for compliance purposes because the design of IRS’ masterfiles commingles these amounts. In fiscal year 1995, IRS expanded its previous years’ efforts by trying to first separate the inventory of assessments into accounts receivable and compliance assessments based on its coding of these assessments in its financial management system and then testing the accuracy of this coding to separate accounts receivable from compliance assessments on a taxpayer account basis. However, these efforts were unsuccessful because of mistakes made in performing the statistical tests and errors found in the coding of the assessments in IRS’ financial management systems, which made the sample results unreliable for projecting to the total inventory of outstanding assessments.
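The projection problem can be illustrated with simple sampling arithmetic: the sketch below estimates the receivable share of the assessment inventory from a sample and computes the interval a projection would rest on. The figures and the crude error-adjustment rule are invented for illustration only.

```python
# A minimal sketch of projecting the receivable share from a sample; the
# sample figures and the error-adjustment rule are invented for illustration.
import math

def project_receivables(sample_size, coded_receivable, coding_errors_found):
    """Estimate the receivable share and its standard error from a sample."""
    # Crude illustrative rule: treat each discovered miscoding as a flip out
    # of the receivable category.
    p = (coded_receivable - coding_errors_found) / sample_size
    se = math.sqrt(p * (1 - p) / sample_size)
    return p, se

p, se = project_receivables(sample_size=400, coded_receivable=260, coding_errors_found=70)
low, high = p - 1.96 * se, p + 1.96 * se
print(f"estimate {p:.1%}, 95% interval {low:.1%} to {high:.1%}")
# On a multibillion-dollar inventory, a wide interval like this is what makes
# the projection unusable for financial reporting.
```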
Our tests of the fiscal year 1995 data found significant errors at levels that made the results of any projections from the samples unreliable. The actions needed to resolve the key financial management control weaknesses in accounts receivable are consistent with recommendations from our prior reports and are as follows: (1) better review and approval procedures are needed before assessment information is entered into IRS’ masterfile system, (2) clearer lines of authority and responsibility are needed between IRS’ taxpayer service and the Chief Financial Officer’s operations to ensure that internal control procedures are properly identified and strictly adhered to, (3) procedures need to be developed for processing in-process accounts and properly applying them to the respective taxpayer accounts, and (4) periodic detailed taxpayer account reviews should be performed as a quality review measure to ensure that the proper coding is taking place for taxpayer accounts. In addition, IRS needs to (1) continue its efforts to review taxpayer accounts with amounts owed to ensure that they are properly coded and accounted for and (2) perform more macro analysis of its inventory of assessments to identify aberrations and other systemic problems that will need to be corrected to accurately report on accounts receivable. We will continue to monitor IRS’ progress in this area and provide guidance and recommendations as it proceeds.

For fiscal year 1995, IRS had a reported $8.1 billion in operating expenses and related assets and liabilities used and incurred in its administrative operations. The key asset in its administrative operations is its Fund Balance with Treasury accounts and the related Unexpended Appropriations accounts. Its operating expenses for fiscal year 1995 can be readily separated into $5.3 billion in payroll costs and $2.8 billion in nonpayroll costs. IRS has made progress in accounting for and reporting its administrative operations. In fiscal year 1992, for the most part, we were unsuccessful in our attempts to audit IRS’ records for its administrative operations. IRS’ accounting records were in total disarray, and it could not substantiate large portions of the reported amounts. In addition, internal control policies and procedures were either nonexistent, inappropriately focused, or not followed. For fiscal year 1995, IRS had a core accounting system in place that tracked its financial management activity. However, two critical problems identified in our fiscal year 1992 audit have persisted: (1) IRS’ Fund Balance with Treasury accounts remain unreconciled, though some progress has been made toward that end, and (2) IRS has not been able to provide support as to whether and when certain nonpayroll goods and services paid for were received and, in instances where support existed, we found that the cost associated with the purchase was often recorded and reported in the wrong fiscal year.

IRS’ Fund Balance with Treasury accounts historically were not being reconciled. For the most part, IRS’ personnel were only tracking the gross differences between their accounting records and what Treasury (the equivalent of their bank) reported to them for their administrative receipts and disbursements. This resulted in years of accumulating unreconciled amounts that were never researched and resolved and that became difficult to research and resolve once the amounts were required to be audited.
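To illustrate the reconciliation workload just described, the sketch below ages hypothetical reconciling items into the categories discussed in this report: items under 6 months old belong on the statement of differences, older items sit in budget clearing accounts, and suspense items a year or more old are flagged for research. All item types and amounts are invented.

```python
# A minimal sketch of aging reconciling items; categories mirror those
# discussed in the text, but the items and amounts are invented.
from datetime import date

def age_in_months(item_date, as_of):
    return (as_of.year - item_date.year) * 12 + (as_of.month - item_date.month)

def classify(items, as_of):
    buckets = {"statement_of_differences": [], "budget_clearing": [], "overdue_suspense": []}
    for item in items:
        months = age_in_months(item["date"], as_of)
        if item["type"] == "suspense" and months >= 12:
            buckets["overdue_suspense"].append(item)   # needs research and resolution
        elif months < 6:
            buckets["statement_of_differences"].append(item)
        else:
            buckets["budget_clearing"].append(item)    # aged, still unreconciled
    return buckets

items = [
    {"id": "D-1", "type": "difference", "date": date(1995, 7, 1), "amount": 1.2},
    {"id": "D-2", "type": "difference", "date": date(1994, 10, 1), "amount": -3.4},
    {"id": "S-1", "type": "suspense", "date": date(1993, 9, 1), "amount": 0.8},
]
for bucket, found in classify(items, date(1995, 9, 30)).items():
    print(bucket, [i["id"] for i in found])
```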
IRS’ Fund Balance with Treasury accounts have been unreconciled in each of the years of our prior audits—1992 through 1994—with net reconciling differences in the millions of dollars that were made up of gross reconciling differences in the hundreds of millions of dollars. We were not provided the information to fully determine the gross amount of the differences for fiscal year 1995 and, thus, while we do know the accounts remain unreconciled, we do not know by how much. Over the last 2 fiscal years, IRS has made adjustments to its accounting records to write off large portions of the gross unreconciled amounts where it could not determine what the correct disposition of the difference should be after several efforts at researching the items. In addition, it hired a contractor to identify the differences between its accounting records and what it had reported to Treasury as its activity in its Fund Balance with Treasury accounts. IRS, though, has still not fully reconciled the differences between its records and Treasury’s records, both those reported through its budget clearing accounts (items more than 6 months old that remain unreconciled) and those identified on its statement of differences (similar items less than 6 months old). Similarly, IRS still needs to investigate and resolve amounts in its suspense accounts, many of which have been in suspense for 1 year or more. In addition, IRS has not disposed of some of the reconciling items between its accounting records and what it reported to Treasury that were identified by the contractor. Through further contractor assistance or more intensified internal efforts, IRS must get these accounts fully reconciled. In addition, IRS needs to look more closely at the skill mix of the staff assigned responsibility for completing this reconciliation process.

If these accounts remain unreconciled, it will continue to be difficult to provide an opinion on either IRS’ administrative financial statements or management’s assertion about the effectiveness of internal controls. It will also continue to be impossible to determine whether IRS has complied with all of the appropriate laws and regulations to which it is subject. Notwithstanding the problems these unreconciled accounts present for rendering an opinion, they make it impossible, or at best difficult, for IRS or anyone else to know whether its operating funds have been improperly spent, and they call into question the accuracy of IRS’ reported operating expenses, assets, and liabilities.

IRS did not provide support as to whether and when it received goods and services for significant portions of its nonpayroll operating expenses and, in several instances where the support was provided, we found that the cost should have been included in another period. Simply stated, this situation is much like when IRS audits a taxpayer. If the taxpayer cannot show independent evidence that an expense that was deducted on the tax return was incurred in the year under audit, the expense would be disallowed and the taxpayer’s tax liability increased. Likewise, when IRS cannot provide support for its reported expenses, or the support shows that the expenses should properly be included in a different fiscal year, the auditor cannot provide an opinion on the amounts. Simply put, we cannot determine whether an expense belongs in the current period—when no support exists—or whether it must be adjusted out of the current year’s expenses—when the support shows it is in the wrong period.
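The support problem lends itself to a simple illustration: the sketch below approves a payment only when the invoice ties to a procurement order and to receipt-and-acceptance records, and it charges the expense to the fiscal year of receipt. The record layouts are hypothetical and do not describe IRS’ actual payment system.

```python
# A minimal sketch of a three-way match before payment; record layouts are
# hypothetical, not IRS' actual payment system.
def approve_payment(invoice, orders, receipts):
    """Match invoice to order and receipt; return (approved, reason or fiscal year)."""
    order = orders.get(invoice["order_id"])
    if order is None:
        return False, "no matching procurement order"
    receipt = receipts.get(invoice["order_id"])
    if receipt is None or not receipt["accepted"]:
        return False, "no evidence goods or services were received and accepted"
    if invoice["amount"] > order["amount"]:
        return False, "billed amount exceeds the order"
    return True, f"record expense in FY{receipt['fiscal_year']}"

orders = {"PO-17": {"amount": 125_000}}
receipts = {"PO-17": {"accepted": True, "fiscal_year": 1995}}
print(approve_payment({"order_id": "PO-17", "amount": 120_000}, orders, receipts))
print(approve_payment({"order_id": "PO-99", "amount": 5_000}, orders, receipts))
```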
Our interim testing of IRS’ accounting records covering the first 10 months of fiscal year 1995 showed significant amounts of nonpayroll costs that were either unsupported or recorded in the wrong period. IRS’ nonpayroll expenses that we reviewed included purchases from other federal agencies as well as from commercial vendors for printing services, postage, computer equipment, and many other costs. IRS’ lack of control over receipt and acceptance of goods and services, combined with its problems in linking the controls over goods and services purchased to the payment for these goods and services, makes it especially vulnerable to vendors, both federal and commercial, billing IRS for goods and services not provided or for amounts in excess of what was provided. This would be comparable to an individual or business receiving an invoice and paying it without verifying that the purchased item had been received and accepted, based on an assumption that someone else in the household or business received it. For example, IRS has an inventory management system that tracks when printed tax forms are received and used. However, the information tracked in this system is not used or integrated with the payment system for making vendor payments nor with any other system used to account for and report IRS’ operating expense for printing these forms.

In our prior year reports, we stated that IRS’ computer security environment was inadequate. Our review of controls over IRS’ computerized information systems, done to support our fiscal year 1995 audit, found that IRS has made some progress in addressing and initiating actions to resolve prior years’ computer security issues; however, some of the fundamental security weaknesses we previously identified continued to exist in this fiscal year. We will be studying these issues further and reporting on them in greater detail in a future report. These deficiencies in internal controls may adversely affect any decision by management that is based, in whole or in part, on information that is inaccurate because of the deficiencies. Unaudited financial information reported by the Internal Revenue Service, including budget information, also may contain misstatements resulting from these deficiencies.

As described above, we are unable to give an opinion on the Principal Financial Statements for fiscal year 1995. In addition, we were unable to give an opinion on the Principal Financial Statements for fiscal year 1994. We gained an understanding of internal controls designed to safeguard assets against loss from unauthorized acquisition, use, or disposition; assure the execution of transactions in accordance with laws governing the use of budget authority and with other laws and regulations that have a direct and material effect on the Principal Financial Statements or that are listed in Office of Management and Budget (OMB) audit guidance and could have a material effect on the Principal Financial Statements; and properly record, process, and summarize transactions to permit the preparation of reliable financial statements and to maintain accountability for assets. For fiscal years 1995 and 1994, we do not express an opinion on internal controls because the purpose of our work was to determine our procedures for auditing the financial statements, not to express an opinion on internal controls.
However, we found that the material weaknesses, described in the Significant Matters section of this report, resulted in ineffective controls that could lead to losses, noncompliance, or misstatements that are material in relation to the financial statements. Our internal control work would not necessarily disclose all material weaknesses. Because of the limitations on the scope of our work as discussed above, we were unable to test compliance with the laws we considered necessary; accordingly, we are unable to report on IRS’ compliance with laws and regulations.

In our prior year reports (see footnote 1), we made 59 recommendations aimed at resolving IRS’ financial management problems. In our assessment this year, we determined that IRS had completed action on 17 of these recommendations. See appendix I for the status of IRS’ implementation efforts on the 59 recommendations from our prior year reports. IRS has stated its intention to commit the necessary resources and management oversight to resolve its financial management weaknesses and receive its first opinion on the fiscal year 1996 financial statements. In this regard, we are providing advice to IRS on how to resolve its long-standing and pervasive financial management problems.

IRS management is responsible for
preparing the annual financial statements in conformity with the basis of accounting described in note 1 of the Administrative and Custodial financial statements;
establishing, maintaining, and assessing the internal control structure to provide reasonable assurance that the broad control objectives of the Federal Managers’ Financial Integrity Act (FMFIA) are met; and
complying with applicable laws and regulations.

We attempted to perform audit procedures on the limited information IRS was able to provide; however, for the reasons stated above, we were unable to perform the necessary audit procedures to opine on IRS’ Principal Financial Statements. Except for the limitations on the scope of our work on (1) the Principal Financial Statements, (2) internal controls, and (3) compliance with laws and regulations described above, we did our work in accordance with generally accepted government auditing standards and OMB Bulletin 93-06, “Audit Requirements for Federal Financial Statements.”

We requested written comments on a draft of this report from you or your designee. Your office provided us with written comments, which are discussed in the following section and reprinted in appendix II. In commenting (see appendix II) on a draft of this report, IRS generally agreed with the facts as stated in our report. In addition, IRS reaffirmed its commitment to ensuring the integrity of its financial data.

The financial statements and notes presented are the following:
Statement of Financial Position (Administrative)
Statement of Operations (Administrative)
Notes to Principal Financial Statements (Administrative)
Statement of Financial Position (Custodial)
Notes to Financial Statements (Custodial)
Statement of Financial Position (Revolving Fund)
Note to Financial Statements (Revolving Fund)

The results of our efforts to audit IRS’ fiscal year 1992, 1993, and 1994 Principal Financial Statements were presented in our reports entitled Financial Audit: Examination of IRS’ Fiscal Year 1992 Financial Statements (GAO/AIMD-93-2, June 30, 1993), Financial Audit: Examination of IRS’ Fiscal Year 1993 Financial Statements (GAO/AIMD-94-120, June 15, 1994), and Financial Audit: Examination of IRS’ Fiscal Year 1994 Financial Statements (GAO/AIMD-95-141, August 4, 1995).
The system and internal control weaknesses identified in the 1992 report and recommendations to correct them were discussed in more detail in six reports. In fiscal year 1993, we issued one report that included the system and internal control weaknesses and recommendations. For fiscal year 1994, we issued one report that contained no new recommendations. We determined the status of the following recommendations based on our audit work at IRS during fiscal year 1995 and on our discussions with IRS officials. Our assessments of IRS’ actions for the most significant recommendations are discussed in the report. However, we have not fully assessed the appropriateness or effectiveness of all of the responses identified in the following table. We plan to update our assessment of IRS’ responses as part of our fiscal year 1996 audit.

Financial Audit: IRS Significantly Overstated Its Accounts Receivable (GAO/AFMD-93-42, May 6, 1993)

Provide the IRS Chief Financial Officer authority to ensure that IRS accounting system development efforts meet its financial reporting needs. At a minimum, the Chief Financial Officer’s approval of related system designs should be required.

Take steps to ensure the accuracy of the balances reported in IRS financial statements. In the long term, this will require modifying IRS systems so that they are capable of (1) identifying which assessments currently recorded in the Master File System represent valid receivables and (2) designating new assessments that should be included in the receivables balance as they are recorded. Until these capabilities are implemented, IRS should rely on statistical sampling to determine what portion of its assessments represent valid receivables.

Clearly designate the Chief Financial Officer as the official responsible for coordinating the development of performance measures related to receivables and for ensuring that IRS financial reports conform with applicable accounting standards.

Modify the IRS methodology for assessing the collectibility of its receivables by
—including only valid accounts receivable in the analysis;
—eliminating, from the gross receivables balance, assessments determined to have no chance of being collected;
—including an analysis of individual taxpayer accounts to assess their ability to pay;
—basing group analyses on categories of assessments with similar collection risk characteristics; and
—considering current and forecast economic conditions, as well as historical collection data, in analyses of groups of assessments.

Once the appropriate data are accumulated, IRS may use modeling to analyze collectibility of accounts on a group basis, in addition to separately analyzing individual accounts. Such modeling should consider factors that are essential for estimating the level of losses, such as historical loss experience, recent economic events, and current and forecast economic conditions. In the meantime, statistical sampling should be used as the basis for both individual and group analyses.

IRS Information Systems: Weaknesses Increase Risk of Fraud and Impair Reliability of Management Information (GAO/AIMD-93-34, September 22, 1993)

Limit access authorizations for individual employees to only those computer programs and data needed to perform their duties and periodically review these authorizations to ensure that they remain appropriate.

Monitor efforts to develop a computerized capability for reviewing user access activity to ensure that it is effectively implemented.
Establish procedures for reviewing the access activity of unit security representatives.

Use the security features available in IRS’ operating systems software to enhance system and data integrity.

Require that programs developed and modified at IRS headquarters be controlled by a program librarian responsible for (1) protecting such programs from unauthorized changes, including recording the time, date, and programmer for all software changes, and (2) archiving previous versions of programs.

Establish procedures requiring that all computer program modifications be considered for independent quality assurance review.

Formally analyze Martinsburg Computing Center’s computer applications to ensure that critical applications have been properly identified for purposes of disaster recovery.

Test the disaster recovery plan.

Monitor service center practices regarding the development, documentation, and modification of locally developed software to ensure that such software use is adequately controlled.

Review the current card key access system in the Philadelphia Service Center to ensure that only users who need access to the facilities protected by the system have access and that authorized users each have only one unique card key.

Establish physical controls in the Philadelphia Service Center to protect computers with access to sensitive data that are not protected by software access controls.

Financial Management: IRS’ Self-Assessment of Its Internal Control and Accounting Systems Is Inadequate (GAO/AIMD-94-2, October 13, 1993)

The Senior Management Council should coordinate, monitor, or oversee activities to (1) establish and implement proper written procedures that provide for the identification, documentation, and correction of material weaknesses, (2) provide classroom training and guidance materials to all review staff, (3) develop effective corrective action plans that address the fundamental causes of the weaknesses, and (4) verify the effectiveness of corrective actions before removing reported weaknesses from IRS’ records.

Financial Management: Important IRS Revenue Information Is Unavailable or Unreliable (GAO/AIMD-94-22, December 21, 1993)

Develop a method to determine specific taxes collected by trust fund so that the difference between amounts assessed and amounts collected is readily determinable and excise tax receipts can be distributed as required by law. This could be done by obtaining specific payment detail from the taxpayer, consistent with our April 1993 FTD report. Alternatively, IRS might consider whether allocating payments to specific taxes based on the related taxpayer returns is a preferable method.

Determine the trust fund revenue information needs of other agencies and provide such information, as appropriate. If IRS is precluded by law from providing needed information, IRS should consider proposing legislative changes.

Identify reporting information needs, develop related sources of reliable information, and establish and implement policies and procedures for compiling this information. These procedures should describe any (1) adjustments that may be needed to available information and (2) analyses that must be performed to determine the ultimate disposition and classification of amounts associated with in-process transactions and amounts pending investigation and resolution.
Establish detailed procedures for (1) reviewing manual entries to the general ledger to ensure that they have been entered accurately and (2) subjecting adjusting entries to supervisory review to ensure that they are appropriate and authorized.

Monitor implementation of actions to reduce the errors in calculating and reporting manual interest, and test the effectiveness of these actions.

Give a priority to the IRS efforts that will allow for earlier matching of income and withholding information submitted by individuals and third parties.

Financial Management: IRS Does Not Adequately Manage Its Operating Funds (GAO/AIMD-94-33, February 9, 1994)

Monitor whether IRS’ new administrative accounting system effectively provides managers up-to-date information on available budget authority.

Promptly resolve differences between IRS and Treasury records of IRS’ cash balances and adjust accounts accordingly.

Promptly investigate and record suspense account items to appropriate appropriation accounts.

Perform periodic reviews of obligations, adjusting the records for obligations to amounts expected to be paid, and removing expired appropriation balances from IRS records as stipulated by the National Defense Authorization Act for Fiscal Year 1991.

Monitor compliance with IRS policies requiring approval of journal vouchers and enforcing controls intended to preclude data entry errors.

Review procurement transactions to ensure that accounting information assigned to these transactions accurately reflects the appropriate fiscal year, appropriation, activity, and sub-object class.

Provide (1) detailed written guidance for all payment transactions, including unusual items such as vendor credits, and (2) training to all personnel responsible for processing and approving payments.

Revise procedures to require that vendor invoices, procurement orders, and receipt and acceptance documentation be matched prior to payment and that these documents be retained for 2 years.

Revise procedures to incorporate the requirements that accurate receipt and acceptance data on invoiced items be obtained prior to payment and that supervisors ensure that these procedures are carried out.

Revise document control procedures to require IRS units that actually receive goods or services to promptly forward receiving reports to payment offices so that payments can be promptly processed.

Monitor manually computed interest on late payments to determine whether interest is accurately computed and paid.

Enforce existing requirements that early payments be approved in accordance with OMB Circular A-125.

Require payment and procurement personnel, until the integration of AFS and the procurement system is completed as planned, to periodically (monthly or quarterly) reconcile payment information maintained in AFS to amounts in the procurement records and promptly resolve noted discrepancies.

Require the description and period of service for all invoiced items to be input in AFS by personnel responsible for processing payments, and enhance the edit and validity checks in AFS to help prevent and detect improper payments.

Establish procedures, based on budget categories approved by OMB, to develop reliable data on budget and actual costs.

Use AFS’ enhanced cost accumulation capabilities to monitor and report costs by project in all appropriations.
Financial Management: IRS Lacks Accountability Over Its ADP Resources (GAO/AIMD-93-24, August 5, 1993)
Provide the agency’s CFO with the authority to ensure that data maintained by IRS’ ADP inventory system meet its management and reporting needs. Provide that any software purchases, development, or modifications related to this system are subject to the CFO’s review and approval.
Develop and implement standard operating procedures that incorporate controls to ensure that inventory records are accurately maintained. Such controls should include
— establishing specific procedures to ensure the prompt and accurate recording of acquisitions and disposals in IRS’ ADP fixed asset system, including guidance addressing the valuation of previously leased assets;
— reconciling accounting and inventory records monthly as an interim measure until the successful integration of inventory and accounting systems is completed as planned; and
— implementing mechanisms for ensuring that annual physical inventories at field locations are effectively performed, that discrepancies are properly resolved, and that inventory records are appropriately adjusted.
Oversee IRS efforts for ensuring that property and equipment inventory data, including telecommunications and electronic filing equipment, are complete and accurate.
Determine what information related to ADP resources, such as equipment condition and remaining useful life, would be most useful to IRS managers for financial management purposes and develop a means for accounting for these data.
Develop an interim means to capture relevant costs related to in-house software development.
Financial Audit: Examination of IRS’ Fiscal Year 1993 Financial Statements (GAO/AIMD-94-120, June 15, 1994)
Ensure that system development efforts provide reliable, complete, timely, and comprehensive information with which to evaluate the effectiveness of its enforcement and collection programs;
Establish and implement procedures to analyze the impact of abatements on the effectiveness of assessments from IRS’ various collection programs; and
Reconcile detailed revenue transactions for individual taxpayers to the master file and general ledger.
Establish and implement procedures to proactively identify errors that occur during processing of data, and design and implement improved systems and controls to prevent or detect such errors in the future.
Monitor its systems and controls to regularly identify problems as they occur by establishing clear lines of responsibility and communication from top management to the lowest staff levels,
Develop action plans that are agreed upon by all affected groups and individuals to correct problems identified, and
Continuously monitor corrective actions to ensure that progress is achieved.
Periodically compare information in payroll records to supporting personnel information,
Use current information to periodically update estimated future TSM costs, and
Develop reliable detailed information supporting its reported accounts payable balances.
Develop and implement systems and standard operating procedures that incorporate controls to ensure that seized asset inventory records are accurately maintained, which include
— establishing specific procedures to ensure the prompt and accurate recording of seizures and disposals, including guidance addressing the valuation of seized assets;
— reconciling accounting and inventory records monthly as an interim measure until the successful integration of inventory and accounting systems is completed; and
— implementing mechanisms for ensuring that annual physical inventories at field locations are effectively performed, that discrepancies are properly resolved, and that inventory records are appropriately adjusted.
Determine what information related to seized assets, such as proceeds and liens and other encumbrances, would be most useful to IRS managers for financial management purposes and develop a means for accounting for these data. | Pursuant to a legislative requirement, GAO reviewed the Internal Revenue Service's (IRS) financial statements for fiscal years 1995 and 1994. GAO found that: (1) it could not express an opinion on the 1995 IRS financial statements due to limitations in the scope of its work; (2) the information in the 1995 statements may be unreliable; (3) ongoing financial management problems include IRS inability to verify or reconcile taxpayer revenue and refunds to accounting records, substantiate amounts reported for various types of taxes collected, verify nonpayroll operating expenses, reconcile reported appropriations with Department of the Treasury records, and determine the reliability of estimated accounts receivable balances; (4) IRS has not resolved many of its financial management problems, but it has developed software to capture detailed revenue and refund transactions and is completing documentation of its financial management systems to aid in system improvements; (5) significant material weaknesses in IRS controls over recordkeeping exist, including lax computer security; (6) it could not test IRS compliance with applicable laws and regulations; and (7) IRS has completed 17 of 59 recommendations for improving its financial management systems. |
Section 233 of the National Defense Authorization Act for Fiscal Year 2014 required DOD to report on various regional BMD topics, including the eight specific elements presented in table 1, by June 24, 2014. DOD identified its current approach to regional BMD in the 2010 Ballistic Missile Defense Review Report. In that report, DOD stated it would match U.S. BMD strategies, policies, and capabilities to the requirements of current and future threats and use that information to inform BMD planning, budgeting, and oversight. DOD also noted that phased, adaptive approaches to BMD would enable a flexible, scalable response to BMD threats around the world by incorporating new technologies quickly and cost-effectively, and described the advantages, over fixed assets, of mobile BMD assets that can be readily transported from one region to another. In addition, DOD indicated that new assets would undergo testing that enables assessment under realistic operational conditions prior to deployment. Finally, DOD emphasized working with regional allies to strengthen BMD and its deterrent value. The 2010 Ballistic Missile Defense Review Report indicates that the United States would pursue a phased, adaptive approach to missile defense within each region that is tailored to the threats and circumstances unique to each region. An area of emphasis in the 2010 Ballistic Missile Defense Review Report was the EPAA—the U.S. approach to regional BMD in Europe. In the 2010 Ballistic Missile Defense Review Report, DOD also discussed the development of regional phased, adaptive approaches to BMD in the Asia-Pacific and the Middle East. The 2010 report highlighted differences among the ballistic missile threats posed to each region, as well as the differences among the regional defensive arrangements that exist between the United States and its partners. In an August 2013 report on regional BMD issues, DOD stated that its process of working with regional allies and partners was well under way and included the participation, along with the United States, of some allies and partners in regional command and control centers that conduct BMD operations. In the Pacific, DOD noted that cooperation is most robust with Japan, that other allies and partners participate to varying degrees, and that allies and partners in the Asia-Pacific region generally have exhibited an increasing interest in enhanced cooperation with DOD. DOD’s August 2013 report also noted that in the Middle East the United States is working with a number of Gulf Cooperation Council States on a bilateral basis, including supporting the purchase of BMD systems through the Foreign Military Sales program. DOD’s regional BMD effort consists of a number of specific weapon systems or elements that compose the BMD system as a whole. They are the following:
Command and Control, Battle Management, and Communications (C2BMC): a system that integrates individual BMD elements and allows users to plan BMD operations, maintain situational awareness and communications, and manage networked sensors.
Army Navy/Transportable Radar Surveillance and Control Model 2 (AN/TPY-2) X-Band Radar: a sensor that tracks ballistic missiles in flight.
Aegis BMD Weapons System: a ship-based weapon system that consists of a radar, software, and processors to track threat missiles and cue Aegis Standard Missile-3 (SM-3) interceptors.
Aegis Ashore: a land-based version of the Aegis BMD interceptor system, which will employ Aegis BMD Weapons System upgrades and SM-3 upgrades as they become available.
Standard Missile-3 (SM-3): a family of defensive missiles that intercept regional threat missiles of various ranges.
Terminal High Altitude Area Defense (THAAD): a mobile, ground-based missile defense system that includes a fire control and communications system, a radar, interceptors, and other support equipment.
Patriot Advanced Capability-3 (PAC-3): a mobile defense against short-range missiles. It is now operated and fielded by the U.S. Army and may be used in a variety of regional BMD approaches.
According to DOD, various versions of these weapon systems are being deployed in Europe, the Asia-Pacific region, and the Middle East, but the EPAA is the only regional approach described extensively in the 2010 Ballistic Missile Defense Review Report. To support Phase 1 of the EPAA for operations in Europe, by December 2011 MDA delivered an AN/TPY-2 X-band radar, an Aegis BMD ship with SM-3 Block IA missiles, and an upgrade to C2BMC. As we reported in March 2014, the later phases of the EPAA are intended to provide improved integration and interoperability among sensors and interceptor systems, which would expand the area being defended, as well as improve the ability to defend against attacks involving a larger number of incoming missiles. Specifically, for Phase 2, MDA plans to deploy improved versions of the Aegis BMD Weapons System on ships and Aegis Ashore in Romania with the next generation of SM-3 interceptor, called SM-3 Block IB. MDA also plans to field improvements to C2BMC, upgrading an existing version in the 2015 time frame and fielding a new version in 2017. For Phase 3, MDA is developing further improvements to the Aegis BMD system, including a new version of the weapons system and new interceptor, called SM-3 Block IIA, as well as an additional Aegis Ashore installation in Poland and further improvements to C2BMC for fielding in 2018. Figure 1 depicts the weapon systems that DOD plans to deploy in and around Europe in support of the EPAA in its three phases. (Source: DOD, Report to Congress: Regional Ballistic Missile Defense.) Compared to the statutory reporting requirements, DOD’s June 2014 regional BMD report addressed five of the eight required reporting elements, and partially addressed the remaining three elements. DOD addressed elements that describe the overall risk assessment from the Global Integrated Air and Missile Defense Assessment, the role of regional missile defenses in the homeland defense mission, the integration of offensive and defensive capabilities, and two elements on the roles and contributions of allies. DOD partially addressed the remaining three reporting elements, regarding the alignment of regional approaches to missile defense with combatant command integrated priorities, the concept of operations for EPAA, and the testing and development of key EPAA elements. Table 2 summarizes our assessment of DOD’s report. Additionally, through interviews with DOD officials and from our application of generally accepted standards that define a sound and complete defense research study, we found that DOD’s report did not include key details for some required reporting elements that we believe could have benefitted congressional defense committees’ oversight of DOD’s regional BMD programs.
Generally accepted standards that define a sound and complete defense research study include that a report provide complete, accurate, and relevant information for the client and stakeholders. However, DOD’s report does not consistently meet this standard, based on GAO’s review. For example, the standards for the presentation of results state that findings should be complete and accurate, but we found that key information regarding the characterization of the testing and development of EPAA systems was incomplete. DOD’s June 2014 report could have provided additional details for several of the required reporting elements. Specifically: In support of element F, regarding integration of offensive and defensive capabilities, the report described some plans regarding implementation of the imperatives suggested in the Joint Integrated Air and Missile Defense Vision for 2020. We determined that DOD’s report met the statutory requirement to describe the manner in which enhanced integration of offensive and defensive capabilities will fit into regional missile defense planning and force structure assessments. Although not required, we found the report did not provide comprehensive information on how DOD will identify and address potential capability and capacity shortfalls in support of air and missile defense missions, nor did it provide a description of policies to increase cooperation among partners and allies, as emphasized by the Joint Integrated Air and Missile Defense Vision for 2020, which is information that provides more insight into how DOD manages regional BMD resources and risks. In support of elements G and H, regarding allied contributions to regional BMD, DOD’s report included some information that did not relate to regional missile defense and characterized a number of allied contributions as notional, which could misrepresent the extent to which particular allies and partners contribute to regional BMD. For example, the report mentions Denmark hosting an Upgraded Early Warning Radar in Greenland as an allied contribution to European missile defense, but that radar is used exclusively for supporting the homeland defense BMD mission. Additionally, the report did not include estimates of actual or potential cost savings derived from taking advantage of economies of scale or a reduced number of U.S. deployments due to allied capabilities. For instance, Japan’s effort to develop and deploy the Aegis BMD Weapon System on Japanese ships is mentioned in the report, but without concrete information on the effect that may have on U.S. resources. Appendix I highlights U.S. and allied contributions to regional BMD operations. Additionally, we determined that DOD’s report omitted key details regarding its approach to regional BMD for the three elements that it partially addressed: the combatant commands’ force structure and deployment options, concepts for operating with NATO, and the development of EPAA systems. We believe that by not including these details, although not required, DOD reduced the report’s usefulness to the congressional defense committees and to their oversight of DOD’s regional BMD programs. In support of element B, regarding the combatant commands’ deployment options, as stated earlier, we determined that DOD’s report partially addressed the required reporting element because it did not include key details about U.S. European Command’s and U.S.
Pacific Command’s planned options to increase BMD capability in response to an imminent threat, nor did the report provide a comprehensive analysis regarding how the various regional approaches to BMD will meet combatant command integrated priorities. DOD’s report also did not provide an analysis of the BMD assets that each combatant command needs to meet its respective integrated priorities, nor did it describe how many assets each combatant command has in-theater to address these requirements, or identify how many assets DOD could reasonably deploy into the area if additional capability were needed during a crisis. U.S. Strategic Command and the Joint Staff track the deployment and availability of BMD forces, such as the Aegis BMD Weapon System, THAAD, and PAC-3, and make priority recommendations for their deployment, so that senior DOD decision makers can assess risk and priorities when allocating assets among regions. In support of element C, related to operational control of assets in Europe, we found that DOD’s report lacked key details about how command of assets is allocated between U.S. European Command and NATO. For example, in the briefing referenced by the report, DOD provides some description of how Aegis ships would be transferred from one command’s authority to the other and provides the current operational control status for the forward-based AN/TPY-2 X-band radar. However, neither the report nor the briefing contains the operational details that are important to fully understanding the circumstances under which each of the relevant BMD systems could be transferred from one command’s authority to another. This information is important to fully understanding the implications of how control of BMD assets is allocated, as well as the effect those circumstances have on various BMD systems. In support of element D, regarding the development and testing of BMD systems that are part of EPAA, DOD’s report did not include details about C2BMC and Aegis BMD testing and development issues, and the lack of such detail may limit Congress’ ability to understand the extent to which the EPAA system can be integrated: C2BMC: The 2010 Ballistic Missile Defense Review Report and MDA’s acquisition and system engineering documentation underscore the importance of C2BMC for all regional approaches, since it is the system that enables system-level capabilities. In EPAA, C2BMC is necessary to link allied systems, such as NATO’s Active Layered Theater Ballistic Missile Defense, with the U.S. systems. It also controls the AN/TPY-2 X-band radar, and integrates Aegis BMD ships, as well as additional sensors and an Aegis Ashore as they become available in Phases 2 and 3. As the integrator, C2BMC allows the BMD system to defend against more missiles simultaneously, to conserve interceptor inventory, and to defend a larger area than individual systems operating independently. For Phase 2 of the EPAA, MDA plans to upgrade the C2BMC system in 2015 to address new threats and, in 2017, to integrate additional sensors and improve the ability of Aegis BMD to launch an interceptor before its shipboard radar acquires a threat missile. In 2018, for Phase 3 of the EPAA, MDA plans additional C2BMC upgrades, including some that would enable the Aegis BMD to intercept missiles based on tracks passed through C2BMC from forward-based AN/TPY-2 X-band radars, without having to detect the threat with its own radar.
However, our current and previous work indicates that some capability upgrades to C2BMC for Phase 3 of the EPAA have been deferred indefinitely, which DOD did not reference in its June 2014 report. For example, according to our analysis of MDA’s system engineering documentation, we found that MDA has deferred the delivery of a key C2BMC capability that would further integrate the BMD system and improve its management of limited BMD resources by allowing C2BMC to directly send engagement commands to interceptor systems. According to the Director, Operational Test and Evaluation, effective “battle management” requires C2BMC to not only collect and process information from sensors and weapons, as it currently does, but to also determine which threats should be engaged by which weapon to produce the highest probability of engagement success and then transmit this information back to the sensors and weapons. Aegis BMD Weapon System: DOD’s report did not fully describe the performance and acquisition risks to the Aegis BMD systems slated for Phase 2 of the EPAA, which we have identified through our prior work. Aegis BMD is the primary interceptor system for EPAA. MDA plans upgrades for Phase 2 of the EPAA that increase the types and number of threats it can engage. However, in April 2014, we found that one SM-3 Block IB failed in flight during an interceptor test in September 2013, which, according to our current work, could increase reliability risk. Since then, DOD officials told us that MDA is seeking to maintain reliability of the interceptor by developing a redesign; it is unclear when this redesign will be flight-tested. MDA told us that it plans to ground-test the redesign. Moreover, our reviews of MDA’s Aegis BMD Baseline Execution Reviews from April 2013, August 2013, and June 2014 indicated that the certification of a new version of Aegis BMD software, called Aegis BMD 4.1, which is needed for Phase 2, had been delayed at least 3 months past Phase 2 declaration. Additionally, based on our analysis of MDA’s August 2013 and June 2014 Baseline Execution Reviews, MDA continues to discover software defects faster than it is able to fix them for another version of Aegis BMD, also planned for Phase 2 of the EPAA. Furthermore, although DOD officials told us that the Aegis Ashore program is on track, our review of MDA’s March 2014 test documentation identified schedule slips that delayed Aegis Ashore’s participation in key interoperability tests, compressing the time to rectify issues should they be discovered prior to the planned Phase 2 declaration in 2015. DOD officials who developed the June 2014 regional BMD report told us that they used their best judgment in determining the appropriate level of detail for the report. The officials added that their goal was to address each of the required reporting elements concisely. Furthermore, they explained that they regularly provide more detailed analysis on some of these topics to congressional defense committees via periodic briefings, and that they did not want to provide duplicative or unnecessary information. Although we recognize the need for professional judgment by DOD officials when preparing the report, our review concluded that DOD’s report did not include details that we believe could have made the report more useful to Congress in its oversight of DOD’s regional BMD programs.
However, DOD’s report was prepared in response to a onetime, nonrecurring mandate, and therefore we are not making any recommendations to amend the report and provide additional detail. DOD reviewed a draft of this report, but did not provide formal agency comments. DOD did provide technical comments, and we incorporated these changes as appropriate. We are sending copies of this report to the appropriate congressional committees and to the Secretary of Defense; the Chairman, Joint Chiefs of Staff; the Commander, U.S. Strategic Command; and the Director, MDA. This report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-9971 or KirschbaumJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II.
Appendix I: Allied Contributions to Regional Ballistic Missile Defense (BMD)
Figure 2 summarizes key information provided by the Department of Defense (DOD) regarding the contributions of allies to regional BMD.
In addition to the contact named above, Kevin O’Neill, Assistant Director; David Best; Patricia Donahue; Amie Lesser; Randy Neice; Wiktor J. Niewiadomski; Richard Powelson; Terry Richardson; Mike Shaughnessy; and Jina Yu made key contributions to this report.
Ballistic Missile Defense: Actions Needed to Address Implementation Issues and Estimate Long-Term Costs for European Capabilities. GAO-14-314. Washington, D.C.: April 11, 2014.
Missile Defense: Mixed Progress in Achieving Acquisition Goals and Improving Accountability. GAO-14-351. Washington, D.C.: April 1, 2014.
Regional Missile Defense: DOD’s Report Provided Limited Information; Assessment of Acquisition Risks is Optimistic. GAO-14-248R. Washington, D.C.: March 14, 2014.
Missile Defense: Opportunity to Refocus on Strengthening Acquisition Management. GAO-13-432. Washington, D.C.: April 26, 2013.
Missile Defense: Opportunity Exists to Strengthen Acquisitions by Reducing Concurrency. GAO-12-486. Washington, D.C.: April 20, 2012.
Ballistic Missile Defense: Actions Needed to Improve Training Integration and Increase Transparency of Training Resources. GAO-11-625. Washington, D.C.: July 18, 2011.
Missile Defense: Actions Needed to Improve Transparency and Accountability. GAO-11-372. Washington, D.C.: March 24, 2011.
Ballistic Missile Defense: DOD Needs to Address Planning and Implementation Challenges for Future Capabilities in Europe. GAO-11-220. Washington, D.C.: January 26, 2011.
Missile Defense: European Phased Adaptive Approach Acquisitions Face Synchronization, Transparency, and Accountability Challenges. GAO-11-179R. Washington, D.C.: December 21, 2010.
Defense Acquisitions: Missile Defense Program Instability Affects Reliability of Earned Value Management Data. GAO-10-676. Washington, D.C.: July 14, 2010.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-10-388SP. Washington, D.C.: March 30, 2010.
Defense Acquisitions: Missile Defense Transition Provides Opportunity to Strengthen Acquisition Approach. GAO-10-311. Washington, D.C.: February 25, 2010. | Regional BMD constitutes an essential element in deterring enemies from using ballistic missiles and supporting defense commitments to U.S. allies and partners.
DOD's 2010 Ballistic Missile Defense Review Report noted that the United States would pursue phased, tailored, and adaptive approaches to regional BMD in Europe, the Asia-Pacific region, and the Middle East. A provision in the National Defense Authorization Act (NDAA) for Fiscal Year 2014 mandated DOD to submit within 180 days a report to the congressional defense committees on eight elements related to the status and progress of regional BMD programs and efforts. The Joint Explanatory Statement accompanying the NDAA mandated that GAO provide its views on DOD's report. Separately, GAO was requested to provide its results in a written, publicly releasable form. This report assesses the extent to which DOD's report addressed the required reporting elements and provides views on other key information, if any, that DOD could have included in the report. GAO used a scorecard methodology to compare the required reporting elements to the information in DOD's BMD report. Further, GAO reviewed the 2010 Ballistic Missile Defense Review Report, combatant commander integrated priority lists, and other DOD documents and policy, and interviewed DOD officials to gain further insight on DOD's regional BMD efforts. The Department of Defense's (DOD) June 2014 regional ballistic missile defense (BMD) report addressed five of the eight required reporting elements, and partially addressed the remaining three required reporting elements. DOD's report addressed elements relating to a BMD risk assessment, the role that regional missile defenses play in the homeland defense mission, the integration of offensive and defensive capabilities, and two elements on the roles and contributions of allies. DOD's report partially addressed the required reporting elements regarding the alignment of regional approaches to missile defense with combatant command-integrated priorities, the concept of operations for the European Phased Adaptive Approach (EPAA), and the testing and development of key EPAA elements. Additionally, GAO determined that DOD's report did not include key details for some elements that would have benefitted the congressional defense committees' oversight of DOD's regional BMD efforts. Generally accepted research standards for preparing sound and complete defense studies include providing complete, accurate, and relevant information. However, DOD's report does not consistently meet this standard, based on GAO's review. For example, the explanation in DOD's report of the North Atlantic Treaty Organization's transfer of authority process did not include sufficient detail to clearly convey the process. DOD's report also did not include details regarding the combatant commands' requirements, nor did it fully describe issues affecting the testing and development of key regional BMD systems. DOD officials told GAO that the report was intended to address each of the eight required reporting elements concisely, that DOD regularly provides more detailed analysis on some of these topics to Congress via periodic briefings, and that they did not want to provide duplicative information in this report. GAO recognizes that judgment is needed in preparing reports to Congress; however, DOD's report did not include details on key BMD assets and risks to the EPAA schedule, which limits the report's utility to the congressional defense committees in their oversight of DOD's regional BMD programs. Because DOD prepared its report in response to a nonrecurring mandate, GAO is not making recommendations. |
Since 2001, the United States and its NATO partners have been responsible for securing Afghanistan and have led the effort to secure, stabilize, and rebuild the country. In 2010, the United States, NATO, and other coalition partners agreed to transition lead security responsibility for Afghanistan from NATO to the Afghan government by the end of 2014. Specifically, the Afghan government and ISAF—including the United States—agreed to a transition process that emphasizes a shift in ISAF’s role from leading combat missions to advising and assisting the ANSF, resulting in ISAF shifting to a security force assistance mission. Lead security responsibility in Afghanistan is defined as responsibility and accountability for planning and conducting operations within a designated area, with ISAF support as required. At the same time, overall U.S. force levels are planned to draw down over the next year to about 34,000, with additional decisions on the drawdown of remaining U.S. forces yet to be determined. ISAF is a NATO-led mission in Afghanistan established by the United Nations Security Council in December 2001. The ISAF coalition currently consists of 28 NATO nations, including the United States, and 22 partnering nations with forces deployed across Afghanistan. ISAF is divided into six regional commands across Afghanistan, each with a specific geographic area of responsibility—North, East, South, Southwest, West, and the Kabul area (known as Regional Command–Capital). The United States leads three of these commands—East, South, and Southwest. In addition to conducting security operations, ISAF forces have long been training and advising the ANSF both in training centers and at unit locations after they have been formed and fielded. For the U.S. contribution, DOD has used a variety of approaches to provide U.S. forces to carry out the advise-and-assist mission. For example, prior to 2010, the advising mission in Afghanistan was primarily conducted with transition teams. These teams did not exist as units in any of the services’ force structures and were instead composed of company- and field-grade officers and senior non-commissioned officers who were centrally identified and individually selected based on rank and specialty. As we have previously reported, the demand for these leaders created challenges for the services because, among other things, the leaders were generally pulled from other units or commands, which then were left to perform their missions while understaffed. In part as a means of alleviating these challenges, the Army developed the concept of augmenting brigade combat teams with specialized personnel to execute the advising mission, and began deploying these augmented brigades in 2010. In early 2012, based on requests from ISAF as part of its shift to a security force assistance mission, the U.S. Army and Marine Corps began to deploy small teams of advisors with specialized capabilities, referred to as SFA advisor teams, which are located throughout Afghanistan, to work with Afghan army and police units from the headquarters to the battalion level, and advise them in areas such as command and control, intelligence, and logistics. More recently, the Army began tailoring the composition and mission of its brigade combat teams to further focus on advising efforts. U.S. advisor teams are under the command and control of U.S. commanders within ISAF’s regional commands. These commanders have overall responsibility for operations in their geographic area, including setting goals for the advising mission.
ISAF establishes the requirements for advisor teams, including force needs and training requirements. To meet the U.S. share of these requirements, the Army and Marine Corps are responsible for providing advisor personnel, establishing service-specific training requirements, and conducting training prior to deployment. DOD and ISAF have defined the mission and broad goals for advisor teams based on the type of ANSF (e.g., army, police) and the type of unit, from the headquarters to the battalion level. Advisor teams varied in the extent to which their approaches for developing their ANSF counterparts identified activities based on specific end states, objectives, and milestones that are in support of the regional command’s broad goals. The missions for advisor teams for the various types of ANSF units are defined in multiple ISAF and DOD plans, directives, and orders. According to DOD documentation, SFA advisor teams provide training, advising, assisting, and development functions to prepare ANSF units to assume full security responsibility by December 31, 2014. Missions also have been defined for SFA advisor teams based on the type of ANSF unit they advise, specifically: Afghan National Army advisor teams are expected to advise and assist those units, act as liaisons to ISAF units, and support the operational planning and employment of the Afghan unit as part of helping to develop a self-sufficient, competent, and professional unit capable of autonomous operations. Afghan National Police advisor teams are expected to advise those units, act as liaisons to ISAF units, and support the operational planning and employment of the Afghan unit as part of helping to develop a self-sufficient, competent, and professional unit capable of maintaining public order, security, and rule of law. Operational Coordination Center advisor teams are expected to advise those units, act as liaisons to ISAF units, and support the development of a coherent security coordination structure. The regional commands have amplified this guidance for advisor teams by providing key advising goals based on the developmental needs of the ANSF in their region. For example, Regional Command-South identified its top-five advising goals, aimed at strengthening ANSF capabilities such as logistics, countering improvised explosive devices, and medical evacuation. Regional Command-East had a similar set of top-five advising goals. While ISAF and the regional commands have defined the mission and broad goals for the advisor teams, it is largely left to the teams, in coordination with the regional command and brigade commander for their area of operations, to develop their approach for working with their ANSF counterpart units. According to multi-service guidance on advising, in order to successfully exert influence, advisors have an end or goal in mind. Similarly, the Army’s Field Manual for Security Force Assistance states that, in order to be successful, advisors have an end or goal in mind and should establish objectives and milestones that support higher-command plans and can be achieved during their deployment. In addition, advisor teams must balance the priorities of their commands with those of their counterpart units. Specifically, DOD officials emphasized that advisor teams need some flexibility to tailor their approaches to the respective needs of their ANSF counterpart units while still working towards regional command goals.
Advisor teams we spoke with were generally familiar with the broad goals established by ISAF and regional commands, but used various approaches to develop their ANSF counterpart units, which varied in the extent to which they resulted in the identification of activities based on specific objectives or end states that were clearly linked with established goals. Some teams we spoke with had taken the initiative to develop structured approaches that identified objectives or end states and milestones, drawing from the regional command’s broader goals to guide their advising efforts. For example, one team stated they worked directly from the regional commander’s top-five goals, developing a planning process to identify monthly objectives and milestones for each advising area (e.g., personnel, intelligence, logistics) that support these goals, and then regularly assessing where they are in terms of progress towards the commander’s goals and in what areas they should continue to focus. Using this process, the advisor team identified a training need for an ANSF brigade related to the regional commander’s broad goal of developing the ANSF’s counter-improvised explosive device capabilities and arranged for a U.S. Explosive Ordnance Disposal unit to provide this training. In another instance, a logistics advisor team identified a need for its ANSF counterpart to be capable of repairing items such as cranes and fuel distribution equipment to help achieve the regional command’s broad goal of developing a general-level maintenance capability. To achieve this objective, the team created a training program to develop this capability. Another team leader we spoke with stated he developed advising plans based on the regional command’s high-level goals and informed by an assessment of his ANSF counterpart unit, to identify tasks and timelines to train their counterparts on basic skills such as map reading in order to improve their ability to plan and conduct operations. Other advisor teams we met with were familiar with the broad goals for ANSF development and had identified activities to develop their ANSF counterpart units, but used less structured approaches to guide their advising efforts. For example, advisor teams in multiple regional commands stated their approach was to rely on interactions with their ANSF counterparts to identify priorities, using this input to develop activities on an ad hoc basis. Similarly, according to a brigade commander serving as an advisor team leader, his team and other advisor teams from his brigade generally identified development activities in reaction to situations as they arose rather than as part of a longer-term, more structured approach to achieve broad goals. According to several advisor teams, while they received input from various higher headquarters, that input lacked specificity regarding the end states they should be trying to achieve for their ANSF units, leading them to use less structured approaches to guide their efforts. For example, the deputy team leader of an advisor team for a high-level Afghan National Army unit with visibility over the efforts of several advisor teams for subordinate ANSF units stated that while his team was able to develop activities intended to enable his counterpart unit to operate independently, he believed that guidance from the regional command did not clearly define the overall desired end state for the ANSF, which made it difficult to determine where to focus their particular advising efforts.
Similarly, officials responsible for collecting best practices and lessons learned from SFA advisor teams in one regional command said that, in talking with teams, they found a lack of direction for advisor teams from higher headquarters resulted in what they characterized as a collection of good activities conducted by individual teams over time without a synchronized approach driving towards a tangible end state. Without a more structured approach with clear linkages between objectives or end states and development goals for ANSF units, regional commanders cannot be assured that the activities of individual advisor teams are in fact making progress toward established goals. Moreover, having such an approach would help with continuity of effort from one advisor team to the next, since advisor teams typically deploy for 9 months. The Army and Marine Corps have provided the required number of SFA advisor teams to Afghanistan based on theater commanders’ requests. Recognizing that high ranks and skill specialties were required for advisor teams, theater commander guidance allowed for some substitutions when specific ranks or skills were unavailable, which enabled the Army and Marine Corps to provide the appropriate personnel. The Army’s use of brigades to form advisor teams has enabled it to meet requirements but has resulted in leaving large numbers of brigade personnel at their home station locations. To manage these large rear detachments, brigade leadership undertook significant planning to ensure enough stay-behind leadership existed to maintain a sufficient command structure and provide certain training and exercises. In late 2011, ISAF and U.S. Forces–Afghanistan established requirements for coalition and U.S. SFA advisor teams, including specifying the number of teams required, team composition and capabilities, and assignment to ANSF units. Although the numbers of teams have changed over time, according to ISAF, the Army and Marine Corps have provided the required number of SFA advisor teams based on these requests and, as of December 2012, approximately 250 U.S. advisor teams were operating in Afghanistan. SFA advisor teams are generally composed of 9 to 18 advisor personnel—made up of a mix of company- and field-grade officers and senior non-commissioned officers—with specific specialties such as military intelligence, military police, and signal officers. The composition of advisor teams is tailored to match the needs of their ANSF counterpart. For example, teams at higher echelons of the ANSF (e.g., corps or provincial headquarters) have a higher rank requirement for the advisor team leader, and police advisor teams include requirements for military police personnel. According to ISAF, Army, and Marine Corps officials, advisor teams are generally expected to remain with the same ANSF unit for the duration of their approximately 9-month deployments. According to DOD and ISAF officials, the requirement for advisor teams has fluctuated as additional ANSF units have been fielded, and the overall requirement for advisor teams is expected to change as the development of ANSF units progresses. For example, according to ISAF officials, SFA advisor teams currently advise down to the battalion level, but as U.S. forces draw down in Afghanistan and the capability of the ANSF increases, the U.S. advising effort could shift to a brigade-and-higher focus, which could affect the overall number and size of the teams. U.S.
SFA advisor teams began deploying to Afghanistan in early 2012, and the Army and Marine Corps have used a variety of approaches to provide these teams. To meet its requirements for the first set of advisor team deployments, the Army tasked three non-deployed brigades to form the bulk of the advisor teams using personnel from their units, with additional non-deployed units tasked to form the remaining teams. These advisor teams then deployed to Afghanistan and were attached to combat brigades already in theater. More recently, the Army shifted its sourcing approach by tailoring the composition and mission of brigades deploying to Afghanistan to further focus on the SFA mission, and began deploying these SFA brigades (SFABs) in November 2012. According to ISAF officials, SFABs include advisor teams that are primarily created using personnel from within the brigade. According to Army officials, as of January 2013, three SFABs have deployed in place of combat brigades, and at least four more U.S. brigades in Afghanistan have been identified to be replaced by SFABs. According to Army officials, the Army will continue to provide some advisor teams using personnel from non-deployed active and reserve units that will join the remaining combat brigades in Afghanistan. Additionally, planning for the remaining brigades and overall force levels in Afghanistan is ongoing, and by late 2013 all deploying U.S. brigades may be SFABs. To meet the initial deployment of SFA advisor teams beginning in early 2012, the Marine Corps created some teams out of personnel already deployed in Afghanistan and created additional teams using non-deployed personnel generally from the I and II Marine Expeditionary Forces, according to Marine Corps officials. For subsequent deployments of teams, the Marine Corps has created teams using non-deployed personnel from across the Marine Expeditionary Forces that then deploy to Afghanistan as formed teams. The Army and Marine Corps have been able to fill SFA advisor teams, but they continue to face challenges meeting specific rank and skill requirements. In 2011, we reported on challenges the Army was experiencing providing high-ranking personnel with specialized skills for the advising mission in Afghanistan. According to Army and Marine Corps officials, meeting the rank and skill requirements for SFA advisor teams, including those as part of SFABs, continues to present a challenge given the limited availability of such personnel across the services. To help address these challenges, theater commanders, in coordination with the Army and Marine Corps, have outlined a set of substitution guidelines to allow flexibility in the rank and skill requirements. For instance, specific rank requirements can generally be substituted with an individual one rank above or below the requirement. Similarly, there are guidelines for different skills and specialties that may be substituted for one another. For example, a team may have a requirement for a specific type of intelligence officer, but the substitution guidance identified other types of intelligence personnel that could be used to meet this requirement, such as a counterintelligence or signals intelligence analyst.
Army Forces Command officials told us that because the required number of ranks and specialties for SFA advisor teams exceeds the total number of such personnel that exist in a typical brigade, the ability to substitute certain ranks and skills with other available personnel was critical to meeting the requirement for most advisor teams and for all three of the first deploying SFABs. Army officials recognized that substitutions would need to occur both within and among brigades. The following are examples, according to sourcing officials and officials from one of the brigades tasked to provide the first set of advisor teams: While 40 majors were required to fill the specified number of teams, the brigade had only 25 majors on hand. Recognizing this, the Army’s plan called for substituting captains for majors in order to meet the requirement. The requirement for certain intelligence officers exceeded that which existed in the brigade. Therefore, brigade leadership used lower-ranking military intelligence officers or other officers with sufficient related experience. According to Army officials, the rank and skill requirements, as well as the reliance on substitutions, are expected to continue with the use of SFABs. As the Army and Marine Corps began to form the teams, they also worked with their force providers in order to utilize individual augmentees from active and reserve non-deployed units to help meet the rank and skill requirements for SFA advisor teams. For example, an official from a Marine Expeditionary Force responsible for providing many of the first advisor teams stated that the unit used reservists to fill over 130 advisor slots, and the Marine Corps expects to continue to use them to fill subsequent teams. The Army’s sourcing approaches enabled it to meet theater requirements for SFA advisor teams, but resulted in brigades leaving large numbers of personnel at home station locations. For the first set of Army deployments, the three brigades identified to source the bulk of the teams left the majority of their personnel at home station. For example, according to brigade officials, one brigade deployed approximately 370 people to create advisor teams, leaving approximately 3,100 personnel (approximately 90 percent) behind at home station. According to Army officials, SFABs reduce the size of the rear detachments because a larger percentage of the brigade’s personnel are to be deployed, although they recognized SFABs would continue to result in large rear detachments. For example, two of the first SFABs to deploy each left roughly 2,000 personnel at home station. Because the advisor team requirement calls for high numbers of company- and field-grade officers and senior non-commissioned officers, as well as specific skill specialties, staffing the teams required the brigades to deploy a significant portion of their leadership and expertise, including the brigade commanders and many battalion, company, and platoon commanders, for the advisor mission. As a result, according to Army Forces Command officials and officials from two brigades, brigade leadership had to undertake significant planning to ensure that enough stay-behind leadership existed to maintain a sufficient command structure and the unit leadership needed to conduct certain training, such as artillery and other live-fire exercises.
In order to help brigades in this planning, Army Forces Command has issued guidance for the training and employment of rear detachments during advisor team deployments, including missions the force may be assigned to, training expectations, and equipment maintenance responsibilities. For example, one brigade that deployed many of the first set of advisor teams consolidated its rear detachment into smaller numbers of more fully manned platoons to ensure appropriate leadership existed for each platoon. In addition, the brigade leadership developed a training plan for the rear detachment to maintain proficiency in critical tasks while awaiting reintegration of deployed personnel. The Army and Marine Corps have developed standardized predeployment training programs for SFA advisor teams in Afghanistan, but teams varied in the extent to which they had access to mission-specific information prior to deploying that they believed would help them prepare for their specific advising missions. SFA advisor teams take part in a broad set of training activities both at home station and at training centers in the months leading up to their deployment. ISAF has established minimum training requirements for SFA advisor teams from all coalition countries, including the United States. These training requirements include both individual advisor knowledge and skills, such as understanding how to work through an interpreter, and collective team knowledge and skills, such as how the advisor team will assess ANSF unit capabilities and provide force protection and sustainment. ISAF envisions that this training will be conducted using a combination of individual and team-based training. In accordance with these requirements, the Army and Marine Corps have each developed a program of instruction for predeployment training, which generally occurs in three stages. Home-Station Training. Home-station training includes individual and team-level combat skills training provided to all forces deploying to Afghanistan. Typically, SFA advisor teams are formed prior to the beginning of this training. Topics include combat lifesaver training, various weapons and driving qualifications, and countering improvised explosive devices. During this period, teams also begin to gather information regarding their specific advising assignment in order to conduct mission analysis, shape the next two stages of their training, and establish their initial plan for their advising missions. For example, officials at the Joint Readiness Training Center Operations Group, which conducts culminating training exercises for Army advisor teams and SFABs, told us that it is during this time that they begin to work with commanders to design their culminating training exercise. Advisor-Specific Training. Advisor-specific training is focused on language, culture, counterinsurgency, and advisor skills. Army advisor teams generally receive advisor-specific training during an 8-day course provided by the 162nd Infantry Training Brigade. Marine Corps teams receive training at the Advisor Training Cells at their respective Marine Expeditionary Force home stations, as well as the Advisor Training Group at the Marine Corps Air Ground Combat Center. Both the Army and Marine Corps training includes courses such as overviews of Afghan security force institutions, how to use an interpreter, and techniques for building rapport. The training also utilizes role players in practical exercises to simulate engagements with key Afghan civilian and military leaders in different situations. Culminating Training Exercise.
This training includes situational training exercises and a culminating training exercise that integrates ANSF role players into a simulated deployed environment in order to exercise the advisor teams’ ability to advise their ANSF counterpart units. For Army advisor teams, this exercise is incorporated into the culminating training exercise of the brigade under which they will operate in Afghanistan, when possible, and is conducted at the Joint Readiness Training Center at Fort Polk, Louisiana, or other combat training centers. These exercises include training based on the level (e.g., brigade, battalion) and type (e.g., army, police) of the ANSF unit that teams will be advising and their specific areas of responsibility in Afghanistan, individual and team proficiency assessments, and live-fire drills, such as combat patrols. Marine Corps advisor teams receive similar training at the Advisor Training Group, though this training does not include the combat unit with which they will be operating in Afghanistan. The Army, Marine Corps, and ISAF have established mechanisms to gather feedback on predeployment training from advisor teams in Afghanistan in order to update and refine training for the advisor mission. Both the Army and Marine Corps centers for lessons learned have ongoing efforts in Afghanistan to collect observations and best practices for SFA advisor teams. Additionally, the 162nd Infantry Training Brigade employs liaison officers at ISAF and the regional commands, among other places, to collect lessons learned and after-action reports from advisor teams in Afghanistan, which are then incorporated into advisor training. Officials from the 162nd Infantry Training Brigade said that, based in part on this feedback, the advisor training has changed significantly since the first SFA advisor teams began going through the training in January 2012, and that the program of instruction will continue to evolve. For example, officials from two of the first SFA advisor teams told us that the advisor training was too focused on classroom instruction. Officials from the 162nd Infantry Training Brigade said that they had heard similar concerns, and later iterations of SFA advisor team training were updated to provide greater balance between classroom training and practical exercises that use cultural role players. Further, between August 2012 and October 2012, ISAF conducted a survey of U.S. and coalition nation SFA advisor team personnel on predeployment training in order to provide advisor insights to U.S. and NATO training centers and made several recommendations to improve predeployment training. For example, ISAF recommended that advisor teams contact the unit they will be replacing to fine-tune their training in order to meet the challenges they will face upon deployment. ISAF’s minimum training requirements direct advisor teams to conduct mission analysis prior to deployment in order to develop plans for advising their ANSF counterpart unit. Further, the Army’s Field Manual for Security Force Assistance states that an in-depth understanding of the operational environment—including a clear understanding of the theater, population, and the foreign security forces and capabilities with which they are working—is critical to planning and conducting effective SFA.
According to some advisor team officials and ISAF officials tasked with gathering lessons learned from advisor teams and identifying potential challenges, the personalities and capabilities of each ANSF unit and district are unique, and advisor teams need specific information on their ANSF counterpart unit, as well as on the efforts of the advisor teams currently working with the unit, prior to deployment in order to be successful. In addition, some advisors stated that having specific information about the operational environment where teams will be deployed would be beneficial in determining where to place emphasis during training. For example, some advisor teams we spoke with were able to walk to their counterpart unit's headquarters, while other teams had to travel longer distances to accompany their counterpart units. Having this type of specific information about their operating environment could be helpful for advisor teams in tailoring some of their more general combat training at home station. Advisor teams varied in the extent to which they had access to information to help prepare for their specific advising missions prior to deployment. Advisor teams may gain access to this information in a variety of ways. For example, officials from the 162nd Infantry Training Brigade said that they coordinate video teleconferences between advisor teams going through advisor training and deployed advisor teams with the goal that advisor teams are able to talk to the SFA advisor team that they will replace to help the deploying team better understand its specific mission and the unit that it will be advising. Advisor teams can also utilize secure networks to gather mission-specific information. For example, much of the information on advising and general operations in Afghanistan (e.g., daily and weekly update briefs, details of the advisor teams' interactions with ANSF units, and regional command campaign plans) is stored and shared on the Combined Enterprise Regional Information Exchange System-ISAF (CENTRIXS-I) network—a network that is widely used by U.S. and coalition forces in Afghanistan, but with limited access in the United States. Additionally, advisor teams may take part in predeployment site surveys in which commanders take staff members to theater and meet with the units they will be replacing to learn more about the mission they will support. According to the Army Field Manual for Security Force Assistance, the predeployment site survey should, among other things, provide information on the organization, leadership, and capabilities of the foreign unit that will be advised, as well as an overview of the operational area. ISAF minimum training requirements also require that advisor teams conduct predeployment site surveys as part of their SFA mission analysis and planning. We found differences in the extent to which advisor teams were actually able to gain access to mission-specific information throughout their predeployment training. For example, while some SFA advisor teams told us that mission-specific information shared on CENTRIXS-I is beneficial in shaping their predeployment training and mission analysis, we found that advisor teams varied in the extent to which they were able to access this system, and thus the information contained therein, throughout their predeployment training. Some advisor teams had access to CENTRIXS-I at home station.
For example, officials from one brigade that provided SFA advisor teams said that the brigade recognized the value of CENTRIXS-I in gathering specific information from units on the ground for teams to conduct their mission analysis and early planning; the brigade proactively took steps to gain access to the network at home station early in predeployment training and obtained access for its SFA advisor teams 5 months prior to deploying. However, other advisor teams said that they had limited or no access to this network at their home stations, thus limiting the information available to the teams to shape training, conduct mission research, and develop situational awareness before arriving in Afghanistan. Advisor teams are able to access CENTRIXS-I once they arrive at the 162nd Infantry Training Brigade and the Advisor Training Group training sites. However, teams are at these locations for a short time (i.e., less than 30 days) in the mid-to-late stages of training. Advisor teams with limited or no access to CENTRIXS-I at home station may be unable to fully leverage mission-specific information either to (1) shape their training prior to arriving at these locations or to (2) continue drawing on the up-to-date information contained therein to prepare for their missions after they leave the training sites. Advisor teams varied in their ability to send representatives on predeployment site surveys to Afghanistan. Unit commanders and theater commands determine the numbers of personnel that take part in the survey, taking into consideration limitations on the ability of certain locations to provide transportation, housing, and other support. According to an ISAF official, units tasked with the advising mission are encouraged to take some representatives from their advisor teams on these surveys. According to a U.S. Forces–Afghanistan official, there has been at least one recent case where a predeployment site survey team sent to Afghanistan was augmented with additional personnel in order to accommodate the need to visit multiple locations. In contrast, some advisor teams we spoke with said that they did not send representatives from their individual teams on these site surveys, which limited their ability to shape their training and their understanding of the environment in which they would be operating. For example, one advisor team said that it did not know the specifics of the operating environment when conducting home-station training, such as details about security and movement, and that the opportunity to conduct a predeployment site survey would have been helpful for the team's mission preparation. Another unit that was organized into three advisor teams reported that it did not take part in a predeployment site survey and thus faced significant challenges during its first 45 days of deployment because it was unaware that logistic support arrangements for the teams in Afghanistan had not been established. DOD officials acknowledged that increased information prior to deployment would benefit advisor teams, but added that resource constraints are a consideration in determining how to expand access to certain information sources. Nonetheless, without a more complete understanding of the capabilities of the ANSF counterpart units to be advised and the operating environment in which they will be advising prior to deploying, it may take advisor teams more time after deploying to maximize their impact as advisors.
The use of SFA advisor teams to develop and support the ANSF is a key element of the U.S. and ISAF strategy to transition lead security responsibility to Afghanistan while drawing down combat forces. By ensuring that SFA advisor teams have structured approaches with clear linkages between end states, objectives, and milestones that are in support of broad goals for ANSF units, theater commanders can enhance the ability of advisor teams to develop their ANSF counterparts. In addition, this will enable theater commanders to better gauge an ANSF unit's progress toward its broader development goals and facilitate continuity of effort from one advisor team to the next. Lastly, by improving the availability of mission-specific information prior to deployment, the Army and the Marine Corps will ensure that SFA advisor teams have the necessary information on their specific ANSF counterparts and the operational environment to better inform training. Moreover, such information would enhance the ability of advisor teams to prepare for and undertake their efforts immediately upon deployment. To ensure that the activities of individual advisor teams are more clearly linked to ISAF and regional command goals for overall ANSF development, we recommend that the Secretary of Defense, in consultation with the Commander of U.S. Central Command, direct theater commanders in Afghanistan to work with brigade commanders and advisor teams to identify specific end states, objectives, and milestones for developing their ANSF counterparts that are in support of the broad theater goals to guide their advising efforts during their deployment. To enhance the ability of SFA advisor teams to prepare for and execute their mission, we recommend that the Secretary of the Army and the Commandant of the Marine Corps take steps to improve the availability of mission-specific information during predeployment training. Such steps could include:
Expanded access to the data and information contained in CENTRIXS-I; and
Increased opportunities, in coordination with U.S. Central Command, for advisor team leaders to participate in predeployment site surveys with the teams they are expected to replace.
In written comments on a draft of this report, DOD partially concurred with our recommendations. The full text of DOD's written comments is reprinted in appendix II. DOD also provided technical comments, which we incorporated where appropriate. In its comments, DOD partially concurred with our first recommendation that the Secretary of Defense, in consultation with the Commander of U.S. Central Command, direct theater commanders in Afghanistan to work with brigade commanders and advisor teams to identify specific end states, objectives, and milestones for developing their ANSF counterparts that are in support of the broad theater goals to guide their advising efforts during their deployment. Also, DOD provided comments regarding the command relationships and guidance affecting the advisor teams. Specifically, DOD stated that the issue of linking advisor teams with regional commanders and the theater commander to identify specific end states, objectives, and milestones resides within the operational level and not at the strategic level with the Secretary of Defense and U.S. Central Command.
The department further stated that the Commander, International Security Assistance Force (COMISAF), is the theater commander and produces the operation plans (OPLAN) for Afghanistan, which provide the end states, objectives, and milestones for the campaign, including efforts to develop the ANSF and ministerial-level agencies. COMISAF also issues guidance for developing the ANSF and ministerial agencies, to include end states, objectives, and milestones. Further, DOD noted that regional commanders receive their guidance and direction in part through the OPLANs and other guidance issued by COMISAF. The department also stated that brigade commanders, SFABs, and SFA advisor teams are operationally and/or tactically controlled by the regional commanders. DOD stated that guidance from the regional commanders for these subordinate elements should include the guidance provided by COMISAF regarding development of the ANSF. Lastly, DOD stated that individual ANSF elements advised by SFA advisor teams and SFABs have different levels of capabilities and unique circumstances involved in developing those capabilities. Therefore, DOD stated that commanders at the operational and tactical level should have sole responsibility for directing the development of the individual ANSF elements. We agree that it is the responsibility of commanders at the operational and tactical level, particularly regional commanders, to direct SFA advisor teams to develop individual ANSF elements. As we noted in our report, regional commands have overall responsibility for operations in their geographic area, including setting goals for the advising mission. We further noted that the missions for advisor teams are defined in multiple ISAF and DOD plans, directives, and orders and that the regional commands amplify this guidance by providing key advising goals based on the developmental needs of the ANSF in each region. However, we found that it is largely left to advisor teams to develop their approach for working with their ANSF counterpart units and that advisor teams varied in the extent to which their approaches identified activities based upon specific objectives linked to ANSF development goals. Therefore, we recommended that theater commanders in Afghanistan work with brigade commanders and advisor teams to identify specific end states, objectives, and milestones for developing their ANSF counterparts that are in support of the broad theater goals to guide their advising efforts during their deployment. We agree with the department's view that directing the development of the individual ANSF elements should be the sole responsibility of commanders at the operational and tactical level. We believe that our recommendation does not conflict with this principle but rather calls for the Secretary of Defense, in consultation with the Commander of U.S. Central Command, to direct the operational commander to ensure that these actions are taken. Regarding our second recommendation, we recommended that the Secretary of the Army and the Commandant of the Marine Corps take steps to improve the availability of mission-specific information during predeployment training, and provided two examples of such steps for illustrative purposes. DOD commented separately on these examples.
Specifically, with respect to the step calling for expanded access to the data and information contained in CENTRIXS-I, DOD concurred and noted that actions had been taken to install CENTRIXS-I kiosks at U.S. bases and overseas locations and that plans were underway to install additional kiosks. Also, DOD noted that while CENTRIXS-I is a specific capability, it appears that the intent of our recommendation is to expand information flow by any means available, and DOD suggested that we rephrase the first step to read: "Expand access to secure networks in order to gather data and information." We agree that the intent of our recommendation is to expand information flow and to recognize, as noted in our report, that other information sources exist beyond CENTRIXS-I. Based on our discussions with command and advisor team personnel, CENTRIXS-I was cited as an important information source, and therefore we cited it as an example in our report. We believe that, as currently worded, our recommendation provides flexibility for the department to determine a range of options for improving the availability of information to advisor teams. With respect to the step calling for increased opportunities for advisor team leaders to participate in predeployment site surveys, DOD partially concurred. The department stated that advisor teams and the leadership of brigades must collaborate and use the site survey, as well as the brigade's intelligence infrastructure, to support the teams in gaining situational awareness. DOD further noted that space and logistical constraints may limit participation in a brigade's site survey. Given the critical nature of the SFA advisor team mission, DOD noted that team leaders should be given priority to participate in a predeployment site survey, but that a balance must be struck given the comprehensive nature of the mission in Afghanistan. Additionally, the department stated that while the Secretary of the Army and the Commandant of the Marine Corps can explore timing opportunities for advisor team leaders to participate in predeployment site surveys, the Afghanistan theater of operations has responsibility for ultimate approval of a site-survey visit request. As a result, the department recommended that we rephrase the second step to include the wording "in coordination with U.S. Central Command." We agree that various factors can affect the composition of the personnel participating in the site surveys and that the theater of operations has responsibility to approve visit requests. Our report specifically notes that unit commanders and theater commands determine the numbers of personnel that take part in the predeployment site survey, taking into consideration limitations on the ability of certain locations to provide transportation, housing, and other support. Based on DOD's comments, we modified the text of our second step as DOD suggested. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretary of Defense; the Chairman of the Joint Chiefs of Staff; the Secretary of the Army; the Commandant of the Marine Corps; and the Commander of U.S. Central Command. In addition, the report will be available on our website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-9619 or pickups@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
Key contributors to this report are listed in appendix III. To determine the extent to which the Department of Defense (DOD), in conjunction with the International Security Assistance Force (ISAF), has defined Security Force Assistance (SFA) advisor team missions, goals, and objectives, we reviewed doctrine and guidance from the Army, Marine Corps, and theater commanders, including the Army Field Manual 3-07.1, Security Force Assistance, and the ISAF SFA Concept and Implementation Guidance. We also examined key planning documents, such as operational plans and orders, theater commanders' requests for forces, and select advisor team mission briefs and after-action reports. Additionally, we interviewed officials in the United States from the Office of the Secretary of Defense, the Department of the Army, and Headquarters Marine Corps, as well as officials in Afghanistan from ISAF, ISAF Joint Command, the regional commands, and U.S. Army and Marine Corps advisor teams. To determine the extent to which the Army and Marine Corps have been able to provide SFA advisor teams, we reviewed documents such as theater and combatant commanders' requests for forces that establish personnel requirements for SFA advisor teams and Army and Marine Corps sourcing documents, including execution orders and other manning guidance. We also examined ISAF, ISAF Joint Command, and Army and Marine Corps documents detailing the structure and composition of the SFA advisor teams, including the ISAF SFA Concept and Implementation Guidance, theater commander operational and fragmentary orders, and unit and advisor team briefings. In addition to the officials mentioned above, we also interviewed officials in the United States from Army Forces Command, Marine Corps Central Command, 1st Marine Expeditionary Force, and U.S. Central Command; officials from Army brigades that provided SFA advisor teams; and U.S. Army and Marine Corps advisor team personnel in the United States and Afghanistan. To determine the extent to which the Army and Marine Corps have developed programs to train SFA advisor teams for their specific missions in Afghanistan, we reviewed theater commanders' and service training requirements for SFA advisor teams, such as U.S. Central Command theater training requirements, ISAF minimum training requirements for SFA advisor teams, and Army and Marine Corps training requirements for SFA advisor teams. We also examined documents detailing Army and Marine Corps advisor training programs, such as concept briefs and curriculum documents from the 162nd Infantry Training Brigade, the Joint Readiness Training Center, the Marine Corps Advisor Training Group, and the Marine Corps Advisor Training Cell. We also reviewed after-action reports and lessons-learned documents from SFA advisor teams. Additionally, we interviewed officials from the Army 162nd Infantry Training Brigade, the Joint Readiness Training Center, the 1st Marine Expeditionary Force Advisor Training Cell, and the Marine Corps Advisor Training Group, as well as U.S. Army and Marine Corps advisor personnel conducting training in the United States and deployed in Afghanistan and officials from those organizations mentioned earlier. We visited or contacted officials from the following organizations in the United States and Afghanistan during our review:
DOD Organizations in the United States
Office of the Secretary of Defense, Arlington, Virginia
U.S. Central Command, Tampa, Florida
U.S. Army
Department of the Army Headquarters, Arlington, Virginia
U.S. Army Forces Command, Fort Bragg, North Carolina
162nd Infantry Training Brigade, Fort Polk, Louisiana
Joint Readiness Training Center, Fort Polk, Louisiana
101st Airborne Division, Fort Campbell, Kentucky
Headquarters, Marine Corps, Arlington, Virginia
Marine Corps Central Command, Tampa, Florida
1st Marine Expeditionary Force, including its Advisor Training Cell
Advisor Training Group, Marine Corps Air Ground Combat Center
DOD and International Entities in Afghanistan
North Atlantic Treaty Organization (NATO) entities, including ISAF, the ISAF Commander's Advisory and Assistance Team, and ISAF Joint Command, Kabul, Afghanistan
NATO Training Mission-Afghanistan, Kabul, Afghanistan
Regional Command headquarters and staff:
Regional Command–East (Commanded by 1st Infantry Division, U.S. Army), Bagram Air Field, Afghanistan
Regional Command–South (Commanded by 3rd Infantry Division, U.S. Army), Kandahar Air Field, Afghanistan
Regional Command–Southwest (Commanded by 1st Marine Expeditionary Force (Fwd), U.S. Marine Corps), Camp Leatherneck, Afghanistan
U.S. Forces–Afghanistan, Kabul, Afghanistan
U.S. Army and Marine Corps Units, Personnel, and Advisor Teams deployed in Afghanistan:
4th Brigade, 4th Infantry Division, U.S. Army
2nd Stryker Brigade, 2nd Infantry Division, U.S. Army
162nd Infantry Training Brigade training liaison officers
23 SFA advisor teams in Afghanistan, including the following:
7 Army advisor teams in Regional Command–East
10 Army advisor teams in Regional Command–South
5 Marine Corps advisor teams in Regional Command–Southwest
1 Army advisor team in Regional Command–West
As part of this review, we selected an illustrative, non-generalizable sample of deployed U.S. Army and Marine Corps SFA advisor teams in Afghanistan. We worked with theater commands in Afghanistan to identify and meet with a selection of advisor teams that included both Army and Marine Corps advisor teams, advisor teams operating in different regional commands, and advisor teams assigned to various types (e.g., army, police, and operational coordination center) and levels (e.g., corps, brigade, and battalion) of the ANSF. Ultimately, we met with 23 deployed U.S. advisor teams in Afghanistan operating in four different regional commands' areas of operations—18 Army teams and 5 Marine Corps teams. We conducted this performance audit from June 2012 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, James A. Reynolds, Assistant Director; Virginia Chanley; Carole Coffey; Grace Coleman; Mark Dowling; Kasea Hamar; Marcus Oliver; Luis Rodriguez; and Sally Williamson made key contributions to this report.
Building Partner Capacity: Key Practices to Effectively Manage Department of Defense Efforts to Promote Security Cooperation. GAO-13-335T. Washington, D.C.: February 14, 2013.
Afghanistan: Key Oversight Issues. GAO-13-218SP. Washington, D.C.: February 11, 2013.
Afghanistan Security: Security Transition. GAO-12-598C. Washington, D.C.: September 11, 2012.
Observations on U.S. Military Capabilities to Support Transition of Lead Security Responsibility to Afghan National Security Forces. GAO-12-734C. Washington, D.C.: August 3, 2012.
Afghanistan Security: Long-standing Challenges May Affect Progress and Sustainment of Afghan National Security Forces. GAO-12-951T. Washington, D.C.: July 24, 2012.
Interim Results on U.S.-NATO Efforts to Transition Lead Security Responsibility to Afghan Forces. GAO-12-607C. Washington, D.C.: May 18, 2012.
Security Force Assistance: Additional Actions Needed to Guide Geographic Combatant Command and Service Efforts. GAO-12-556. Washington, D.C.: May 10, 2012.
Iraq and Afghanistan: Actions Needed to Enhance the Ability of Army Brigades to Support the Advising Mission. GAO-11-760. Washington, D.C.: August 2, 2011.

ISAF's mission in Afghanistan has shifted from a combat role to focus more on preparing ANSF units to assume lead security responsibility by the end of 2014. A key element in advising and assisting the ANSF is SFA advisor teams, provided by the U.S. Army and Marine Corps. A House Armed Services Committee report accompanying its version of the Fiscal Year 2013 National Defense Authorization Act directed GAO to review DOD's establishment and use of SFA advisor teams. Specifically, GAO evaluated the extent to which (1) DOD, in conjunction with ISAF, has defined SFA advisor team missions, goals, and objectives; (2) the Army and Marine Corps have been able to provide teams; and (3) the Army and Marine Corps have developed programs to train teams for their specific missions. GAO reviewed doctrine and guidance, analyzed advisor requirements, reviewed training curricula, and interviewed Army, Marine Corps, theater command, and SFA advisor team officials in the U.S. and Afghanistan. DOD and the International Security Assistance Force (ISAF) have defined the mission and broad goals for Security Force Assistance (SFA) advisor teams; however, teams varied in the extent to which their approaches for developing their Afghan National Security Force (ANSF) units identified activities based on specific objectives or end states that were clearly linked with established goals. SFA guidance states that to be successful, advisors must have an end or goal in mind and establish objectives that support higher-command plans. Theater commanders have outlined goals aimed at strengthening specific capabilities such as logistics, and it is largely left to the teams to then develop their approach for working with their counterparts. GAO found some advisor teams had developed structured advising approaches drawing from these goals, such as identifying monthly objectives and milestones for their team. Other teams GAO met with used less structured approaches, such as relying on interactions with ANSF counterparts to identify priorities and using this input to develop activities on an ad hoc basis, rather than as part of a longer-term, more structured approach to achieve broad goals. Officials from several teams stated that the guidance they received lacked specificity regarding desired end states for the development of their ANSF counterpart units. Without a more structured approach with clear linkages between end states, objectives, and milestones that are in support of broad goals for ANSF units, theater commanders cannot be assured that the advisor team activities are making progress toward these goals. The Army and Marine Corps have been able to fill requests for SFA advisor teams, using various approaches such as tasking non-deployed brigades to form advisor teams or creating teams using personnel already deployed in Afghanistan.
According to Army and Marine Corps officials, the ability to substitute an individual at one rank above or below the request has helped the services meet rank and skill requirements. The Army's reliance on brigades to provide a portion of their personnel to form advisor teams has enabled the service to meet requirements but has resulted in large numbers of personnel remaining at the brigades' home stations. To manage these large rear detachments, brigades undertook significant planning to ensure that enough stay-behind leadership existed to maintain a sufficient command structure and provide certain training. The Army and Marine Corps have developed training programs for SFA advisor teams, but teams varied in the extent to which they had specific information to help prepare them for their mission prior to deployment. SFA guidance states that an in-depth understanding of the operational environment and of foreign security force capabilities is critical to planning and conducting effective SFA. Advisor teams may access such information from a variety of sources, such as conducting video teleconferences with the teams they will replace, using secure networks to gather information, or sending personnel on predeployment site surveys, although teams varied in the extent to which they were actually able to gain access to these sources. For example, GAO found that while teams had access to a certain secure network at training sites, only some had access at home station, enabling them to shape their training and mission analysis earlier in predeployment training or after training but prior to deploying. Having limited access to this information prior to arriving in Afghanistan may result in advisor teams needing more time after deploying to maximize their impact as advisors. GAO recommends that theater commanders take steps to work with brigade commanders and advisor teams to identify end states, objectives, and milestones for the development of their ANSF counterpart units in support of the regional commands' broad goals, and that the Army and Marine Corps improve the availability of mission-specific information prior to advisor teams' deployment. DOD partially concurred with GAO's recommendations and identified actions to further prepare SFA advisor teams for their missions.
The Proliferation Security Initiative (PSI) is a multinational effort to prevent the trafficking of weapons of mass destruction (WMD), their delivery systems, and related materials to and from states and nonstate actors of proliferation concern. The PSI has no formal organization or bureaucracy. U.S. agencies are involved in the PSI as a set of activities rather than a program. PSI encourages states to work together to develop a broad range of legal, diplomatic, economic, military, law enforcement, and other capabilities to prevent WMD-related transfers to states and nonstate actors of proliferation concern. International participation is voluntary, and there are no binding treaties governing those who choose to participate. Countries supporting PSI are expected to endorse the PSI principles, embodied in six broad goals in the September 2003 Statement of Interdiction Principles (see app. II), through a voluntary, nonbinding "political" commitment to those principles, and to participate voluntarily in PSI activities according to their own capabilities. According to the principles, PSI participants use existing national and international authorities to put an end to WMD-related trafficking and take steps to strengthen those authorities, as necessary. The U.S. government's PSI efforts involve three broad activities: multilateral PSI planning meetings (referred to as Operational Expert Group meetings), participation in PSI exercises, and other outreach efforts such as workshops and conferences. According to State, at multilateral PSI planning meetings, military, law enforcement, intelligence, legal, and diplomatic experts from the United States and 19 other PSI countries meet to explore and consider operational ways to enhance the WMD interdiction capabilities of PSI participants, build support for the initiative, develop operational concepts, organize PSI exercises, and share information about national legal authorities. The policy office in the Office of the Secretary of Defense heads the U.S. delegation to these multilateral meetings. PSI exercises vary in size and complexity, and some involve military personnel and assets from participating PSI countries. Some exercises do not involve any military assets but instead examine the use of law enforcement or customs authorities to stop WMD proliferation. Other exercises are "tabletop" exercises or simulations, which explore scenarios and determine solutions for hypothetical land, air, or sea interdictions. Among the most visible PSI exercises are those that combine a tabletop and a live interdiction exercise using military assets from multiple PSI countries, such as practicing the tracking and boarding of a target ship. Outreach efforts include workshops, conferences, and other meetings that relevant U.S. officials said they engage in to support PSI goals, as well as bilateral PSI shipboarding agreements that the United States concludes with other states. The administration has not issued the directive called for by a sense of Congress provision in the Implementing Recommendations of the 9/11 Commission Act of 2007 (the law), which would direct U.S. agencies to take actions to improve PSI activities, such as establishing clear structures. In addition, the administration has not submitted a PSI budget report for fiscal year 2009 detailing PSI-related expenditures in the past 3 fiscal years and a plan for the next 3 years. In July 2008, the administration submitted to Congress a PSI implementation report that was required by law to be issued in February 2008. The administration has not issued a directive to U.S.
agencies that perform PSI functions to take actions to expand and strengthen PSI, as called for by a sense of Congress provision in the Implementing Recommendations of the 9/11 Commission Act of 2007. Multiple U.S. agencies, including State, DOD, and law enforcement agencies such as U.S. Customs and Border Protection (CBP) and the Federal Bureau of Investigation (FBI), perform PSI-related activities for the United States. Section 1821(a) of Pub. L. No. 110-53 contains a sense of Congress that a presidential directive should be issued to direct these agencies to take actions such as establishing clear PSI structures, incorporating a PSI budget request in each agency's fiscal year budget request, and providing other resources necessary to achieve better performance of U.S. PSI-related activities. The administration, in its implementation report to Congress in July 2008, asserted that it is unnecessary to issue a directive for PSI. The administration believes that an existing WMD interdiction process, as documented in an 8-page 2002 National Security Presidential Directive, addresses the relevant issues that would be covered under a PSI directive. The existing WMD interdiction process covers how U.S. agencies should coordinate U.S. government efforts to conduct WMD interdictions. However, this process predates the creation of PSI and does not cover U.S. agencies' involvement in three broad PSI activities: multilateral planning meetings, exercises, and other outreach efforts. According to the administration, the President launched PSI in 2003 because of the recognition that stopping WMD proliferation is a task the United States cannot accomplish by itself. U.S. involvement in PSI activities, while complementing U.S. agencies' participation in WMD interdictions, is focused on the diplomatic and educational outreach efforts of the U.S. government to other countries to strengthen their interdiction capabilities and efforts. The administration has not submitted a PSI joint budget report for fiscal year 2009, as required by the law. Specifically, the law required the Secretaries of State and Defense to submit an unclassified comprehensive joint budget report to Congress in each year for which the President submits a PSI budget request, with the first report due in February 2008. The joint budget report should contain the following:
A 3-year plan, beginning with the fiscal year for which the budget is requested, specifying the amount of funding and other resources the United States would provide for PSI-related activities and the purposes for such funding and resources over the term of the plan.
For the 2008 report, a description of the PSI-related activities carried out during the 3 fiscal years preceding the year of the report, and for 2009 and each year thereafter, a description of PSI-related activities carried out during the fiscal year preceding the year of the report.
Other information that the Secretaries of State and Defense determine should be included to keep Congress fully informed of PSI operations and activities.
Agency officials stated that they were in the process of preparing the budget report, but they did not provide an estimated completion date. The administration issued a required PSI implementation report to Congress in July 2008, 5 months after the mandated issuance date of February 2008. In addition, the report does not fully specify the steps taken to implement GAO's previous recommendations or other provisions of the law.
The law required the administration to issue an implementation report to Congress describing the steps it had taken to implement the recommendations contained in our classified September 2006 report and the progress it had made toward implementing the other actions contained in the sense of Congress provisions of the law. In our September 2006 report, we made two recommendations. First, we recommended that the administration better organize its efforts for performing PSI activities, including establishing clear PSI policies and procedures and indicators to measure the results of PSI activities. Second, we recommended that the administration develop a strategy to work with PSI-participating countries to resolve interdiction issues. The agencies did not concur with our recommendations. Their reasons are discussed in our classified report. The administration's 2008 implementation report reiterates the agencies' nonconcurrence with our prior recommendations. While the implementation report primarily described the administration's activities with the 19 other leading countries that attend the multilateral PSI planning meetings, it did not specify the steps taken to develop a comprehensive strategy for resolving interdiction issues with PSI-participating countries. Also, under a sense of Congress provision in the law, the administration is called upon to issue a PSI directive, increase cooperation with all countries, and increase coordination and cooperation with PSI-participating countries. The implementation report did not fully specify the steps taken to implement these other provisions of the law. The report stated that the administration did not consider it necessary to issue a PSI directive because it believes that an existing WMD interdiction process already addresses the relevant issues. However, this existing WMD interdiction process is not responsive to the provisions of the Implementing Recommendations of the 9/11 Commission Act of 2007. As previously noted, it predates the creation of PSI and does not cover U.S. agencies' involvement in three broad PSI activities: multilateral planning meetings, exercises, and other outreach efforts. Compared with other agencies, such as State and the law enforcement agencies, DOD has taken more steps to address some of the law's provisions, such as establishing clearer PSI policies and procedures, structures, and budgets. State and U.S. law enforcement agencies do not all have the policies, procedures, or budgets in place to facilitate their participation in PSI activities, despite the need for greater involvement of U.S. law enforcement agencies to address PSI law enforcement issues. Furthermore, U.S. agencies have not established performance indicators to measure the results of PSI activities. DOD has taken some steps to establish PSI policies and procedures for U.S. military support to PSI, specifically by encouraging Combatant Commands (COCOM) to incorporate PSI components into existing DOD exercises when resources or mission requirements permit. However, uncertainties remain about how to incorporate law enforcement agencies into PSI exercises and track PSI expenditures. Consistent with internal controls, establishing clear PSI policies and procedures will help the agencies better organize their PSI activities. COCOMs generally plan, implement, and pay for military exercises in their areas of responsibility.
According to agency officials, in the past, DOD Joint Staff encouraged the COCOMs to implement PSI exercises in addition to their scheduled standard DOD exercise program. As a result, the financial and logistical pressures of planning and implementing PSI exercises outside their standard exercise program discouraged COCOM participation in PSI exercises. In March 2007, DOD Joint Staff revised its guidance to direct COCOMs to leverage the staff, assets, and resources of the existing DOD exercise program in support of PSI exercises. The Joint Staff guidance is the primary document setting forth PSI policy and provides procedures, including roles and responsibilities, for the planning and execution of U.S. military support to PSI. The guidance encourages COCOMs to put a PSI component into existing DOD exercises and establishes a small office that will assist COCOMs in planning and executing a PSI component. According to agency officials, COCOMs generally plan to include PSI components, such as PSI-focused interdictions and boardings, in their existing multinational exercises that regularly practice these activities and intend to increase the complexity of PSI components in the future. For example, since 2006, Southern Command has included a PSI component in its multinational military exercise designed to defend the Panama Canal against a terrorist-based threat. Agency officials stated that there is no significant cost for including an additional PSI interdiction scenario. This strategy helps relieve COCOMs of developing and paying for a stand-alone PSI exercise with their operational funds and, therefore, allows COCOMs to exercise PSI objectives more frequently. However, placing a PSI component in a strictly military exercise does not allow COCOMs to exercise law enforcement issues and interagency coordination. To address these issues, COCOMs can plan stand-alone PSI exercises and computer-based or gaming exercises. In one case, a COCOM is planning a stand-alone PSI exercise that will address law enforcement issues, such as seizure and disposal of cargo, and interagency participation. DOD officials stated that the department also plans to examine these and other law enforcement concerns in greater detail through gaming and simulation exercises. In February 2008, DOD conducted such a simulation using a U.S. shipboarding agreement with Malta. In June 2007, DOD sponsored a PSI game at the Naval War College to test national interagency processes to interdict WMD-related materials and to address post-interdiction issues, such as disposition of seized cargo and prosecution of proliferators. Although COCOM officials generally report having clear roles and responsibilities in incorporating PSI components, they lack guidance on how to incorporate law enforcement issues into military exercises and track PSI expenditures. The revised Joint Staff guidance does not clearly address some areas of COCOM responsibility. For example, to facilitate interagency involvement, the revised Joint Staff guidance encourages COCOMs to include law enforcement agencies in exercise planning, but the guidance does not provide specifics on how to liaise with law enforcement agencies. Some COCOM officials stated that they need clear guidance on how to exercise the disposition of cargo and other law enforcement issues. Direct coordination with either domestic or foreign law enforcement agencies is outside normal COCOM military functions.
In one case, a PSI exercise was hosted by foreign law enforcement agencies, but a DOD official stated that the department did not have clear guidance on how to coordinate U.S. military participation with U.S. and foreign law enforcement agencies. Also, Joint Staff guidance calls upon COCOMs to track PSI expenditures, personnel, and military assets used in support of PSI activities. However, some COCOM officials stated that they typically do not track these types of expenditures, except for PSI-related travel costs for COCOM staff. For example, while COCOMs may submit to Joint Staff the costs for travel to exercise planning conferences or a PSI exercise site, as well as travel cost estimates for future activities, they typically do not submit other costs expended on PSI stand-alone exercises or PSI components of existing DOD exercises. DOD has structures in place at the Office of the Secretary of Defense (OSD), the Joint Staff, and the COCOMs to coordinate its involvement in PSI activities. Within OSD, the Deputy Assistant Secretary of Defense for Counternarcotics, Counterproliferation, and Global Threats leads the U.S. interagency delegation to multilateral PSI planning meetings and coordinates with Joint Staff on U.S. participation in PSI-related live and tabletop exercises. Joint Staff assists with exercise planning and provides COCOMs with policies and procedures to direct their participation in PSI activities. Joint Staff also can provide COCOMs with information gathered at multilateral PSI planning meetings to keep them informed on PSI-related developments. COCOMs plan, implement, and participate in PSI stand-alone exercises or existing DOD exercises with PSI components based on their mission priorities and available resources. DOD also has established an office to further support COCOM involvement in PSI exercises and produce guidance on how to achieve this goal. The March 2007 Joint Staff guidance directed Strategic Command to develop a "PSI Support Cell" that educates COCOMs regarding the process of putting a PSI component into an existing DOD exercise and helps develop exercise scenarios that meet objectives developed at multilateral PSI planning meetings. COCOM officials reported that they have collaborated with the cell to incorporate PSI components into two existing DOD exercises and that, in one case, this collaboration improved the exercise's sophistication. COCOM officials also reported that they use the cell's secure Web portal, which integrates information for planning and implementing PSI exercises, such as scenarios and lessons learned from previous PSI exercises. The PSI support cell is drafting an exercise planning handbook that will detail guidelines and best practices for use by COCOMs in designing and conducting multilateral PSI exercises. DOD also has created public affairs guidance to publicize exercises and other PSI activities in U.S. and international media. OSD established an interagency working group that sets priorities for U.S. agencies involved in multilateral PSI planning meetings. This interagency working group leverages the capabilities and resources of U.S. agencies participating in PSI activities. Through this working group, OSD provides input to the host of the multilateral meeting on the agenda and determines which agencies will participate in the U.S. delegation. Before the multilateral PSI planning meeting, OSD ensures that the U.S. delegation coordinates and cooperates to reach a consensus on PSI-related issues and resolves any disagreements. OSD requests relevant U.S.
agencies to submit briefings on agenda topics and circulates them to staff involved in PSI to receive feedback before clearing them for presentation at the multilateral meeting. After the multilateral meeting, OSD also oversees the process of delegating tasks to relevant U.S. agencies and keeps track of their progress. Agency officials reported that this informal interagency working group is valuable because it is a regular channel for exchanging information about PSI and setting priorities identified at multinational PSI planning meetings among all U.S. agencies that support PSI activities. DOD has established an annual budget to offset COCOM costs of adding a PSI component into existing DOD exercises and other PSI-related expenses. However, COCOM staff responsible for arranging PSI exercises stated that this funding level is inadequate to support stand-alone PSI exercises. DOD has created an $800,000 annual budget (starting fiscal year 2008) that can be used by COCOMs for a variety of PSI-related activities, including upgrading equipment used in interdictions and engaging subject matter experts. Some COCOMs stated that this funding helped them attend multilateral PSI planning meetings, exercise planning conferences, and other PSI events. These funds are not available, however, to other U.S. agencies to host PSI events, such as PSI workshops or other outreach events, or to cover any foreign country's costs to participate in PSI activities. Some COCOM officials responsible for arranging PSI exercises stated that the $800,000, which DOD has established out of operations and maintenance funds, is sufficient to fund less-expensive PSI activities, such as adding PSI components into existing DOD exercises and hosting computer-simulated games or tabletop exercises. However, this funding is inadequate to cover the costs of stand-alone PSI exercises or large exercise planning conferences, according to these officials. For example, one COCOM reported that it will need to request additional funds from DOD or find additional operational funds to host a stand-alone PSI exercise in the next 2 years. Otherwise, the COCOM will have to reduce the scope of the exercise. Although State has an existing structure, it has not established written policies and procedures or developed a budget to facilitate its participation in PSI activities. State placed responsibility for PSI in the Office of Counter Proliferation Initiatives (CPI) within the Bureau of International Security and Nonproliferation (ISN). CPI handles a number of WMD and related issues, in addition to PSI, and is primarily involved in PSI's diplomatic outreach. Aside from a mission statement that describes CPI's roles in PSI activities, State has not created policies or procedures, consistent with internal controls, regarding PSI-related activities. Also, State has not established a separate funding line for PSI in its annual budget but uses operational funds to travel to PSI activities. State stated that its operating funds are sufficient for its officials' involvement in PSI activities, and it will continue to evaluate any funding requests for PSI in accordance with established department budget procedures. Although relevant law enforcement agencies such as CBP, FBI, and the Coast Guard have some basic structures in place, only CBP has written policies and procedures, and none has established a PSI funding line in its annual budget to facilitate participation in PSI activities.
CBP’s Office of International Affairs (INA) has the programmatic lead for the agency’s contributions to PSI. Several personnel from other CBP offices coordinate on legal, intelligence, and operational issues to facilitate support of PSI activities. CBP has issued a PSI directive specifying roles and responsibilities of INA and related program offices. CBP also created an implementation plan that establishes the agency’s leadership role among law enforcement agencies in PSI and specifies strategies to achieve this and other PSI-related goals, including participating in PSI exercises and hosting trainings and workshops. CBP has a limited budget, used mostly for travel to PSI multilateral meetings from existing agency operational funds, but budget constraints could limit the extent of CBP’s participation in PSI activities. According to agency officials, CBP’s internal budget for travel to multilateral PSI planning meetings and exercises was cut from about $100,000 in fiscal year 2007 to about $50,000 in fiscal year 2008. CBP officials stated that additional funds may be needed to host exercises or workshops, or aid CBP’s outreach to industry, as stated in the goals of its implementation plan. FBI has delegated its PSI responsibility to the Counter Proliferation Operations Unit within the WMD directorate and has one staff member dedicated part-time to PSI activities. However, this unit has not created policies and procedures for PSI-related activities. Coast Guard participates in multilateral PSI meetings and exercises through its Office of Law Enforcement, Operations Law Division, and Office of Counterterrorism and Defense Operations. The Office of Law Enforcement and the Operations Law Division also work with State to arrange bilateral PSI shipboarding agreements to conduct interdictions at sea. However, the Coast Guard also has not established policies and procedures to guide its involvement in PSI activities. The FBI has budgeted $40,000 to support staff travel costs to PSI meetings and exercises for fiscal year 2008 but has generally been funding PSI workshops and training exercises on an ad hoc basis. Agency officials stated that additional funding would be needed to host exercises or workshops. Also, the FBI made a special request for a fiscal year 2008 Global War on Terror (GWOT) grant of about $700,000 to fund training for some PSI countries on how to enhance their national interagency decision- making processes and WMD interdiction capabilities. However, FBI officials noted that this type of grant will probably not be available for PSI activities next fiscal year. The Coast Guard has not established a PSI funding line and uses operational funds to travel to PSI activities. PSI exercises, multilateral PSI planning meetings, and workshops are increasingly focused on law enforcement issues, including customs enforcement, and legal authorities to detain and dispose of cargo. Agency officials said that law enforcement agencies are key participants in PSI activities since shipboardings and cargo inspections are conducted by those agencies. For example, CBP and Coast Guard assisted New Zealand with developing a PSI exercise hosted by New Zealand in September 2008. According to agency officials, this was the first live PSI exercise mostly focused on law enforcement issues. Agency officials stated that law enforcement agencies of other countries, instead of their militaries, are increasingly participating in PSI exercises. 
According to agency officials, it can be challenging to find countries willing to exercise PSI law enforcement issues with the U.S. military in an existing DOD exercise. The constitutions or political considerations of some countries preclude their militaries' involvement in exercises with a law enforcement component. For example, one COCOM planned to add a PSI component into an existing DOD military exercise, but the foreign country participants refused to allow such a component to be added. According to COCOM officials, the foreign country participants said a PSI component should be part of a law enforcement exercise with law enforcement agencies; these countries' military and law enforcement agencies cannot exercise together. While the COCOMs assess the extent to which they meet the goals of their mission to combat WMD, they do not make the same kind of assessments for PSI activities. None of the agencies participating in PSI activities has established performance indicators to measure the results of their activities. We previously recommended in our 2006 report that DOD and State develop performance indicators to measure PSI results. A good internal control environment calls for agencies to create the means to monitor and evaluate their efforts to enable them to identify areas needing improvement. Further, a good internal control environment requires both ongoing monitoring of activities and separate evaluations of completed activities, and should assess the quality of performance over time. Without establishing and monitoring performance indicators, it will be difficult for policymakers to objectively assess the relevant U.S. agencies' contributions to PSI activities over time. State officials stated that they measure PSI progress by the number of endorsing PSI countries, the number and complexity of PSI exercises around the world, and the number of PSI shipboarding agreements. However, it is difficult to attribute these high-level outcomes to the PSI activities of U.S. agencies because these outcomes are dependent on the actions of other governments as well. CBP officials stated that the agency designed its PSI implementation plan to guide its participation in PSI. The plan establishes goals and targets related to each goal. Although the plan indicates which goals have been completed and which are ongoing, the document has not been updated since June 2006. In addition, CBP has not established performance indicators for its involvement in PSI activities. U.S. agencies have made efforts to increase cooperation and coordination with PSI countries by working with the 19 other leading PSI countries at multilateral PSI planning meetings; however, U.S. agencies have not built relationships in the same way with their counterparts from the more than 70 PSI countries that are not invited to these meetings. U.S. agencies also have made efforts to increase cooperation and coordination with PSI countries through exercises and other outreach activities, but the more than 70 PSI countries that are not invited to attend multilateral meetings are not often involved. State and DOD have not developed a written strategy to resolve interdiction issues, as we previously recommended. Agency officials stated that the involvement of the U.S. delegation at the multilateral meetings is part of an attempt to resolve these issues. U.S.
agencies have made efforts to increase cooperation and coordination with PSI countries by working with the 19 other leading PSI countries at multilateral PSI planning meetings; however, U.S. agencies have not built and expanded relationships in the same way with their counterparts from the more than 70 PSI countries who are not invited to attend these meetings. According to DOD, multilateral PSI planning meetings are held three to four times annually, with delegations from the 20 leading PSI countries (including the United States) meeting to consider ways to enhance the WMD interdiction capabilities of PSI participants. At the meetings, the delegations also consider ways to build support for PSI, share ideas to strengthen legal authorities to interdict, and discuss hosting and participating in PSI exercises. Each of the 20 leading PSI countries sends a delegation to the multilateral PSI planning meetings; the Office of the Secretary of Defense heads the U.S. delegation. According to agency officials, the multilateral PSI planning meetings themselves have no compliance mechanisms. However, by actively engaging in bilateral meetings, the U.S. delegation is able to reach bilateral agreements with leading PSI countries to take certain actions to support PSI, such as hosting a PSI exercise. Before or during the multilateral meetings, the U.S. delegation often meets with delegations from other leading PSI countries bilaterally. Agency officials use these bilateral meetings to reach agreements with other leading PSI countries to host future multilateral PSI planning meetings, participate in PSI exercises, or engage in outreach to countries that do not yet endorse or support PSI. Agency officials said that the bilateral meetings have been useful in increasing U.S. cooperation and coordination with the 19 other leading PSI countries. Meeting bilaterally before the multilateral PSI planning meetings allows the U.S. delegation to make arrangements with other leading PSI countries before the large plenary session of the multilateral PSI planning meeting begins. Agency officials stated that the plenary session and related breakout sessions at multilateral meetings have been useful in increasing cooperation and coordination with their counterparts from other leading PSI countries. The plenary session is where the heads of the delegations from the 20 leading PSI countries meet to discuss current PSI issues and explain their countries' perspectives and opinions on such issues. Following or concurrent with the plenary session, breakout sessions are held for working-level officials to discuss exercise, law enforcement, intelligence, or legal issues in more detail. However, because the multilateral PSI planning meetings include only the 20 leading PSI countries (including the United States), U.S. agencies have not built and expanded relationships in the same way with their counterparts from the more than 70 additional PSI countries who are not invited to attend these meetings. Agency officials acknowledged that more needs to be done to directly engage these more than 70 additional PSI countries. U.S. agencies also have made efforts to increase cooperation and coordination with PSI countries through hosting and/or participating in PSI exercises, but countries from among the more than 70 PSI countries who are not invited to attend multilateral meetings are not always involved.
While the United States encourages PSI-supporting countries to participate in PSI exercises, agency officials acknowledged that more needs to be done to directly engage the PSI countries who are not invited to attend multilateral PSI planning meetings. According to DOD, PSI exercises are intended to test national capabilities to conduct air, ground, and maritime interdictions; increase understanding of PSI among participating countries; and establish interoperability among PSI participants. The 20 leading PSI countries have established a schedule of PSI exercises to practice and enhance collective capabilities to interdict suspected WMD cargoes shipped by sea, air, and land. These exercises have also included simulations and scenarios to practice country-to-country and interagency communication processes to conduct WMD interdictions. Twenty-one countries have led 36 PSI exercises from September 2003 through September 2008. As figure 1 shows, these have included sea, land, and air exercises spanning the different regions of the globe, although more of them have been held in Europe and the Mediterranean. Also, while the United States has led a number of the exercises, the large majority have been led by other PSI countries, with European countries leading most of these. However, only 6 of the 36 exercises held from September 2003 to September 2008 were hosted or cohosted by countries from among the more than 70 PSI countries who are not invited to attend the multilateral PSI planning meetings. According to agency officials, U.S. agencies have used PSI exercises to increase cooperation and coordination with PSI countries and to educate countries that have not yet endorsed PSI about the initiative. For example, DOD officials stated that they used a U.S.-hosted September 2007 exercise to protect the Panama Canal as a means of increasing cooperation and coordination among the 8 PSI countries (including the United States) that participated in it. However, of the 8 PSI countries that participated, only 3 were from among the more than 70 PSI countries who are not invited to attend multilateral meetings. According to DOD officials, the inclusion of PSI in existing DOD exercises also creates opportunities to educate other countries about PSI. The September 2007 exercise was an existing DOD exercise that included a PSI component and involved 9 other countries that have not yet endorsed PSI. However, agency officials cautioned against potential backlash from "overloading" existing DOD exercises with PSI components. For example, foreign countries may choose not to participate in an existing DOD exercise if a PSI component appears to overshadow the original objectives of the exercise. U.S. agency officials stated that they have engaged in other outreach activities to increase cooperation and coordination with PSI countries. For example, since we issued our 2006 report, State sponsored a PSI fifth anniversary conference in May 2008 attended by 86 PSI countries. At this conference, these countries restated their support for PSI and the PSI Statement of Interdiction Principles. State officials also stated that their outreach efforts have included promoting the PSI when senior State officials meet foreign representatives or make high-level country visits.
In addition, agency officials said the United States and other leading PSI countries sometimes engage in ad hoc outreach activities to other PSI countries before or after multilateral PSI planning meetings, such as a 1-day outreach session with Middle Eastern PSI countries after the February 2008 multilateral meeting in London, England. Also, State is continuing to seek international agreements, such as PSI shipboarding agreements, with input from the U.S. Coast Guard. These legally binding bilateral agreements between the United States and other countries facilitate bilateral, reciprocal cooperation by establishing the authorities and procedures the parties use to confirm and authorize flag state consent to board and search each other's vessels suspected of carrying WMD and related materials. Since PSI was announced in 2003, the United States has signed a total of nine PSI shipboarding agreements; three of these, with Malta, Mongolia, and the Bahamas, were signed after we issued our report in 2006. In addition, as we reported in September 2006, the United States helped negotiate an amendment to the Convention on the Suppression of Unlawful Acts Against the Safety of Maritime Navigation that criminalizes WMD proliferation activities. The amendment also created an international framework for nations that are party to the amended convention to board ships believed to be engaged in WMD proliferation activities. Agency officials said that the amended convention was sent to the Senate for review in October 2007, and the Senate Foreign Relations Committee voted favorably on it on July 29, 2008. According to agency officials, the Senate gave its advice and consent to the ratification of the amended convention on September 25, 2008. The administration awaits congressional enactment of the necessary implementing legislation. With the success of amending the maritime convention, U.S. agencies, with other members of the International Civil Aviation Organization, are currently examining ways to amend the Montreal Convention of 1971 to criminalize the airborne transportation of WMD and related materials. Other U.S. agencies have also made some efforts to increase cooperation and coordination with PSI countries through outreach activities. According to DOD officials, DOD has produced talking points on PSI for high-level, military-to-military discussions with PSI countries and, where appropriate, for high-level DOD officials' discussions with high-level foreign political officials. Also, through the recently established Africa Command, DOD officials, in consultation with State, have contacted some North African political officials on enhancing their involvement in PSI activities, including exercises. The FBI sponsored a workshop in 2006 to train law enforcement officials from the 19 other leading PSI countries to identify WMD items. According to agency officials, the attendance of representatives from the 19 other leading PSI countries at the workshop led to improved relationships between the United States and these countries, and these relationships are still yielding benefits. However, only representatives from the 19 other leading PSI countries that attend multilateral meetings were invited to the 2006 workshop; no other PSI countries were invited. According to State and DOD officials, the departments have not developed a formal, written strategy to resolve interdiction issues, as GAO previously recommended. Agency officials stated that the involvement of the U.S.
delegation at the multilateral meetings is part of an attempt to resolve these issues. The administration's PSI implementation report states that diplomatic, military, law enforcement, and legal experts from the United States and the 19 other leading PSI countries convene at multilateral PSI planning meetings to develop cooperative strategies to address issues that extend beyond the control of any one country, such as compensation for seized cargo. These issues are discussed through a plenary session and in greater detail through law enforcement, legal, intelligence, and exercise breakout sessions. The PSI implementation report also states that the United States, a leading member of the meetings, continues to develop and implement multinational strategies to resolve issues beyond the exclusive control of the United States. The administration has only partially addressed the provisions of the Implementing Recommendations of the 9/11 Commission Act of 2007. Although relevant agencies perform various activities under PSI, the administration's approach to PSI activities overall has been ad hoc. While DOD has taken more steps than State and law enforcement agencies to address some of the law's provisions, such as clarifying policies and procedures, none of the agencies has fully addressed the law's provisions. Consistent with internal control standards, establishing clear PSI policies and procedures and performance indicators to measure results will help the agencies better organize their PSI activities. While U.S. agencies have made efforts to increase cooperation and coordination with the 19 other leading PSI countries that attend multilateral PSI planning meetings, they have not yet built relationships in the same way with the more than 70 PSI countries that are not part of these meetings. Agency officials acknowledged that more efforts are needed to directly engage these countries; doing so could create opportunities for increased PSI cooperation and coordination, including information exchanges between them and the United States. We also reaffirm the recommendations from our 2006 report on PSI that DOD and State should better organize their efforts for performing PSI activities, including establishing clear PSI policies and procedures and indicators to measure the results of PSI activities, and that they develop a strategy to work with PSI-participating countries to resolve interdiction issues. Since PSI activities are increasingly focused on law enforcement issues, we recommend that relevant law enforcement agencies, such as CBP, FBI, and Coast Guard, establish clear PSI policies and procedures and work toward developing performance indicators to support PSI activities, including PSI workshops, training courses, and exercises. Since U.S. agencies have not built relationships with their counterparts from the more than 70 PSI countries who are not invited to attend multilateral PSI planning meetings to the same extent as with the 19 other leading PSI countries, we recommend that DOD, in cooperation with State, take additional steps to increase cooperation, coordination, and information exchange between the United States and these countries. In building such relationships, DOD and State will have to work cooperatively with the 19 other leading PSI countries that attend the PSI multilateral planning meetings. We provided a draft of this report to the Secretaries of State, Defense, Homeland Security, and Justice for their review and comment.
We received written comments from State, DOD, and FBI within Justice that are reprinted in appendixes VI, VII, and VIII; we also received e-mail comments from DHS. DHS and FBI concurred with our first recommendation, and State and DOD concurred with our second recommendation. State and DHS also provided us with technical comments, which we incorporated as appropriate. DHS concurred with our first recommendation and provided a planned corrective action stating that CBP will update its PSI directive and implementation plan, including adding appropriate performance indicators and milestones. FBI also concurred with our first recommendation and described some steps being taken to mitigate the issues. DOD concurred with our second recommendation and stated that it has already taken several steps to implement it. State also concurred with our second recommendation, recognizing the need to deepen the involvement and knowledge of all PSI endorsing countries and stating that it is undertaking new efforts to address this need. State said that foremost among future plans of the leading PSI countries that attend the multilateral meetings is a focus on regional PSI activities and outreach workshops to increase the participation of those PSI countries who are not invited to attend the multilateral meetings. State maintained that a PSI directive is not necessary to strengthen and expand PSI because an existing WMD interdiction process created by a classified National Security Presidential Directive is sufficient. However, as we noted in our report, the existing WMD interdiction process predates the creation of PSI and does not cover U.S. agencies' involvement in three broad PSI activities: multilateral planning meetings, exercises, and other outreach efforts. State also said the agency uses the number of countries endorsing PSI, the number and complexity of PSI exercises, and the conclusion of PSI shipboarding agreements as indicators to measure PSI performance. However, a good internal control environment calls for agencies to create their own means to monitor and evaluate their efforts to identify areas needing improvement and requires assessing the quality of performance of ongoing and completed activities over time. We reaffirm the recommendation from our 2006 report that DOD and State should better organize their efforts for performing PSI activities, including establishing indicators to measure the results of PSI activities. State also said that it is not feasible or effective to develop a single comprehensive written strategy to deal with issues arising after interdictions because every interdiction must be dealt with on a case-by-case basis. While acknowledging the unique characteristics of each interdiction, we reaffirm our prior recommendation; the recurring interdiction issues that are beyond the control of the United States, as noted in our 2006 classified report, demonstrate the need for a written strategy to resolve these issues. State also stated that it has policies and procedures in place for PSI activities, although they are not recorded in a single document, but it did not provide us with any evidence of these written PSI policies and procedures. We are sending copies of this report to interested congressional committees. We also will make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-8979 or christoffj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IX. To examine U.S. agencies' efforts to take a variety of actions to expand and strengthen the Proliferation Security Initiative (PSI), we assessed (1) the extent to which the administration issued a PSI directive, as called for in a sense of Congress provision of the law, and submitted required PSI-related reports to Congress; (2) the steps U.S. agencies have taken to establish clear PSI policies and procedures, structures, budgets, and performance indicators; and (3) the efforts U.S. agencies have made to increase cooperation and coordination with PSI countries and develop a strategy to resolve interdiction issues. We employed various methodologies to address these three objectives. We reviewed the Department of Defense's (DOD) Public Affairs Guidance on the Proliferation Security Initiative; the Report to Congress on Implementation of the Proliferation Security Initiative required by Pub. L. No. 110-53, Section 1821; the Chairman of the Joint Chiefs of Staff Instruction on the Proliferation Security Initiative (issued in 2005 and revised in 2007); and documentation on the PSI fifth anniversary conference held in May 2008 in Washington, D.C. In addition, we reviewed various documents produced by the Department of State (State), DOD, Customs and Border Protection (CBP), and other agencies involved in PSI, such as presentations, management reports, and cables on U.S. agencies' participation in and management of their involvement in PSI activities. We reviewed various documents produced by the U.S. delegation to multilateral PSI planning meetings, including presentations, exercise summaries, meeting summaries, and DOD documents that discussed best practices for PSI exercises. We met with officials from State, DOD, CBP, the Federal Bureau of Investigation (FBI), Coast Guard, and other agencies in Washington, D.C., involved in PSI activities. We interviewed officials and military personnel at five DOD Combatant Commands (COCOMs): (1) Central Command in Tampa, Florida; (2) European Command in Stuttgart, Germany; (3) Africa Command in Stuttgart, Germany; (4) Southern Command in Miami, Florida; and (5) the Strategic Command's Center for Combating WMD at Fort Belvoir, Virginia. We discussed how DOD manages and coordinates its involvement in PSI activities, including preparation and execution of PSI components within existing DOD exercises, as well as stand-alone PSI exercises; cooperation between the COCOMs, particularly with the Center for Combating WMD; and management of PSI activities between the Joint Staff and the COCOMs. To collect detailed qualitative information from participants on how and why the multilateral PSI planning meetings (including breakout sessions and related bilateral meetings) are or are not useful for the U.S. delegation, we conducted structured interviews with 12 U.S. participants. In addition, we gathered the participants' perspectives on the structure, evolution, and possible improvements for such meetings through the structured interviews. While we did not select a generalizable sample, we did select one that included officials with a wide range of views and relatively more experience with the meetings. Specifically, we selected U.S.
agency officials and military personnel who had a range of military, law enforcement, legal, diplomatic, and intelligence expertise and who had attended two or more of the last six multilateral PSI planning meetings. To ensure that the structured instrument we used was clear and comprehensive, we pretested the instrument with two agency officials who had attended at least four of the last six multilateral meetings. We made changes to the content and format of the structured interview based on comments from the expert reviews, as well as the pretests. The scope of our review was set by the Implementing Recommendations of the 9/11 Commission Act of 2007. The law specified that the President and relevant agencies and departments take a variety of actions to expand and strengthen PSI, including implementing recommendations from our September 2006 classified report, which identified weaknesses in the U.S. government's planning and management of PSI. Under a sense of Congress provision of the law, the President is called upon to issue a PSI directive to U.S. agencies, and U.S. agencies are called upon to take actions listed in the law, namely to establish clear PSI policies and procedures, structures, funding, and performance indicators to measure the results of PSI activities; to take steps to increase cooperation and coordination with PSI countries; and to develop a strategy to resolve interdiction issues. The law required the President to submit a PSI implementation report to congressional committees by February 2008; it also requires State and DOD to submit a comprehensive joint budget report to Congress describing U.S. funding and other resources for PSI-related activities. Congress required GAO to issue three consecutive reports assessing the effectiveness of PSI, including progress made in implementing the provisions of the act. This report is the first of the three. We conducted this performance audit from November 2007 to November 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The PSI is a response to the growing challenges posed by the proliferation of weapons of mass destruction (WMD), their delivery systems, and related materials worldwide. The PSI builds on efforts by the international community to prevent proliferation of such items, including existing treaties and regimes. It is consistent with, and a step in the implementation of, the UN Security Council Presidential Statement of January 1992, which states that the proliferation of all WMD constitutes a threat to international peace and security, and underlines the need for member states of the UN to prevent proliferation. The PSI is also consistent with recent statements of the G8 and the European Union, establishing that more coherent and concerted efforts are needed to prevent the proliferation of WMD, their delivery systems, and related materials. PSI participants are deeply concerned about this threat and about the danger that these items could fall into the hands of terrorists and are committed to working together to stop the flow of these items to and from states and nonstate actors of proliferation concern.
The PSI seeks to involve, in some capacity, all states that have a stake in nonproliferation and the ability and willingness to take steps to stop the flow of such items at sea, in the air, or on land. The PSI also seeks cooperation from any state whose vessels, flags, ports, territorial waters, airspace, or land might be used for proliferation purposes by states and nonstate actors of proliferation concern. The increasingly aggressive efforts by proliferators to stand outside or to circumvent existing nonproliferation norms, and to profit from such trade, require new and stronger actions by the international community. We look forward to working with all concerned states on measures they are able and willing to take in support of the PSI, as outlined in the following set of "Interdiction Principles." PSI participants are committed to the following interdiction principles to establish a more coordinated and effective basis through which to impede and stop shipments of WMD, delivery systems, and related materials flowing to and from states and nonstate actors of proliferation concern, consistent with national legal authorities and relevant international law and frameworks, including the UN Security Council. They call on all states concerned with this threat to international peace and security to join in similarly committing to: 1. Undertake effective measures, either alone or in concert with other states, for interdicting the transfer or transport of WMD, their delivery systems, and related materials to and from states and nonstate actors of proliferation concern. "States or nonstate actors of proliferation concern" generally refers to those countries or entities that the PSI participants involved establish should be subject to interdiction activities because they are engaged in proliferation through: (1) efforts to develop or acquire chemical, biological, or nuclear weapons and associated delivery systems or (2) transfers (either selling, receiving, or facilitating) of WMD, their delivery systems, or related materials. 2. Adopt streamlined procedures for rapid exchange of relevant information concerning suspected proliferation activity, protecting the confidential character of classified information provided by other states as part of this initiative, dedicate appropriate resources and efforts to interdiction operations and capabilities, and maximize coordination among participants in interdiction efforts. 3. Review and work to strengthen their relevant national legal authorities where necessary to accomplish these objectives, and work to strengthen when necessary relevant international law and frameworks in appropriate ways to support these commitments. 4. Take specific actions in support of interdiction efforts regarding cargoes of WMD, their delivery systems, or related materials, to the extent their national legal authorities permit and consistent with their obligations under international law and frameworks, to include: a. Not to transport or assist in the transport of any such cargoes to or from states or nonstate actors of proliferation concern and not to allow any persons subject to their jurisdiction to do so. b.
At their own initiative, or at the request and good cause shown by another state, to take action to board and search any vessel flying their flag in their internal waters or territorial seas, or areas beyond the territorial seas of any other state, that is reasonably suspected of transporting such cargoes to or from states or nonstate actors of proliferation concern, and to seize such cargoes that are identified. c. To seriously consider providing consent under the appropriate circumstances to the boarding and searching of its own flag vessels by other states, and to the seizure of such WMD-related cargoes in such vessels that may be identified by such states. d. To take appropriate actions to (1) stop and/or search in their internal waters, territorial seas, or contiguous zones (when declared) vessels that are reasonably suspected of carrying such cargoes to or from states or nonstate actors of proliferation concern and to seize such cargoes that are identified and (2) to enforce conditions on vessels entering or leaving their ports, internal waters, or territorial seas that are reasonably suspected of carrying such cargoes, such as requiring that such vessels be subject to boarding, search, and seizure of such cargoes prior to entry. e. At their own initiative or upon the request and good cause shown by another state, to (1) require aircraft that are reasonably suspected of carrying such cargoes to or from states or nonstate actors of proliferation concern and that are transiting their airspace to land for inspection and seize any such cargoes that are identified and/or (2) deny aircraft reasonably suspected of carrying such cargoes transit rights through their airspace in advance of such flights. f. If their ports, airfields, or other facilities are used as transshipment points for shipment of such cargoes to or from states or nonstate actors of proliferation concern, to inspect vessels, aircraft, or other modes of transport reasonably suspected of carrying such cargoes, and to seize such cargoes that are identified.
The following multilateral PSI planning meetings, grouped by year, are also known as Operational Expert Group (OEG) meetings:
2003: 1. Brisbane, Australia (July) 2. London, United Kingdom (July) 3. Paris, France (September) 4. London, United Kingdom (October) 5. Washington, D.C., United States (December)
2004: 1. Ottawa, Canada (April) 2. Oslo, Norway (August) 3. Sydney, Australia (November)
2005: 1. Omaha, Nebraska, United States (March) 2. Copenhagen, Denmark (July) 3. Hamburg, Germany (November) – Regional OEG meeting
2006: 1. Miami, Florida, United States (April) 2. Singapore (July) 3. Montreal, Canada (December)
2007: 1. Auckland, New Zealand (March) 2. Rhodes, Greece (October)
2008: 1. London, United Kingdom (February) 2. Paris, France (September)
1. Proliferation Security Initiative Shipboarding Agreement Signed with Liberia: Signed February 11, 2004; entered into force December 9, 2004. According to State, Liberia has the second largest ship registry in the world.
2. Proliferation Security Initiative Shipboarding Agreement Signed with Panama: Signed May 12, 2004; entered into force December 1, 2004. According to State, Panama has the largest ship registry in the world.
3. Proliferation Security Initiative Shipboarding Agreement Signed with Marshall Islands: Signed August 13, 2004; provisionally applied from August 13, 2004; entered into force November 24, 2004. According to State, Marshall Islands has the eleventh largest flag registry in the world.
4.
Proliferation Security Initiative Shipboarding Agreement Signed with Republic of Croatia: Signed June 1, 2005; entered into force March 5, 2007.
5. Proliferation Security Initiative Shipboarding Agreement Signed with Cyprus: Signed July 25, 2005; entered into force January 12, 2006. According to State, Cyprus has the sixth largest ship registry in the world and was the first European Union member to sign such an agreement with the United States.
6. Proliferation Security Initiative Shipboarding Agreement Signed with Belize: Signed August 4, 2005; entered into force October 19, 2005. According to State, Belize is the first Caribbean Community (CARICOM) member state to sign such an agreement with the United States in support of PSI.
7. Proliferation Security Initiative Shipboarding Agreement Signed with the Republic of Malta: Signed March 15, 2007; entered into force December 19, 2007. According to State, Malta has the eighth largest ship registry in the world.
8. Proliferation Security Initiative Shipboarding Agreement Signed with Mongolia: Signed October 23, 2007; entered into force February 20, 2008.
9. Proliferation Security Initiative Shipboarding Agreement Signed with the Bahamas: Signed August 11, 2008; not yet in force. According to State, the Bahamas has the third largest flag registry of merchant ships in the world and serves as an open registry for shipowners from dozens of countries.
Department of State Comments on GAO Draft Report: NONPROLIFERATION: U.S. Agencies Have Taken Some Steps, but More Effort Is Needed to Strengthen and Expand the Proliferation Security Initiative (GAO-09-43, GAO Code 320563)
Thank you for giving the Department of State the opportunity to comment on the draft report NONPROLIFERATION: U.S. Agencies Have Taken Some Steps, but More Effort Is Needed to Strengthen and Expand the Proliferation Security Initiative. The comments below respond to statements made in various places in the GAO's draft report. GAO Recommendation: DOD and State should take steps to increase cooperation and coordination between the United States and the more than 70 PSI countries who are not invited to attend multilateral PSI planning meetings. Response: The U.S. and the 19 other countries participating in the PSI Operational Experts Group (OEG) have recognized the need to deepen the involvement and knowledge of all PSI endorsing states. This year, we are undertaking several new efforts to implement this objective, including creation of a PSI web portal to share documents among all PSI countries, and creation of a regular PSI newsletter for all PSI countries. The Department of State sponsored a PSI 5th Anniversary Senior-Level Meeting on May 28, 2008 in Washington for all PSI countries. Representatives from 86 PSI countries attended. At this meeting, the attendees discussed current PSI issues and restated their support for the PSI and the PSI Statement of Interdiction Principles, in particular through adoption of the Washington Declaration (available online at http://www.state.gov/r/pa/prs/ps/2008/may/105268.htm). The GAO's draft report failed to note that, on the following day, the U.S. hosted a PSI outreach workshop, attended by representatives of 21 countries that had not yet endorsed the PSI, as well as most of the PSI participating states. The workshop provided detailed information on the broad range of PSI activities and tools that have been developed for training, organizing for, and conducting interdictions of shipments of proliferation concern.
It was designed both to promote PSI endorsement by additional states and to deepen the knowledge of and participation in PSI activities by states that have endorsed the PSI. Foremost among future plans of the countries participating in the OEG is to focus on regional PSI activities and outreach workshops intended to increase active PSI participation by the countries that do not participate in the OEG meetings. For example, the USG will host an OEG meeting in May 2009 in Miami, Florida, and will invite all PSI partners from the Western Hemisphere to actively participate. This will be the first time an OEG meeting will integrate non-OEG regional partners from the Western Hemisphere. The meeting's content will focus on interdiction issues and challenges most relevant to the region. Other PSI partners also plan to host regional OEG meetings for other regions in 2009 and beyond. These meetings will help to increase the capabilities of all PSI partners to interdict WMD shipments. The Department of State has always disseminated summaries of each PSI OEG meeting to all PSI countries. State also has supported - with funding and/or expert advice - several PSI exercises in Central and Eastern Europe, Africa, and Central/South America hosted by and intended for non-OEG countries. These exercises have enhanced the skills and interoperability of the non-OEG countries in that region in combating WMD-related trafficking. In addition, exercises hosted by OEG countries in the last two years have been attended by a number of non-OEG countries, as well as by countries that have not yet endorsed the PSI, as noted in DOD's comments on this report. In addition, the Department of State leads USG efforts to conclude bilateral, reciprocal PSI shipboarding agreements with key ship registry states, with support from the U.S. Coast Guard, DOD and the Department of Justice. All nine agreements we have concluded so far are with non-OEG PSI partner nations. Since 2006, three more shipboarding agreements have been signed -- with Malta, Mongolia and The Bahamas. These agreements provide expedited procedures for obtaining authorization to board and search ships suspected of transporting proliferation-related cargo. GAO Statement: The Administration has not issued a PSI directive that directs U.S. agencies to establish clear PSI authorities, structures, roles, responsibilities, policies and procedures, including budget requests for PSI activities. In its implementation report to Congress in July 2008, the Administration stated it is unnecessary to issue a directive for PSI because it believes that an existing WMD interdiction process, documented in an 8-page 2002 National Security Presidential Directive, already addresses the relevant issues that would be covered under a PSI directive. Response: As was the case in 2006, all U.S. PSI activities are conducted via an extensive interagency coordination process through a policy coordination committee chaired by National Security Council (NSC) staff, implementing clearly defined strategy documents that established agency roles, responsibilities, and common goals. In its PSI implementation report to Congress of July 2008, the Administration stated it does not consider issuing an additional Presidential directive to be necessary in order to continue expanding and strengthening the PSI. The Administration continues to hold this view. A classified National Security Presidential Directive governs the interdiction process.
The report correctly notes that there is no single Administration budget request for the PSI. In fact, the PSI was designed to be not a single, distinct program, but rather a set of activities interwoven into the USG’s established diplomatic, military, and law enforcement relations with other countries. In addition, many existing programs, missions, international agreements and frameworks promote the same objectives as the PSI without being narrowly defined as part of the PSI. It should remain the responsibility of each agency to determine whether it can accomplish its PSI objective best by establishing a budget line item for PSI activities. GAO Statement: The existing WMD interdiction process covers how U.S. agencies should coordinate U.S. government efforts to conduct WMD interdictions. However, this process predates the creation of PSI and does not cover U.S. agencies’ involvement in three broad PSI activities: multilateral planning meetings, exercises, and other outreach efforts. Response: Presidential directives set out broad U.S. Government policy and goals. Such a document is neither appropriate nor necessary to administer the details of USG agencies’ work on PSI Operational Experts Group meetings, PSI exercises, PSI outreach, and WMD-related interdictions. USG agencies are working together closely and continuously on these PSI activities, via an extensive interagency coordination process through a policy coordination committee chaired by National Security Council (NSC) staff. GAO Statement: U.S. agencies have not established performance indicators to measure the results of PSI activities. Response: Standard Department of State procedures are followed regarding indicators to measure program results for State’s work on the PSI. There are certain unclassified PSI activities that can be quantified, which State uses as indicators to measure the Initiative’s progress as required in the annual Strategic Plan of the Bureau of International Security and Nonproliferation (ISN). These are: increases in the number of countries endorsing the PSI; the number and complexity of PSI exercises conducted around the world; and the conclusion of PSI shipboarding agreements. The Department of State requires evidence of countries’ endorsement of the PSI Statement of Interdiction Principles in order to consider them to be PSI participants. Such evidence can take the form of a diplomatic note to the U.S. or to another PSI partner state, a public statement of endorsement, or representation at a meeting of PSI participating states. Use of this clear criterion allowed the Department to begin publishing in 2006 a list of PSI participants on the State website. GAO Statement: State officials stated that they measure PSI progress by the number of endorsing PSI countries; the number and complexity of PSI exercises around the world; and the number of PSI shipboarding agreements. However, it is difficult to attribute these high-level outcomes to the PSI activities of U.S. agencies because these outcomes are dependent on the actions of other governments as well. Response: The mission of the Department of State is to conduct international diplomacy in support of U.S. foreign policy goals, where all outcomes depend on the actions of other governments. State’s PSI activities are no exception, as the GAO’s previous report highlighted. State uses these performance indicators because we are confident that the results would not have occurred without our efforts. 
GAO Statement: State and DOD have not developed a written strategy to resolve interdiction issues. Agency officials stated that the involvement of the U.S. delegation at the multilateral meetings is part of an attempt to resolve these issues. Response: U.S. agencies have developed tools and use standard procedures to plan and execute interdictions. To deal with issues arising as a result of interdictions that have taken place, we have not found it feasible or effective to develop a single, comprehensive written strategy, because every interdiction case is unique and each must be dealt with on a case-by-case basis depending on the specific circumstances. U.S. agencies are familiar with the tools and resources available to deal with the issues that come up. Subject matter experts from across the USG consult and coordinate courses of action to address each WMD-related interdiction case, guided by Presidential Directives and agency procedures. Because interdictions involve other countries, resolving interdiction issues is a task the U.S. cannot accomplish by itself. The PSI is based on the concept of cooperation and coordination among PSI partners in countering WMD-related trafficking, each utilizing the national authorities available to it. All PSI activities are aimed at strengthening such cooperation and coordination. GAO Statement: State has an existing structure but does not have policies, procedures, or a budget in place for PSI activities. Response: The Department of State does have policies and procedures in place for its PSI activities, although they are not all recorded in a single document. State updates its PSI plans and strategies frequently to take developments into account. The Department of State has provided funding to support four complex interdiction-related PSI exercises hosted by PSI partners Poland and Ukraine, as authorized under section 504(a) of the FREEDOM Support Act and the Nonproliferation and Disarmament Fund's (NDF) expanded authority under the Nonproliferation, Anti-terrorism, Demining and Related Programs (NADR). Apart from these line items, the ISN Bureau's operating budget has been sufficient to fund the expenses for State's PSI activities. In order to ensure that Department of State activities related to the PSI and interdiction are properly coordinated, in late 2005 the Department created the Office of Counterproliferation Initiatives. This Office is responsible for all State Department PSI activities, as part of its counterproliferation diplomacy mission. As noted in its Mission Statement, the Office of Counterproliferation Initiatives develops and conducts diplomatic outreach to prospective PSI participants, informs current participants of PSI events, and works on broadening their participation; participates in negotiation of ship-boarding and other relevant international agreements and understandings; and facilitates State support to the PSI Operational Experts Group. Most important, this Office routinely interfaces with foreign governments on WMD-related interdictions and the disposition of seized cargo. GAO Statement: International participation is voluntary and there are no binding treaties on those who choose to participate. Response: It is correct that PSI participation is voluntary. Of course, the actions of PSI participants must be consistent with their national legal authorities and relevant international law.
The PSI is part of the overall international nonproliferation framework that includes the international nonproliferation treaties – such as the NPT, CWC, and BWC – to which most countries are parties. The Law of the Sea and the Chicago Conventions govern the actions of PSI countries in the maritime and air domains, respectively. In addition, the UN Security Council resolutions addressing North Korea's and Iran's WMD-related activities, as well as UNSC Resolution 1540, are legally binding on all UN Member states. Finally, our bilateral PSI shipboarding agreements with other countries are binding on the Parties. GAO Statement: The multilateral PSI planning meetings themselves have no compliance mechanisms. Response: The term "compliance" indicates legal obligations. The meetings of the 20-nation OEG are not based on or involved with establishing legal obligations, so it is meaningless to refer to compliance mechanisms in this context. The operational experts meet to discuss and resolve issues related to interdictions, and to plan exercises and outreach events. This forum for experts to meet regularly with their counterparts from other countries has proven very valuable for strengthening the PSI network and the collective body of knowledge about how to effectively interdict proliferation-related trafficking. We are working on ways to expand the benefits of the OEG to all PSI countries by holding more regionally-focused meetings. GAO Statement: The United States helped negotiate an amendment to the Convention on the Suppression of Unlawful Acts Against the Safety of Maritime Navigation that criminalizes WMD proliferation activities…Agency officials said that the amended convention was sent to the Senate for review in October 2007, and the Senate Foreign Relations Committee voted favorably on it on July 29, 2008. It is now awaiting full Senate action. Update: The Senate gave its advice and consent to the ratification of the 2005 Protocols to the Convention on the Suppression of Unlawful Acts against the Safety of Maritime Navigation on September 25, 2008 (source: Congressional Record). The Administration welcomes the Senate's action, and awaits Congressional enactment of the necessary implementing legislation before the U.S. can deposit its instruments of ratification. The following are GAO's comments on the Department of State's letter dated October 17, 2008. 1. We have added information to the report that State hosted a PSI outreach workshop at the PSI fifth anniversary conference. 2. We have added information to the report on the future multilateral PSI planning meeting in 2009 to be hosted by the United States. 3. Appendix IV provides information on the shipboarding agreements the United States has signed with other countries. 4. As we stated in our report, the existing WMD interdiction process covers how U.S. agencies should coordinate U.S. government efforts to conduct WMD interdictions. This process, as we noted, predates the creation of PSI and does not cover U.S. agencies' involvement in three broad PSI activities: multilateral planning meetings, exercises, and other outreach efforts. 5. As noted in our report, the WMD interdiction process predates the creation of PSI and does not cover U.S. agencies' involvement in three broad PSI activities: multilateral planning meetings, exercises, and other outreach efforts. 6.
We reaffirm the recommendation from our 2006 report that DOD and State should better organize their efforts for performing PSI activities, including establishing indicators to measure the results of PSI activities. As we stated in our report, a good internal control environment calls for agencies to create their own means to monitor and evaluate their efforts so that they can identify areas needing improvement. Further, a good internal control environment requires both ongoing monitoring and separate evaluations of completed activities and should assess the quality of performance over time. 7. See response (6) above. 8. State has not worked with DOD to implement the second recommendation from our 2006 report, as called for in the law. While acknowledging the unique characteristics of each interdiction, we reaffirm our prior recommendation. The recurring interdiction issues that are beyond the control of the United States, as noted in our 2006 classified report, demonstrate the need for a written strategy to resolve these issues. 9. While State said that it has PSI policies and procedures that are not recorded in a single document, it did not provide GAO with any evidence of its written PSI policies and procedures. 10. Although State reports providing funding to support certain PSI exercises, State has not requested funds necessary for PSI-related activities, as called for in the law. 11. This statement was based on information from U.S. agency officials. We have modified the text in our report to attribute it to agency officials. 12. We have updated our report to reflect the Senate's actions. The following are GAO's comments on the Department of Defense's letter dated October 10, 2008. 1. We have added information to the report noting the 2009 PSI events DOD will be sponsoring. In addition to the individual named above, Godwin Agbara, Assistant Director; Ian Ferguson; Yana Golburt; Helen Hwang; and Lynn Cothern made key contributions to this report.
Section 302 of the Ethics Reform Act of 1989 (31 U.S.C. 1353) provides that the Administrator of GSA, in consultation with the Director of OGE, prescribe by regulation the conditions under which an agency or employee may accept payment from nonfederal sources for travel, subsistence, and related expenses with respect to attendance at any meeting or similar function relating to the official duties of the employee. In the request for comments on the implementing regulations, GSA stated that it expected OGE to review agency implementation of section 302 in connection with its ongoing reviews of agency ethics programs. GSA promulgated interim regulations in 41 C.F.R. 304 to implement section 302. These regulations, which have been in effect since March 8, 1991, include the conditions under which travel reimbursements may be accepted, a description of payment methods, and a requirement that all instances of nonfederally reimbursed travel in excess of $250 be reported semiannually to OGE. Initially, these reports were to include the name and position of the employee, the nature of the event, the dates of travel, and the amount of payment. However, GSA amended its regulations effective December 9, 1992, by adding a few additional requirements, including a provision that reports to OGE should also contain the dates of the events and itemized expenses. Commerce and FTC have established regulations that require adherence to GSA's regulations. Private companies, universities, and other organizations often want federal employees with expertise in such areas as weather forecasting and antitrust and trade regulations to participate in meetings and other events. When these employees receive offers of reimbursed travel, subsistence, or related expenses for a trip to such an event, the employees are to prepare a travel order to obtain approval for the trip and a form requesting approval to accept the offer of reimbursement. At Commerce and FTC, the forms requesting approval for reimbursement, referred to as request forms, require the inclusion of details concerning the trip. These details include the name of the source offering reimbursement, the dates and nature of the event, and the types and amounts of expenses that will be reimbursed. This information is to be used by the employees' supervisors and other higher-level reviewing officials. If a trip meets the applicable GSA regulations for authorizing travel on a reimbursable basis, a travel order can be approved. At Commerce, all travel reimbursement offers made to the Secretary of Commerce are reviewed by the Department's Office of General Counsel, while reimbursement offers made to other Commerce employees are reviewed in their respective offices. At FTC, travel reimbursement offers made to any employee are reviewed by FTC's Office of General Counsel. During the period from March 8, 1991, when GSA's interim regulations became effective, through September 30, 1993, Commerce had 3,104 nonfederally reimbursed trips, an average of 1,242 trips per fiscal year over the roughly 2.5 fiscal years covered. We sampled 160 of these trips and found that the average amount of reimbursed expenses was $1,100 per trip. Reimbursements varied from covering all of the trip's expenses to some portion, such as airfare or lodging. During the period from October 1, 1992, through September 30, 1993, FTC had 59 nonfederally reimbursed trips. The average amount of reimbursed expenses for FTC employees was $677 per trip. Additional details on our scope and methodology follow.
To determine the adequacy of Commerce's administration of employees' acceptance of travel funds from nonfederal sources, we reviewed two random samples of nonfederally reimbursed trips at Commerce. First, because of the Subcommittee's interest in travel by high-level officials, we reviewed available documentation for a sample of 60 of the 112 trips reported by these officials during the period from March 8, 1991, through September 30, 1993. We defined "high level" to mean officials in the Office of the Secretary and the highest-level official in each of the other Commerce offices. We compared these documents, including travel requests, orders, vouchers, and receipts, to the regulations in effect at the time the trips were taken and reported. Second, we reviewed similar information for a sample of 100 of the 2,992 trips reported during the same period by Commerce employees who were not high-level officials. We limited our review to travel reimbursed on or after March 8, 1991, because the GSA interim regulations implementing Section 302 of the Ethics Reform Act of 1989 took effect at that time. We included travel reported through September 30, 1993, since these were the latest trips identified in OGE's reports when we began our review. As agreed with the Subcommittee, we also reviewed how the procedures governing reimbursed travel were administered within Commerce's Office of the Secretary and at the three offices with most of the reimbursed travel during the period. Collectively, the International Trade Administration (ITA), National Oceanic and Atmospheric Administration (NOAA), and National Institute of Standards and Technology (NIST) accounted for about 79 percent of Commerce's trips. We did this work to better understand the procedures and policies in place for administering reimbursed travel at individual offices. The Subcommittee selected FTC for review in part because it had a centralized system for reviewing travel requests, as compared with Commerce's decentralized system. Because the decision to include FTC was made during the latter part of our review, we limited our review to all FTC employee trips that were reimbursed during fiscal year 1993 to ensure that records would be readily available. We performed this review at Commerce, ITA, and FTC headquarters in Washington, D.C.; NOAA headquarters in Silver Spring, MD; and NIST headquarters in Gaithersburg, MD. We obtained comments from Commerce and FTC that are discussed on page 11 and presented in appendixes II and III. Our work was conducted from December 1993 to September 1994 in accordance with generally accepted government auditing standards. Although our review of Commerce's handling of reimbursed travel showed general compliance with applicable requirements, we found some instances of noncompliance. The most common of these was that Commerce approved employees' travel orders without first reviewing the travel requests that provide the necessary information about the trip to be taken. Although less frequent, we also identified certain deficiencies in how Commerce reported these reimbursements to OGE and in how internal controls governing reimbursed travel were applied. While no individual reporting deficiency occurred consistently, a number of the reports had at least one type of deficiency. The frequency and types of these deficiencies are shown later in this section. Under GSA regulations, authorization to accept payment from a nonfederal source should be given in advance of the travel.
As GSA states in its regulations, the requirement for advance approval is consistent with the long-standing practice of approving an employee's official travel plans in advance. Moreover, there is less risk that an employee will receive an improper payment on behalf of the agency if advance approval is required. Our review showed that travel orders indicating the existence of reimbursed travel were almost always approved before the beginning of the trip. However, our samples of 60 high-level officials and 100 other employees identified a total of 36 trips—13 and 23, respectively—in which the travel order was approved without first reviewing a travel request form. Nine of these 36 trips were in the National Weather Service in NOAA. Weather Service officials told us that they (1) require the employee to include on the travel order a statement identifying the expenses that will be reimbursed by the nonfederal source and (2) allow the employee to complete the travel request form after the trip is complete. The problem with such an approach is that there is no assurance that all of the information necessary to assess conflict-of-interest situations is submitted with the travel order. It should be noted that three of the Weather Service's nine travel orders did not contain such important information as the identity of the reimbursing organization or the amount and type of expenses to be paid. Also, 17 of the other 27 trips had travel orders that did not include some of this important information. GSA's regulation governing the semiannual reporting of nonfederally reimbursed travel to OGE requires 18 specific items of information to be reported for each trip. These items include the name of the nonfederal source, the nature of the event, the dates of the employee's travel, an itemization of benefits received, and the amount of each benefit. The regulations also require that the expenses reported be the actual amount paid by check or the value of in-kind services, other than for meals. While Commerce's Office of General Counsel, which is responsible for providing the semiannual reports of reimbursed travel to OGE, has several procedures in place to ensure that all reimbursed travel is reported to OGE, we identified some deficiencies in the reported information. Specifically, we found 27 deficiencies that were contained in 23 of the 160 trips in our sample. These deficiencies are shown in table 1. An official in Commerce's Office of General Counsel said that many of the deficiencies were due to the December 9, 1992, change in GSA's regulations, which is discussed on pages 2 and 3 of this report. In corroboration of this point, we found that all 12 of the deficiencies involving the dates of events and itemized expenses occurred shortly after the regulations were changed. However, the other deficiencies did not appear to be related to the revised regulations. The individual Commerce offices are responsible for the accuracy of expenses reported to the Office of General Counsel. In 11 of the 160 trips (9 in the all-employee sample and 2 in the high-level officials' sample), the reported amounts differed from the amounts recorded in the receipts. The reported amount was less than the receipts in five instances, ranging in difference from $7 to $523. In the other six instances, the reported amount exceeded the receipts by $12 to $350. Four of the nine instances of differing amounts in the all-employee sample occurred in ITA.
The ITA Director of the Office of Organization and Management Support commented on these four trips. For the trip that resulted in the largest variance, she said that it was possible the employee reported the government rate for the airfare rather than the actual amount paid. She also said that the other three instances, none of which was greater than $15, could be attributed to math errors. It should be noted that we could not always determine whether the actual costs of the trips were reported in Commerce's semiannual reports because receipts are not required for expenses paid in-kind. About 71 percent of the 160 reports included in-kind reimbursements. Commerce could improve its controls over travel expenses paid, either by check or in-kind, by ensuring that employees submit receipts and travel vouchers for all reimbursed expenses. The Federal Travel Regulations (Part 301-11) require employees to provide receipts for allowable cash expenditures in excess of $25. Receipts are also required for certain expenditures regardless of amount, including fees relating to travel outside of the United States. When receipts are not available, the only documentation for expenditures is the travel voucher. While the GSA regulations on nonfederally reimbursed travel do not address the need for receipts, both Commerce and FTC believe that the Federal Travel Regulations apply to expenses initially incurred by the employee and the government and later reimbursed by check from the nonfederal source. There were 16 and 36 trips in the high-level officials and all-employee samples at Commerce, respectively, that included expenses that were reimbursed either partially or fully by check. Of these 52 trips, files for 11 cases contained no evidence that receipts had been obtained to support the expenses that were claimed. According to several Commerce officials, receipts are used to bill the sources of the reimbursements and may have been sent to them. Also, for another 11 of the 160 trips in our sample, Commerce indicated that travel vouchers had never been prepared by the employees. Commerce officials said that the vouchers were not prepared in these 11 cases because all of the expenses were paid in-kind and, thus, there was no cost to the government and no need to account for the expenses incurred. However, since receipts are generally not obtained for in-kind expenses, the travel voucher serves as the only source of information for identifying the actual expenses incurred when reporting to OGE. GSA regulations governing nonfederally reimbursed travel require the actual amounts of expenses paid, in-kind and by check, to be reported to OGE. Of the 160 trips in our sample, 128 included in-kind reimbursements. Since GSA regulations do not require travelers to obtain receipts for in-kind expenses, Commerce did not have receipts for 99 of these trips and, thus, there was no assurance that the estimates of reimbursed travel coincided with the actual costs incurred. We did not identify any deficiencies in FTC's acceptance of reimbursed expenses for the 59 trips we reviewed. FTC also generally complied with all GSA requirements for reporting and documenting reimbursed travel. However, FTC could have improved its reporting by more fully describing the nature of the event in 17 reports to OGE. FTC also had internal controls in place, including the use of letters of commitment to monitor expenses. As previously discussed in the background section, GSA regulation 41 C.F.R.
304-1.9 governing reports of nonfederally reimbursed travel to OGE requires specific information in each report, including the nature of the event. In the 17 reports to OGE where FTC did not fully describe the nature of the event, FTC only stated the name of the organization conducting the event, followed by a term such as "meeting," "conference," or "symposium." The FTC Deputy Ethics Official responsible for the report said that he believes FTC's practice to be in compliance with GSA regulations. A GSA official in the division responsible for administering the regulations said that providing a specific description of the event's nature, such as an American Bar Association meeting on foreign trade tariffs, allows OGE and the public to better understand the employee's reason for attending the meeting. As discussed above, the GSA travel regulations do not require employees to obtain receipts for expenses reimbursed in-kind, but do require that the actual value of in-kind services be reported to OGE. Before trips, FTC obtains letters of commitment that provide estimates of the expenses to be paid, and it relies on the employees to inform FTC if the actual value of expenses paid in-kind deviates from the estimate. OGE is responsible for providing overall direction of executive branch policies related to preventing conflicts of interest on the part of officers and employees of any executive agency. Specific responsibilities include developing and reviewing statutes and regulations pertaining to conflicts of interest and monitoring agency ethics programs. During a review of Commerce's ethics program in 1992, Commerce officials denied OGE access to records and supporting documents relating to acceptances of travel, subsistence, and related expenses from nonfederal sources under 31 U.S.C. 1353. According to OGE officials, OGE made two verbal requests for the records and sent a letter with its final report on October 15, 1992. OGE's letter expressed the need for OGE to have access to such records and asked Commerce to reconsider its decision. OGE's letter also argued that the law requires GSA to consult with OGE in the development of the regulations and requires agencies to submit semiannual reports of payments accepted under this authority to OGE. Given OGE's involvement in those areas, combined with its overall oversight responsibilities regarding ethical standards and conduct, OGE believed its authority encompassed reviewing agency compliance with the regulation for nonfederally reimbursed travel. Commerce's General Counsel responded to OGE's letter with a letter dated November 5, 1992. The General Counsel's letter stated that Commerce officials did not provide documents related to nonfederally reimbursed travel because such information seemed to be outside the scope of OGE's audit. The letter further stated that regulations governing such travel are not found in ethics regulations but in GSA travel regulations. Therefore, Commerce believed that oversight was the responsibility of GSA rather than OGE. Since it is OGE's responsibility to monitor executive agency compliance with the statutory and regulatory requirements governing travel reimbursement from nonfederal sources, we believe that OGE is entitled to access to all records related to such reimbursed travel that an agency may possess. Commerce now agrees with our position.
In May 1994, in response to our inquiries, Commerce's Assistant General Counsel for Administration sent us a letter stating that Commerce's "position is that OGE has authority to review records relating to travel expenses accepted under the authority of the Ethics Reform Act, including supporting documents. Any past misunderstandings concerning this issue have been resolved." In addition, OGE conducted a review of Commerce's Patent and Trademark Office this year, and OGE was given access to the office's records of nonfederally reimbursed travel. Commerce's denial did not adversely affect OGE's reviews of other agencies' nonfederally reimbursed travel. Other agencies did not refuse OGE access in reviews of reimbursed travel conducted during or after Commerce's denial. Although Commerce generally complied with the requirements for nonfederally reimbursed travel, some of its procedures can be strengthened to reduce the risk of conflict of interest and improve internal controls. Specifically, Commerce could better ensure that (1) employees have approved travel requests containing all of the necessary information before trips are taken; (2) travel reports to OGE accurately disclose the circumstances of each trip, including the nature, dates, and itemized costs; and (3) trips are adequately documented with receipts and vouchers. FTC's procedures for accepting and reporting reimbursed travel were basically sound. However, FTC's semiannual reports sometimes could have better described the nature of the event to be attended. By including a better description, FTC could enable OGE to better form an opinion as to whether the employee's attendance may appear to constitute a conflict of interest. The issue of OGE's access to records relating to reimbursed travel expenses at Commerce has been resolved. Commerce's letter stating that OGE has authority to review these records and OGE's recently completed audit of such records at Commerce's Patent and Trademark Office indicate that Commerce and OGE are in accord. The overall effect of Commerce's temporary denial of access was minimal in that OGE eventually received access to Commerce records, and OGE has not been denied access to such records at any other agency. We recommend that the Secretary of Commerce take actions to ensure that (1) all Commerce employees' travel requests, containing all of the necessary information such as the name of the payer and the amount and type of expenses to be paid, are reviewed and approved before a trip; (2) the Office of General Counsel, as part of its responsibilities for submitting semiannual reports to OGE, ensures that these reports include the required information, including the dates and nature of events attended and expenses paid; and (3) Commerce offices require travel vouchers and receipts for reimbursed expenses, except for meals, which the Federal Travel Regulations do not require to be supported in this manner. Also, we recommend that the Chairman, Federal Trade Commission, require the Office of General Counsel to ensure that the agency's reports to OGE more completely describe the nature of the events attended. Commerce and FTC provided written comments on a draft of this report. These comments are summarized below and included in their entirety, along with our specific responses, in appendixes II and III.
Commerce agreed to implement our recommendations that the Department ensure that (1) all Commerce employees' travel requests be reviewed and approved before beginning a trip, (2) the Office of General Counsel ensure that reports to OGE include the required information, and (3) Commerce offices require receipts for those expenses reimbursed by check. Also, Commerce offered an alternative to our recommendation that its offices require travel vouchers by proposing that it require receipts and other evidence for all reimbursed expenses regardless of whether they were paid in-kind or reimbursed by check. We consider Commerce's proposal to be an adequate response to our concern that expenses reported to OGE be accurate. However, because a travel voucher is the established way for employees to submit travel receipts and other related information to management, we continue to believe that Commerce should require the use of vouchers. FTC agreed to implement our recommendation that it ensure that reports to OGE more completely describe the nature of the events attended. We are sending copies of this report to the Secretary of Commerce, the Chairman of FTC, and other interested parties and will also make copies available to others upon request. The major contributors to this report are listed in appendix IV. If you have any questions about this report, please contact me at (202) 512-5074. The Office of the Secretary administers the operations of all the offices in the Department of Commerce. ITA is involved in issues concerning import administration, international trade and commercial policy, and trade promotion. Specific operations include conducting a trade adjustment assistance program that provides financial assistance in the form of grants to selected firms and communities, developing and carrying out policies and programs to promote world trade, and strengthening the international trade and investment position of the United States. NOAA has five major program areas, each with its own mission related to some aspect of oceanic and atmospheric conditions. Our review focused on the two offices with the most employees, the National Marine Fisheries Service and the National Weather Service. The Fisheries Service is responsible for promoting the conservation, management, and development of living marine resources for commercial and recreational use. The program includes the management of a nationwide financial assistance program in the form of loan guaranties and a capital construction fund. The Weather Service is responsible for monitoring and predicting the state of the atmospheric and hydrologic environment. The Weather Service has contracts for computers and other services with private entities. NIST's primary mission is to promote economic growth in the United States by working with industry to develop and apply technology, measurements, and standards. NIST programs include an advanced technology effort that includes entering into contracts and cooperative agreements with businesses. NIST has eight laboratories whose roles range from establishing standards for information processing and various forms of radiation to analyzing the performance of building and construction materials. NIST now works with industry and other federal agencies in four major areas: (1) transferring technology, (2) helping smaller manufacturers tap into regional and national sources of information, (3) recognizing U.S.
companies that have successful quality management systems, and (4) assisting federal agencies and industry with specific technically based trade issues related to standards and conformity assessment. FTC is involved in investigation, rulemaking, and enforcement of laws for organizations engaged in or whose businesses affect commerce, except banks, savings and loans, federal credit unions, and common carriers. FTC uses its statutory powers to enforce both consumer protection and antitrust laws. For example, FTC is responsible for keeping the marketplace free from unfair, deceptive, or fraudulent practices by investigating alleged law violations and, when appropriate, taking administrative enforcement action or seeking judicial enforcement. The following are GAO's comments on Commerce's letter dated October 12, 1994. 1. Although it is true that our review did not identify any conflicts of interest, it needs to be recognized that identifying such situations was not the specific purpose of our review. The focus of our review was to determine how well the controls over reimbursed travel were being implemented. 2. Commerce said that these issues arose with respect to only a few of its operating units. As stated in our scope and methodology section, we reviewed the procedures governing reimbursed travel at these offices because they accounted for 79 percent of the reimbursed travel by Commerce employees. 3. We understand Commerce's reluctance to have the Office of General Counsel audit all of the travel gift reports. We believe that obtaining receipts for all reimbursed expenses would improve the accuracy of reported amounts and preclude the need for a General Counsel audit. 4. We believe that FTC's favorable experience with securing letters from nonfederal sources listing the expenses to be paid prior to the trip demonstrates that this suggestion is feasible. However, we withdrew this draft recommendation in deference to Commerce and FTC views that the added administrative burden would outweigh the benefits of better assuring that the actual amount of reimbursed expenses was being reported to OGE. The following are GAO's comments on FTC's letter dated October 3, 1994. 1. With respect to FTC's concern that the draft report's title appeared to focus on the few negative aspects of our review, we modified the title to reflect that FTC and Commerce generally complied with requirements but some improvements are needed. 2. We disagree with FTC's contention that the potential for "double charging" does not exist. FTC said that the traveling employee would not receive a receipt for a travel-related expense paid in-kind by a nonfederal source and would, therefore, not have the documentation needed to charge the government for the same expense. Our review of FTC trips in 1993 found that employees had been able to obtain receipts for in-kind expenses for three of the trips. In addition, Commerce proposed in its comments that employees obtain receipts for in-kind expenses to better document travel expenses. While it appears that the employees can obtain receipts for some in-kind expenses, we believe that FTC's practice of obtaining a letter from the nonfederal source citing expenses to be paid is a sufficient internal control against an employee's double charging the government. 3. While it is true that there is no evidence that travelers inaccurately report in-kind payments received from nonfederal sources, there is also little evidence that they report accurately.
There were no receipts for 25 of the 28 trips by FTC employees in fiscal year 1993 that involved in-kind expenses. Nevertheless, we withdrew our draft recommendation regarding the verification letter, deferring to FTC's and Commerce's views that the added administrative burden may outweigh the benefit of improved controls over accurately reporting the receipt of in-kind expenses. Alan N. Belkin, Assistant General Counsel; James M. Rebbe, Attorney.
The Army is procuring medium tactical trucks—the 2.5- and 5-ton payload classes—to replace most of its current fleet. The truck replacement effort is known as the Family of Medium Tactical Vehicles (FMTV) program. The program is currently nearing the end of its first full-production contract. The Army plans to continue production with the same contractor for new model FMTV trucks. In addition, the Army plans to develop a second source to produce FMTV trucks. After the second source is selected, the current contractor and the second source will share annual production. The FMTV program is one of the Army's largest acquisition programs at a projected cost of $15.7 billion. From fiscal year 1991 through fiscal year 2022—a 32-year period—the Army plans to purchase 85,488 FMTV trucks to replace its aging medium truck fleet. The program consists of a family of 2.5- and 5-ton trucks based on a common truck cab and chassis. The 2.5-ton trucks, called light medium tactical vehicles, consist of cargo and van variants and a 2.5-ton trailer. The 5-ton trucks, called medium tactical vehicles, consist of seven variants—cargo, long wheelbase cargo, dump, fuel tanker, tractor, van, and wrecker—and a 5-ton trailer. The program is nearing the end of its first production contract. The contract was awarded on October 11, 1991, to Stewart & Stevenson Services, Inc., Houston, Texas. It was a $1.2-billion, 5-year, fixed-price production contract for the first 10,843 FMTV trucks. It did not include the production of the 5-ton fuel tanker and van variants or the cargo trailers. These vehicles will be included in later production contracts. Because of funding problems, the fifth year of the contract was extended over 3 years. The Army expects the contractor to complete production under this first contract in December 1998. The Army plans to continue FMTV production with the current contractor. The new contracts will comprise new models, called A1 models, of the FMTV truck variants produced under the original production contract and FMTV trailers. The contract award, however, was delayed until the Army resolved a major problem discovered on fielded FMTV trucks. Under certain operating conditions, the FMTV trucks' transmission flywheel housing can crack and, if undetected, can lead to a broken drive shaft. If the drive shaft breaks while the truck is operating at highway speeds, it can cause an accident. The Army decided not to award the follow-on production contract until this drive train problem was corrected and the correction was verified through testing. The Army successfully completed the testing of the proposed correction to the drive train problem, and the Secretary of Defense approved the award of the follow-on production contract in early October 1998. According to a project official, in order to maintain the planned production schedule while the drive train correction was being tested, the Army initially decided to separate the follow-on contract into two contracts—one that would be awarded immediately to produce new models of FMTV trucks and trailers to support a new production qualification test, and one, for full-rate production of the trucks and trailers, that would be awarded after the drive train correction was verified. A separate testing contract would allow the contractor to start preliminary work on the design of the new models without actually starting production until after the drive train problem was corrected.
Accordingly, on June 2, 1998, the Army awarded Stewart & Stevenson a $9.2-million contract for 15 FMTV trucks and 8 trailers to support the production qualification test of the new truck models. After the drive train testing was successfully completed, the Army, on October 14, 1998, awarded Stewart & Stevenson a $1.4-billion, 4-year production contract for 6,430 trucks and trailers, with an option year for an additional 2,920 trucks and trailers. The 5-ton fuel tanker and van were not included in the follow-on contract. These variants were not produced under the original production contract, and the Army had planned to include them in the follow-on contract. A project official said that they were not included in the follow-on contract because they were not as ready for production as originally thought. In November 1998, the Army plans to award Stewart & Stevenson a second FMTV production contract for these FMTV variants. This contract would be for enhancements to the designs of the two trucks, testing of the trucks, and production of 276 FMTV trucks—138 5-ton fuel tankers and 138 5-ton vans—at an estimated cost of $100 million. While the current contractor is producing under the follow-on contracts, the Army plans to develop a second source to produce FMTV trucks. Starting in fiscal year 2003, the Army plans to split FMTV truck production between the current contractor and a second source by competing production in 5-year increments. The winning contractor for each increment would receive a larger share of production under that increment. The Army plans to award the final 5-year production contract to one contractor in a winner-take-all competition in fiscal year 2018. Senator Harkin requested that we evaluate the Army's future acquisition plans for the FMTV program. To evaluate the Army's future FMTV acquisition plans, we interviewed Defense, Army, and contractor officials and reviewed the November 25, 1997, update to the FMTV acquisition strategy and plan, which provided a general description of the Army's future FMTV plans. However, we had to rely mainly on oral testimony for this evaluation because the Army's detailed plans were evolving at the time of our review and were therefore unavailable in written form. For example, at the start of our review, the Army planned to award one follow-on production contract to the current contractor; now the Army plans to award three follow-on contracts to the current contractor. Because the follow-on production contracts were being negotiated at the time of our review, we were unable to obtain copies of the contracts. Also, the Army had not finalized its detailed second-source plan; therefore, no detailed written second-source plans were available for our review. We interviewed the key project officials involved in developing the Army's follow-on contracts and second-source plans. We evaluated planned production quantities contained in the FMTV selected acquisition report, dated December 31, 1997, to determine whether it would be reasonable to expect benefits from splitting these quantities between two contractors. As part of our evaluation of future FMTV acquisition plans, we evaluated the Army's efforts under the current FMTV production contract. We interviewed Defense, Army, and contractor officials and reviewed various program documents, including the FMTV acquisition strategy and plan, the current production contract, source selection evaluations, budget documents, and selected acquisition reports.
We determined whether the contractor was consistently producing trucks within the quality standards set by the Army for FMTV trucks by analyzing the first-inspection acceptance rate of lots accepted by the government between July 1, 1997, and June 30, 1998, and by charting the number and type of defects found in the first inspection of lots accepted in 2 recent months. We did not include lots of five trucks or fewer in this analysis. We did not visit units that received FMTV trucks because the Defense Office of Inspector General was planning to evaluate FMTV trucks in the field; the Inspector General's audit was started but has been suspended because of higher-priority congressional request work. To provide an indication of the kinds of problems identified on fielded FMTV trucks, we reviewed selected weekly reports of deficiencies detected during the FMTV trucks' receiving inspections at the fielding locations and a summary of quality deficiency reports received by the FMTV project manager's office as of December 11, 1997. When an FMTV truck is received in the field, it is inspected before it is issued to the unit. The Army does not summarize the results of these inspections. At the time of our visit, we selected and reviewed the most recent receiving inspection reports. The reports covered 45 trucks inspected at Fort Bragg, North Carolina, during 4 weeks in July-August 1997. Because the reports did not differentiate between major and minor deficiencies, a government plant representative office quality specialist reviewed the reports and indicated which deficiencies were major deficiencies. The results of our review cannot be projected to all fielded FMTV trucks because we were unable to define the universe of reports. The official who had the reports said that he did not have all of them. Once the trucks are issued to the units, individual soldiers are supposed to complete a quality deficiency report whenever a problem is found in their trucks. We reviewed a summary of 286 quality deficiency reports received by the project office by December 11, 1997. However, a project official said that he does not believe that all the deficiencies on the FMTV trucks are being reported. Each report would have to be investigated to determine whether similar deficiencies were being reported differently and to determine the root cause of each deficiency. Such a determination was beyond the scope of our review. Our work was conducted at Defense and Army headquarters, Washington, D.C.; Defense Contract Management Command headquarters, Fort Belvoir, Virginia; FMTV project office, U.S. Army Tank-Automotive and Armaments Command, Warren, Michigan; Defense Contract Management Command, Stewart & Stevenson office, Sealy, Texas; and Tactical Vehicle Systems, Stewart & Stevenson Services, Inc., Sealy, Texas. We conducted our review between July 1997 and August 1998 in accordance with generally accepted government auditing standards. The current contract allowed the contractor to continue truck production even though the trucks were unable to pass testing and demonstrate that they met FMTV performance and reliability, availability, and maintainability requirements. Also, the Army relaxed its final acceptance inspection method from 100-percent inspections to a sampling inspection method without validating that the contractor's production processes were under statistical process control—a method of determining whether a contractor is consistently producing a product within the product's quality standards.
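As a rough illustration of the first-inspection acceptance-rate analysis described in our scope and methodology, the sketch below shows the computation involved; the lot records and field layout are hypothetical, and only the exclusion of lots of five trucks or fewer mirrors the approach described above.

```python
# Hypothetical sketch of the first-inspection acceptance-rate analysis;
# the lot records below are invented for illustration.

lots = [
    # (lot_id, trucks_in_lot, passed_first_inspection)
    (99, 50, True),
    (100, 50, False),
    (101, 50, False),
    (102, 4, True),   # excluded below: five trucks or fewer
]

eligible = [(lot_id, size, passed) for lot_id, size, passed in lots if size > 5]
accepted_first = sum(1 for _, _, passed in eligible if passed)
rate = accepted_first / len(eligible)
print(f"{accepted_first} of {len(eligible)} lots passed the first inspection "
      f"({rate:.0%}); {1 - rate:.0%} were rejected on the first inspection")
```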
Recent government inspection data indicates that the contractor is still not consistently producing trucks within the quality standards set for FMTV trucks. The Army does not know whether fielded FMTV trucks are performing adequately. It reports that FMTV trucks are doing well in the field but does not have data to support this assessment. FMTV trucks with major deficiencies have been received in the field, but data does not currently exist to determine the range and magnitude of these deficiencies. According to Army officials, the follow-on contracts will allow production to start before the new model trucks pass testing. Also, the Army plans to continue to accept new models under the relaxed final acceptance inspection method. The contractor took longer than expected to produce FMTV trucks that could pass the production qualification test and the operational test and demonstrate that they met FMTV performance and reliability, availability, and maintainability requirements. While this situation persisted, the contract allowed the contractor to continue producing trucks that did not meet requirements. These trucks required modifications to achieve satisfactory performance. The modification effort caused program delays because new production had to be stopped while the modifications were being made. The current contractor was not an experienced truck producer when the Army awarded it the FMTV production contract. The Army selected Stewart & Stevenson because the truck design it submitted was evaluated as the best design and its proposed price was the lowest. However, Stewart & Stevenson had not developed the FMTV truck design. It had subcontracted with an Austrian truck manufacturer, Steyr-Daimler-Puch AG, to design and develop the FMTV prototypes based on the design of a truck Steyr had produced for the Austrian army. During the prototype demonstration phase, Steyr also provided support that led to the selection of Stewart & Stevenson. Stewart & Stevenson did not continue its relationship with Steyr into the production phase of the FMTV program. It purchased a plant from a manufacturer of oil-drilling equipment, configured the plant to develop the FMTV production line, and established its Tactical Vehicle Systems Division to produce the FMTV trucks. The contractor experienced problems in developing its production line and producing trucks that met FMTV technical and operational requirements. The contract required the Army to conduct a production qualification test and an initial operational test and evaluation to determine whether the trucks met these requirements. The production qualification test was designed to determine whether the FMTV truck variants fulfilled the Army's technical performance and reliability, availability, and maintainability requirements and met contract specifications. The initial operational test and evaluation was designed to determine whether and to what degree the FMTV truck variants could accomplish their missions when operated and maintained by soldiers in the expected operational environment.
The Army began the operational test in October 1993 but suspended it in December 1993 because the trucks were not able to meet their operational reliability, availability, and maintainability requirements. The Army began a series of limited user tests in June 1994. These were unscheduled tests that used operational test personnel and were designed to help the contractor identify potential solutions to the trucks' continuing problems. In August 1994, the Army started a second operational test with those FMTV truck variants it thought had a chance of meeting operational requirements. It continued the limited user tests with the other variants. In September 1994, operational and limited user tests were suspended because test personnel were deployed on a peacekeeping mission in Haiti. According to Army test assessment officials, the trucks were not meeting reliability requirements at the time the operational test was suspended. In February 1995, the Army started a second production qualification test with improved and newly produced trucks that incorporated changes to address problems identified during earlier testing. In April 1995, the Army started a new operational test with new trucks that also incorporated the changes. It completed both tests in June 1995. The trucks were assessed as having met FMTV requirements in both tests. The Army did not attempt to limit the number of trucks produced before production qualification and operational testing was completed. We have reported many times in the past on the danger of entering production before adequate operational testing has been completed. Beginning production before adequate testing leads to program delays when the already produced systems must subsequently be modified to make them usable. This danger materialized during the current FMTV contract. The Army could have limited its risk by keeping deliveries to the minimum rate needed to complete testing and prove the production line. However, the contract allowed truck deliveries of up to 150 a month until the trucks passed testing. Later, the Army modified the contract to increase the monthly delivery limit to 200 trucks. According to a project official, the Army believed that increasing monthly delivery quantities would allow the contractor to catch up on its scheduled deliveries. Because the higher monthly delivery limit actually exceeded the contractor's production capability at that time, the contractor produced as many trucks as it could. However, the trucks it produced still could not meet FMTV technical and operational requirements. By the time the production qualification and operational tests were successfully completed in June 1995, the contractor had produced about 3,000 deficient trucks. The contractor had to perform varying levels of work to make the trucks conform to the specifications of those that had passed testing. About 1,474 trucks had to be disassembled to their frames and remanufactured. This additional work on the already produced trucks had a negative effect on the production of new trucks during the 9 months it took the contractor to make the changes to the 3,000 trucks. The contractor had to stop new truck production for 5 months and was able to produce only 175 new trucks in the remaining 4 months. The contract required the contractor to pay for the changes needed to make the trucks meet FMTV requirements.
After the FMTV trucks passed the production qualification and operational tests, the contractor was still unable to consistently produce trucks that met the FMTV program quality standards necessary to pass the government's final acceptance inspection. Despite this problem, the Army relaxed its final acceptance inspection method from 100-percent inspections to a sample inspection method and generally accepted the trucks after the contractor made two attempts to remedy defects. The Army did this without meeting the administrative precondition that the contractor demonstrate that its production processes were under statistical process control. The overall effect was to make it easier for FMTV trucks to pass final acceptance inspection. Initially, the government's plant representative at the FMTV production plant inspected each FMTV truck to determine whether it met the Army's quality standards. This 100-percent final acceptance inspection is standard procedure when a contractor produces a new product. Each lot that the contractor presented for final acceptance inspection usually consisted of 50 trucks. If one defect was found, the lot was not accepted, and the trucks were returned to the contractor for inspection and correction of the defects. The lot was reinspected until no defects were found by government inspectors. The plant representative office's quality letter of instruction required the 100-percent final acceptance inspection to continue until the contractor demonstrated that its production processes were under statistical process control. Statistical process control is a standard commercial practice in which production processes are monitored to see whether they consistently result in output within the quality standards set for the overall product. Once a process is producing consistently high-quality output, the process is considered to be under statistical process control. Once all processes are under statistical process control, the quality letter allows the government to perform the final acceptance inspections on a sampling basis. On April 19, 1996, the project office instructed the plant representative office to change its FMTV final acceptance inspection to a sampling method. The FMTV quality assurance representative who issued this instruction said that the change was made because the summary data provided by the contractor at monthly management meetings was improving—the contractor was finding more defects in its final inspection than the government. Under the new inspection method, a sample of 5 trucks from each lot of 50 trucks is inspected. If 1 major defect or 15 minor defects are found, the entire lot is returned to the contractor, which is required to inspect the entire lot and correct the defects. The lot is returned to the government, which draws another five-truck sample. The second time, however, the government inspects only for the defects found in the first sample. If the government again finds 1 major defect or 15 minor defects, the lot is rejected and returned to the contractor, which again inspects and corrects the defects. The government generally does not make a third final acceptance inspection. When the contractor provides documentation showing that it has inspected the lot and corrected the defects, the government accepts the lot. A Defense plant representative official said that the office has the option to inspect a lot more than two times but does so only in exceptional circumstances, such as when a lot has had many major defects.
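The two-inspection decision rule just described can be summarized in a short sketch; the function and threshold names below are ours, not the government's, and the sample defect counts are hypothetical.

```python
# A minimal sketch of the sampling inspection decision rule described
# above. Defect counts are totals across the five-truck sample drawn
# from a 50-truck lot.

MAJOR_LIMIT = 1    # lot fails if 1 or more major defects are found
MINOR_LIMIT = 15   # lot fails if 15 or more minor defects are found

def sample_fails(major_defects, minor_defects):
    return major_defects >= MAJOR_LIMIT or minor_defects >= MINOR_LIMIT

def lot_disposition(first_sample, second_sample=None):
    """Return the lot's outcome under the two-inspection rule."""
    if not sample_fails(*first_sample):
        return "accepted on first inspection"
    if second_sample is None:
        return "rejected on first inspection; lot returned for 100-percent rework"
    # The second five-truck sample is inspected only for the defect
    # types found in the first sample.
    if not sample_fails(*second_sample):
        return "accepted on second inspection"
    # After two rejections the government generally accepts the lot once
    # the contractor documents its own inspection and corrections.
    return "rejected twice; accepted on contractor's documentation"

# Hypothetical samples: (major defects, minor defects)
print(lot_disposition((0, 5)))             # clean enough to accept
print(lot_disposition((0, 25)))            # fails on minor defects
print(lot_disposition((2, 3), (0, 16)))    # fails both inspections
```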
We could find no evidence that the program office or the plant representative office had shown that the contractor's processes were under statistical process control at the time of the final acceptance inspection change. A government plant representative official said that the contractor had a 1-percent acceptance rate—1 percent of the trucks submitted to the government were acceptable—when the change was made. Recent government inspection data indicates that the contractor's production processes are still not under statistical process control and not consistently producing trucks within the quality standards set for FMTV trucks. Between July 1, 1997, and June 30, 1998, about 78 percent of the truck lots presented to the government for final acceptance inspection were rejected on the first inspection. As can be seen in figure 2.1, the number of major and minor defects found during the first inspection can vary greatly by lot, even in lots that were eventually accepted in April and May 1998. For example, the inspectors (1) found no major and 5 minor defects in lot 99, and the lot was accepted on the first inspection; (2) found no major and 25 minor defects in lot 100, and the lot was rejected because the inspectors found 15 or more minor defects; and (3) found 5 major and 15 minor defects in lot 101, and the lot was rejected for both the major and minor defects. The Army does not know whether fielded FMTV trucks are performing adequately. Army officials report that the FMTV trucks are doing well in the field, but the Army does not have adequate data to support this assessment. FMTV trucks with major deficiencies have been received in the field, but without more complete data, we cannot determine the magnitude of the problem. According to Army officials, FMTV trucks are doing well in the field. They base this assessment on (1) individual soldiers' statements that they are pleased with the trucks and (2) truck performance during comparison tests. Neither of these is a good measure of the FMTV truck's field performance. Testimonial evidence from individual soldiers is not a reliable way to determine how a new system is performing. The soldiers' positive statements about the trucks could be explained by the fact that the FMTV trucks have a modern design compared to the trucks they are replacing. The comparison test is designed to check whether the production trucks still meet the FMTV reliability, availability, and maintainability requirements. Periodically, the Army randomly selects two trucks from the production line to run a 10,000-mile reliability, availability, and maintainability test. The test is not designed to provide a measure of field performance. The Army could better support its claims if it collected data on fielded truck performance using its sample data collection method. Sample data collection is a method of selectively sampling field units to collect field maintenance and performance information on selected equipment. However, the Army is not currently collecting this data on FMTV trucks because the project office would have to fund the data collection effort. A project official said that the funds for the FMTV program should not be diverted for data collection because they are limited and are needed to produce additional trucks. The U.S. Army Cost and Economic Analysis Center is collecting data on fielded FMTV truck maintenance through its Operating and Support Management Information System.
This system reports operating and support costs, parts usage, and maintenance hours by system and is used to project future operating and support costs for budgeting and other planning purposes. However, the Center has not included FMTV trucks in its database because the trucks were only fielded in 1996. A Center official said that he expects to see some, but not much, data on FMTV trucks by the end of September 1998, when the database is updated. During our review, we found indications that the Army has received trucks in the field with major deficiencies. When an FMTV truck is received in the field, it is inspected before it is issued to the unit. The Army does not summarize the results of these inspections. To determine whether the receiving inspectors were finding problems that could have been found during the final acceptance inspection, we reviewed the most recent receiving inspection reports as of the date of our visit. The reports covered 45 trucks inspected at Fort Bragg, North Carolina, during 4 weeks in July-August 1997. Because the reports did not differentiate between major and minor deficiencies, a government plant representative office quality specialist reviewed the reports and indicated which deficiencies were major deficiencies. The receiving inspectors found deficiencies on every truck, although not every truck had a major deficiency. They found major fluid leaks, missing parts, inoperative lights and gauges, and reversed winch controls. In addition, once the trucks are issued to the units, individual soldiers are supposed to complete a quality deficiency report whenever a problem is found in their truck. As of December 11, 1997, the project office had received 286 quality deficiency reports. The Army had fielded about 4,500 FMTV trucks by that date. A project official said he does not believe that all deficiencies have been reported. Some deficiencies were reported more than once, and some of these were later found to be systemic deficiencies. For example, a broken drive shaft was reported on only two trucks; however, the Army has determined that all FMTV trucks have the potential for developing this problem. Examples of the deficiencies reported include starters failing, windows shattering when doors are closed, major fluid leaks, brakes failing, cab lift mechanisms failing, and alternators overheating. Under the follow-on contracts, the contractor will be producing new model trucks called A1 models. The trucks will be considered new models because they will have new engines that meet the current Environmental Protection Agency standards, new data bus systems—the wiring and other components through which data is transmitted—to enhance maintainability, antilock braking systems to improve braking, and galvanized steel cabs and other changes to improve corrosion protection. These new trucks will have to pass a new production qualification test consisting of a reliability, availability, and maintainability test of 20,000 miles per test truck and performance tests to demonstrate that the new trucks meet FMTV technical requirements. According to Army officials, the follow-on contracts will allow full-rate production to start before the new model trucks pass the production qualification tests. Also, the Army plans to continue the practice of accepting the new models under its relaxed final acceptance inspection methods. 
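The report does not describe how results from the 20,000-mile-per-truck reliability, availability, and maintainability test will be scored. Purely as a hypothetical illustration of the kind of reliability arithmetic such a test supports—the truck identifiers and failure counts below are invented, not program data—a fleet mean-miles-between-failures figure could be computed as follows.

```python
# Hypothetical illustration only: computes a fleet-level mean miles
# between failures (MMBF) from invented 20,000-mile test results.
# Neither the failure counts nor any scoring threshold comes from the
# FMTV program.

TEST_MILES_PER_TRUCK = 20_000

chargeable_failures = {  # invented test log: truck id -> failures
    "A1-01": 4,
    "A1-02": 6,
    "A1-03": 5,
}

total_miles = TEST_MILES_PER_TRUCK * len(chargeable_failures)
total_failures = sum(chargeable_failures.values())
mmbf = total_miles / total_failures
print(f"{total_miles:,} test miles, {total_failures} failures "
      f"-> fleet MMBF of {mmbf:,.0f} miles")
```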
Because the FMTV program experienced significant problems under the current production contract, the Army needs to implement safeguards to ensure that the government receives trucks that meet FMTV program quality standards under the follow-on production contracts. The current contract allowed the contractor to continue producing trucks during testing even though the trucks were unable to pass the tests and demonstrate that they met FMTV performance and reliability, availability, and maintainability requirements. These trucks required modifications to achieve satisfactory performance, and the modification effort caused program delays. In addition, the Army relaxed its final acceptance inspection methods from 100-percent inspections to a sampling inspection method without validating the contractor’s production processes. Recent government inspection data indicates that the contractor’s production processes are still not consistently producing trucks within the quality standards set for FMTV trucks. The Army does not know whether fielded FMTV trucks have quality problems. It reports that the trucks are doing well in the field, but it does not collect data needed to support this assessment. There is evidence that trucks with major deficiencies have been received in the field, but without more complete data, we cannot determine the magnitude of the problem. According to Army officials, the follow-on production contracts will allow the start of full-rate production before the new model trucks pass testing. The Army also plans to continue using the relaxed final acceptance inspection procedures to accept the new model trucks. This approach is the same as the one followed during the current production contract, which resulted in program delays and uncertainty over the quality of the fielded trucks. The Army has an opportunity to mitigate future program difficulties by instituting safeguards to ensure that the new model trucks pass testing before production and that the contractor consistently produces trucks that can meet FMTV technical and operational requirements. To improve management of the FMTV program under the current and follow-on contracts, we recommend that the Secretary of Defense direct the Secretary of the Army to fund a data collection effort to determine whether fielded FMTV trucks are performing satisfactorily and to direct government inspectors at the FMTV truck plant to return to 100-percent final acceptance inspection of FMTV trucks until the contractor demonstrates its production processes are under statistical process control. To provide a safeguard on the follow-on contracts that could preclude the type of problems that occurred under the current contract, we recommend that the Secretary of Defense direct the Secretary of the Army to include a clause in the follow-on production contracts that would delay the start of production until the new FMTV model trucks demonstrate that they meet FMTV performance and reliability, availability, and maintainability requirements. In commenting on a draft of this report, the Department of Defense said it partially concurred with our recommendations. It stated that the Army is currently using, to the maximum extent possible, data from existing databases such as the Operating and Support Management Information System and the FMTV weekly fielding site reports and is considering sample data collection as a fleet management tool if it is determined to be cost-effective. 
Regarding the final acceptance inspection, the Department said that correcting quality problems along the production line is more cost-effective than rejecting lots after they have been presented for acceptance. According to the Department, the current sampling program is catching discrepancies, demonstrating that sampling is working and therefore 100-percent inspection is not warranted. The Department also said that the Army will not authorize production on the follow-on contracts until it is satisfied that the vehicles will successfully pass production qualification testing. Additionally, the Department believes that it has the proper safeguards in place to preclude the problems experienced in the current contract and therefore does not believe that it is necessary to include a specific requirement in the follow-on contracts to delay the start of production until the trucks demonstrate they meet requirements. As we point out in this report, the FMTV weekly fielding site reports and existing databases, such as the Operating and Support Management Information System, at this time do not contain enough information for the program office to determine whether the fielded trucks are performing satisfactorily. The FMTV weekly fielding site reports would not be useful in determining whether the fielded FMTV trucks are performing satisfactorily because the site receiving inspections on which the reports are based are performed before the trucks are issued to the units; that is, before they can perform in the field. In this report, we used the data from the fielding site reports only to obtain an indication of whether the trucks were being received with major defects. Also, the Operating and Support Management Information System does not include data on FMTV trucks. While an Army official responsible for the information system said that some FMTV truck data would be included in the database when it is updated this year, he did not expect the FMTV data to be extensive. We therefore continue to believe that the Army needs to conduct sample data collection on the fielded FMTV trucks to make an adequate assessment of the trucks' field performance. We agree that building quality into the production process is more effective than inspecting it in at the end of production. However, as we stated in our report, the sampling program is identifying significant numbers of discrepancies at the end of the process. This indicates that the contractor's production processes are not building quality into the product. Sampling cannot be relied on until it has been established that the production processes are under statistical process control. Therefore, we believe that the contractor's production processes need to be brought under statistical process control, ensuring consistently high-quality output, before the 100-percent inspection prescribed by the project office is reduced. In its comments, the Department said it has proper safeguards to preclude the problems experienced in the current contract, but did not indicate what specific factors it will consider in its decision to authorize full-rate production. Under the follow-on contracts, the contractor will be producing FMTV trucks that will be significantly different from the original trucks. The Army awarded the first follow-on contract on October 14, 1998. We have not had an opportunity to review the contract.
However, we believe the Army's interests would be better protected if the production contract contained a specific requirement that full-rate production under the follow-on contracts would not start until the FMTV trucks pass production qualification testing under the testing contract.

The Army plans to compete future procurement of the FMTV trucks with the expectation that program costs can be reduced. Therefore, it has decided to develop a second source for the FMTV trucks. However, it has not performed an analysis to determine the costs and benefits of its plan or compared its plan with other alternatives, including (1) dividing the program into 5-year production increments and competing each increment among all qualified contractors, (2) delaying the development of the second source until funds are available to support both the current contractor and the second source without a fielding break, or (3) continuing with the current contractor for the remainder of the program. Our preliminary analysis of the production quantities that the contractors could expect to share from the competition indicates that the Army's plan will not result in program cost savings.

The FMTV acquisition plans call for the Army to develop a second source for the FMTV truck program. To develop the second source, the Army plans to award production qualification contracts to at least two contractors in fiscal year 1998. The contractors, using the existing FMTV performance specifications and technical data package as a reference, will produce two or three vehicles and compete them against each other. In fiscal year 2000, the Army plans to award the winning contractor a 3-year production contract for up to 800 trucks. Under this contract, the second-source contractor will produce the same models and variants of the trucks that the current contractor will be producing under the follow-on production contract. Starting in fiscal year 2003, the Army plans to compete subsequent FMTV production in 5-year increments. For each increment, the current contractor and the second-source contractor will compete to determine which contractor will receive the larger share of production. The Army has not determined the actual production split for the increments. It plans to award the final 5-year contract to one contractor.

Project officials said that developing a second source will initially result in higher program costs. Costs will increase because the Army will have to pay the costs incurred by the competing contractors to develop their versions of FMTV trucks and compete them. Additionally, the Army will have to pay the second-source contractor's costs for developing its production line and bringing it into full production. Project officials did not provide an estimate of the cost to develop the FMTV second source.

In its fiscal year 1999 budget request, the Army reduced the planned quantities the current contractor was to produce during the first 7 months of the follow-on contract from 422 trucks to 171 trucks—mainly 5-ton trucks—and 8 trailers. The Army recognized the cost impact of the lower quantities when it increased by about 76 percent—from $142,774 to $251,101—the estimated average cost of a 5-ton truck. Although the fiscal year 1999 budget request reflected a reduced buy of 5-ton vehicles, procurement costs for these vehicles increased by $17.2 million.
Additionally, the change in procurement quantities allowed the Army to reallocate part of its total fiscal year 1999 program procurement request to begin the second-source effort. This is another cost associated with developing the second source.

In addition, the low production quantities during the first 7 months of the follow-on contract will cause production and fielding breaks. Project officials said that the FMTV second-source plan precluded fielding breaks, as the current contractor would continue to produce trucks while the second source is being developed. However, the Army is planning a 3-month production break between the end of the current contract in December 1998 and the start of production under the follow-on contract in April 1999. A project official said that a 3-month production break will cause a 3-month fielding break. The production break will be caused by the low number of trucks the Army funded for the first 7 months of the follow-on production contract. Subsequent to the fiscal year 1999 budget request, the Army decided to split the follow-on contract into separate testing and production contracts. This split will further reduce the production quantities for the first 7 months of the follow-on contract to 156 trucks.

The Army did not compare the costs and benefits of its plan with those of other program alternatives, including (1) dividing the program into 5-year production increments and competing each increment among all qualified contractors, (2) delaying the development of the second source until funds are available to support both the current contractor and the second source without a fielding break, or (3) continuing with the current contractor for the remainder of the program.

A Stewart & Stevenson official said that under the current contract, the contractor is producing 375 to 400 FMTV trucks a month. He added that the contractor's economical production rate is 400 trucks a month; at that rate the contractor can avoid a price increase on the trucks. If the Army reduces the monthly production rate, truck prices will increase and therefore program costs will increase. The same contractor official said that the contractor's minimum sustaining rate is 160 trucks a month and that if the production quantities drop to that number, the Army could expect a price increase close to 10 percent. The Army's fiscal year 1999 budget request for the FMTV program shows the contractor's economical production rate as 350 trucks a month and the minimum sustaining rate as 150 trucks a month. However, a project official said that the budget rates were developed when the contract was awarded and that the contractor's rates were reasonable and more current.

We analyzed a potential 60-40 percent production quantity split under the Army's plan and compared the monthly production quantities each contractor would receive to the current contractor's production rates. Our preliminary analysis indicates that the current contractor will not be able to reduce its costs even if it wins the larger share of the production quantities because the larger share will be at or near its minimum sustaining rate. Although the Army has not determined how it will divide production between the two contractors, we based our analysis of the potential split of FMTV production on a 60-40 percent ratio because the Army has used this ratio for planning purposes.
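To make the arithmetic of this comparison concrete, the following minimal sketch (in Python) computes each contractor's monthly share under a 60-40 split. The annual quantities used here are hypothetical placeholders, not the program's projected quantities (those appear in table 3.1); the 160- and 400-truck monthly rates are the contractor figures cited above.

    # Illustrative sketch only; the annual buys below are hypothetical.
    MIN_SUSTAINING_RATE = 160  # trucks/month, contractor's stated minimum
    ECONOMICAL_RATE = 400      # trucks/month, contractor's stated optimum

    hypothetical_annual_buys = [3200, 3400, 3000]

    for year, total in enumerate(hypothetical_annual_buys, start=1):
        winner_monthly = 0.60 * total / 12
        loser_monthly = 0.40 * total / 12
        note = ""
        if winner_monthly <= MIN_SUSTAINING_RATE:
            # Even the larger share sits at or below the minimum
            # sustaining rate, so a price increase is likely.
            note = " (winner at or below minimum sustaining rate)"
        print(f"Year {year}: winner {winner_monthly:.0f}/month, "
              f"loser {loser_monthly:.0f}/month{note}")

Note that, at a 60-40 split, the winner's monthly quantity falls to the 160-truck minimum sustaining rate whenever the annual program buy falls to about 3,200 trucks (160 x 12 / 0.6), which is the condition underlying our conclusion.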
Table 3.1 shows the total projected annual and monthly production quantities for the FMTV program and the annual and monthly quantities for each contractor based on a 60-40 percent ratio. As the table indicates, if two contractors compete for planned production quantities based on a 60-40 percent ratio, the current contractor would produce, in most years, at or near its monthly minimum sustaining rate of 160 trucks even if it won the larger share of production in all years.

It will be difficult for the current contractor to reduce its price to the Army at these quantities because its FMTV production plant is dedicated solely to FMTV production and was built to produce up to a maximum of 525 trucks per month based on an 8-hour work shift, 5 days a week. When Stewart & Stevenson won the first production contract, the Army's acquisition plan did not contain plans for developing a second source. Stewart & Stevenson's fixed costs at its FMTV production plant must be covered by its FMTV contracts, and these fixed costs therefore limit the amount of price reduction the contractor can give the Army. According to a Stewart & Stevenson official, a monthly rate of 160 trucks would cause about a 10-percent increase in the price of the trucks, not a price reduction.

We were unable to estimate the effect the production split would have on the prices the second-source contractor would give the Army. The second-source contractor may be able to optimize its FMTV truck production at lower rates than the current contractor. There are several possible scenarios. For example, if the second-source contractor is a truck producer, and if it could add FMTV truck production to a plant in which it produces other trucks, it could share the plant's fixed costs with other contracts. This would tend to reduce the fixed costs attributed to the FMTV contracts and lower the second-source contractor's minimum sustaining rate, allowing it to lower the FMTV price.

To reduce costs, the Army plans to introduce competition into the FMTV program by developing a second source to produce FMTV trucks. The current contractor and second source will share the annual production. It is not clear whether the Army's plan to split production of FMTV trucks between two contractors will result in cost savings. The Army has not performed a cost and benefit analysis to justify its plan. A cost and benefit analysis could determine whether, for example, the financial benefits of adding a second source would offset the investment of bringing a second contractor into full production and could compare the costs and benefits of the Army's plan with other alternatives.

To ensure that the Army considers all its options before it starts to develop a second source for the FMTV, we recommend that the Secretary of Defense direct the Secretary of the Army to delay the Army's plans for developing a second source to produce FMTV trucks until the Army completes an analysis that compares the costs and benefits of its plans with those of other alternatives and to pursue the alternative that is most beneficial to the government.

In commenting on a draft of this report, the Department of Defense partially concurred with our recommendation. It said that the Army is conducting an FMTV second-source contractor cost and benefit analysis as directed by the Congress. The fiscal year 1999 Defense Authorization Act required the Secretary of the Army to conduct a cost and benefit analysis prior to contracting with a second source for FMTV trucks.
The analysis is to support certifications by the Secretary of the Army that (1) total FMTV quantities will be sufficient to enable the prime contractor to maintain a minimum economic production level; (2) total costs of the procurements under the second-source plan will be the same or lower than if the Army proceeds with only one contract; and (3) vehicles produced by both contractors will have common, interchangeable components.

The Army's plan to conduct an FMTV cost and benefit analysis is a step in the right direction; however, according to an Army official, the Army's analysis will compare the costs and benefits of only two acquisition approaches—the current FMTV second-source plan and continuing with the current contractor for the remainder of the program. Since other alternative acquisition approaches for the program exist, we believe that, as a minimum, the Army should explore the other alternatives. The Army should select the acquisition alternative that is most cost-beneficial to the government for continuing the FMTV program.
DOD is one of the largest and most complex organizations in the world. In fiscal year 2003, DOD reported that its operations involved over $1.1 trillion in assets, over $1.5 trillion in liabilities, approximately 3.3 million military and civilian personnel (including guard and reserve components), and disbursements of over $416 billion. Execution of these operations spans a wide range of defense organizations, including the military services and their respective major commands and functional activities, numerous large defense agencies and field activities, and various combatant and joint operational commands that are responsible for military operations for specific geographic regions or theaters of operations.

To execute these military operations, the department performs an assortment of interrelated and interdependent business functions, including logistics management, procurement, health care management, and financial management. To support its business functions, DOD reported in April 2003 that it relied on about 2,274 business systems, including accounting, acquisition, logistics, and personnel systems. To support its existing systems environment, DOD requests billions of dollars annually.

The Assistant Secretary of Defense for Networks and Information Integration—DOD's CIO—is responsible for compiling and submitting the department's IT budget reports to Congress and the Office of Management and Budget (OMB). According to a DOD CIO official, the information in the IT budget request is initially prepared by various DOD components and processed through their respective CIOs and comptrollers. The information is then forwarded to the DOD CIO office, where it is consolidated before being sent to OMB and Congress. The DOD component CIOs and comptrollers are responsible for, and are required to certify, the reliability of the information about their respective initiatives that is included in the IT budget request.

DOD continues to confront pervasive, decades-old financial and business management problems related to its systems, processes (including internal controls), and people (human capital). These problems have (1) resulted in a lack of reliable information needed to make sound decisions and report the status of DOD's activities through financial and other reports; (2) hindered its operational efficiency; and (3) left the department vulnerable to fraud, waste, and abuse. For example:

Of the 481 mobilized Army National Guard soldiers from six GAO case study Special Forces and Military Police units, 450 had at least one pay problem associated with their mobilization. DOD's inability to provide timely and accurate payments to these soldiers, many of whom risked their lives in recent Iraq or Afghanistan missions, distracted them from their missions, imposed financial hardships on the soldiers and their families, and has had a negative impact on retention.

Some DOD contractors have been abusing the federal tax system with little or no consequence, and DOD is not collecting as much in unpaid taxes as it could. Under the Debt Collection Improvement Act of 1996, DOD is responsible—along with the Department of the Treasury—for offsetting payments made to contractors to collect funds owed, such as unpaid federal taxes. However, we found that DOD had collected only $687,000 of unpaid taxes as of September 2003. We estimated that at least $100 million could be collected annually from DOD contractors through effective implementation of the levy and debt collection program.
Our review of fiscal year 2002 data revealed that about $1 of every $4 in contract payment transactions in DOD's MOCAS system was for adjustments to previously recorded payments—$49 billion of adjustments out of $198 billion in disbursement, collection, and adjustment transactions. According to DOD, the cost of researching and making adjustments to accounting records was about $34 million in fiscal year 2002, primarily to pay hundreds of DOD and contractor staff.

Tens of millions of dollars are not being collected each year by military treatment facilities from third-party insurers because key information required to effectively bill and collect from third-party insurers is often not properly collected, recorded, or used by the military treatment facilities.

The long-standing problems continue despite the significant investments made in DOD business systems each year. The challenges continue, in part, because of DOD's inability to effectively modernize its business systems. For example, our March 2003 report and testimony concluded that DOD had not effectively managed and overseen its planned investment of over $1 billion in four DFAS systems modernization efforts. DOD has terminated two of the four DFAS systems modernization projects—the Defense Procurement Payment System (DPPS) and the Defense Standard Disbursing System (DSDS). The DOD Comptroller terminated DPPS in December 2002 after more than 7 years of effort and an investment of over $126 million, citing poor program performance and increasing costs. DFAS terminated DSDS in December 2003 after approximately 7 years of effort and an investment of about $53 million, noting that a valid business case for continuing the effort could not be made. These two projects were planned to provide DOD the capability to address some of its long-standing contract and vendor payment problems.

Since 1990, we have identified DOD's management of secondary inventories (spare and repair parts, clothing, medical supplies, and other items to support the operating forces) as a high-risk area. One primary factor contributing to DOD's inventory management weaknesses is its outdated and ineffective systems. These system deficiencies have hindered DOD's ability to (1) support its reported inventory balances; (2) provide inventory visibility; and (3) provide accurate financial and management information related to its property, plant, and equipment. For example:

DOD incurred substantial logistical support problems as a result of weak distribution and accountability processes and controls over supplies and equipment shipments in support of Operation Iraqi Freedom activities, similar to those encountered during the prior Gulf War. These weaknesses resulted in (1) supply shortages, (2) backlogs of materials delivered in theater but not delivered to the requesting activity, (3) a discrepancy of $1.2 billion between the amount of materiel shipped and that acknowledged by the activity as received, (4) cannibalization of vehicles, and (5) duplicate supply requisitions.

Inadequate asset visibility and accountability resulted in DOD selling new Joint Service Lightweight Integrated Suit Technology—the current chemical and biological protective garment used by our military forces—on the Internet for $3 each (coat and trousers) while at the same time buying them for over $200 each. DOD has acknowledged that these garments should have been restricted to DOD use only and therefore should not have been available to the public.
Our analysis of data on more than 50,000 maintenance work orders opened during the deployments of six battle groups indicated that about 29,000 orders (58 percent) could not be completed because the needed repair parts were not available on board ship. This condition was a result of inaccurate ship configuration records and incomplete, outdated, or erroneous historical parts demand data. Such problems not only have a detrimental impact on mission readiness, they may also increase operational costs due to delays in repairing equipment and holding unneeded spare parts inventory.

Transformation of DOD's business systems and operations is critical to the department having the ability to provide Congress and DOD management with accurate and timely information for use in the decision-making process. One of the key elements we have reported as necessary to successfully execute the transformation is establishing and implementing an enterprise architecture. In this regard, the department has undertaken a daunting challenge to modernize its existing business systems environment through the development and implementation of a BEA or modernization blueprint. This effort is an essential part of the Secretary of Defense's broad initiative to "transform the way the department works and what it works on." As previously noted, the department has designated seven domain owners to be responsible for implementing the BEA, which includes (1) performing system reviews and approving initiative funding as part of investment management and (2) enforcing compliance with the BEA.

In April 2003, DOD reported that its business systems environment consisted of 2,274 systems and systems acquisition projects spanning numerous business operations; DOD divided these systems among the seven domains and established a domain leader for each area. DOD's efforts to manage the modernization initiative include a strategy to vest the domains with the authority, responsibility, and accountability for business transformation, extension and implementation of the architecture, and investment management. We have also recommended that DOD establish an investment management structure to gain control over business system investments by (1) establishing a hierarchy of investment review boards from across the department, (2) establishing a standard set of investment review and decision-making criteria for its ongoing IT system projects, and (3) directing the boards to perform a comprehensive review of all ongoing business system investments.

Two of the business systems modernization efforts DOD has under way to address some of its inventory problems are DLA's BSM and the Army's LMP. These two business systems represent approximately 19 percent of the $770 million of modernization funding requested in fiscal year 2004 for logistics systems. DLA and the Army are using the same commercial off-the-shelf (COTS) enterprise resource planning software package, and both are using the inventory management portion of the package.

BSM. In November 1999, DLA initiated an effort to replace its materiel management systems—the Standard Automated Materiel Management System (SAMMS) and the Defense Integrated Subsistence Management System—with BSM. DLA has used the two existing systems for over 30 years to manage its inventory. BSM is intended to transform how DLA conducts its operations in five core business processes: order fulfillment, demand and supply planning, procurement, technical/quality assurance, and financial management.
BSM was deployed in July 2002 and is operating at the Defense Supply Center Columbus—Columbus, Ohio; the Defense Supply Center Philadelphia—Philadelphia, Pennsylvania; the Defense Supply Center Richmond—Richmond, Virginia; the Defense Distribution Center—New Cumberland, Pennsylvania; the DLA Logistics Information Service—Battle Creek, Michigan; and DLA headquarters—Fort Belvoir, Virginia. The initial deployment included low-volume, low-dollar-value items. BSM has about 900 users and is populated with over 170,000 inventory items valued at about $192 million. Once it becomes fully operational, BSM is expected to have about 5,000 users and to control and account for about 5 million inventory items valued at about $12 billion. DLA currently estimates that it will invest approximately $850 million to fully deploy BSM.

LMP. In February 1998, the U.S. Army Materiel Command (AMC) began an effort to replace its existing materiel management systems—the Commodity Command Standard System and the Standard Depot System—with LMP. The Army has used the existing systems for over 30 years to manage its inventory and depot maintenance operations. LMP is intended to transform AMC's logistics operations in six core processes: order fulfillment, demand and supply planning, procurement, asset management, materiel maintenance, and financial management. LMP is being acquired under a 12-year requirements contract. LMP became operational at the U.S. Army Communications and Electronics Command (CECOM), Fort Monmouth, New Jersey, and Tobyhanna Army Depot, Tobyhanna, Pennsylvania, in July 2003. The initial deployment of LMP consisted of inventory items such as electronics; electronic repair components; and communications and intelligence equipment such as night vision goggles, electronic components such as circuit boards, and certain munitions such as guidance systems included in missiles. Currently, LMP has 4,500 users at 12 locations and is populated with over 2 million inventory items valued at about $440 million. When LMP is fully implemented, it is expected to support more than 15,000 users at 149 locations and to be populated with 6 million Army-managed inventory items valued at about $40 billion. The Army currently estimates that it will invest approximately $1 billion to fully deploy LMP.

For fiscal year 2004, DOD requested approximately $28 billion in IT funding to support a wide range of military operations as well as DOD business system operations, of which approximately $18.8 billion is for the reported 2,274 business systems—$4.8 billion for business systems development/modernization and about $14 billion for operation and maintenance. As shown in figure 1, the $28 billion is spread across the military services and defense agencies. The $28 billion represents a $2 billion increase over fiscal year 2003. The department's business systems are used to record the events associated with DOD's functional areas, such as finance, logistics, personnel, and transportation. Table 1 shows how business system funding is spread across the various DOD components.

OMB requires that funds requested for IT projects be classified as either steady state (referred to by DOD as "current services") or as development/modernization. Current services are funds for operating and maintaining systems at current levels (i.e., without major enhancements). The development/modernization budget category represents funds for developing new IT systems or making major enhancements to existing systems.
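As a rough illustration of this two-way classification, the sketch below tallies hypothetical budget lines by category; the system names and dollar figures are invented for illustration, and only the category definitions come from the budget materials.

    # Hypothetical budget lines; amounts are in millions of dollars.
    # Each initiative may carry funds in both OMB categories:
    # current services (steady state) and development/modernization.
    budget_lines = [
        {"system": "System A", "current_services": 45.0, "dev_mod": 32.0},
        {"system": "System B", "current_services": 12.0, "dev_mod": 0.0},
        {"system": "System C", "current_services": 0.0,  "dev_mod": 8.5},
    ]

    total_cs = sum(line["current_services"] for line in budget_lines)
    total_dm = sum(line["dev_mod"] for line in budget_lines)
    print(f"Current services total: ${total_cs:.1f}M")
    print(f"Development/modernization total: ${total_dm:.1f}M")

DLA's fiscal year 2004 request, discussed next, is a real instance of a component carrying funds in both categories.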
Some systems, such as BSM, have both current services and development/modernization funding. For BSM, while current services are to be used for operating the system at various DLA locations, development/modernization funds are to be used for activities such as developing additional system functionality. For fiscal year 2004, DLA's IT budget request, including BSM, was $452 million for current services and $322 million for development/modernization. Generally, current services are financed through the Operation and Maintenance appropriations, whereas development/modernization funding can come from any one or combination of several funding sources, such as the Research, Development, Test, and Evaluation appropriations; the Procurement appropriations; or the Defense Working Capital Fund.

As part of DOD's ongoing business systems modernization efforts, DOD's Business Management Modernization Program (BMMP) and Business Management and Systems Integration (BMSI) office are creating a repository of the department's existing business systems. DOD reported that as of April 2003, this environment consisted of 2,274 systems and system acquisition projects. To provide for investment management, DOD assigned the systems to the seven domains. For example, DOD assigned 565 systems to the logistics domain, 210 of which primarily perform inventory functions and 32 of which perform transportation functions. Similarly, the accounting and finance domain has 542 systems, of which 240 primarily perform finance and accounting functions. Table 2 presents the composition of DOD's reported business systems by domain and functional area.

Table 2 clearly indicates that there are numerous redundant systems operating in the department today. For example, DOD has reported that it has 16 vendor pay systems that are used to pay contractors for services provided. A further illustration is the department's statement that the Defense Integrated Military Human Resources System, which is to serve as DOD's integrated military personnel and pay system, will replace a reported 79 existing systems.

BMSI officials stated that they are validating the 2,274 different systems and related functional area categories, as illustrated in table 2, with the domains. Although the systems are different, functional area categories may be the same among the domains. For example, the Accounting and Finance and Strategic Planning and Budgeting domains both report having systems that perform finance and accounting functions. BMSI officials have stated that through the BMSI office's validation efforts, the functional area categories may be renamed or systems may be reclassified to other functional areas. For example, BMSI officials explained that the finance and accounting functional area within the Strategic Planning and Budgeting domain may be changed to Budgetary Financial Data.

Although the BMSI office has created an initial repository of 2,274 business systems to support DOD's systems modernization efforts, its systems inventory is currently neither complete nor informative enough for decision making. For example, according to logistics domain officials, there are currently about 3,000 systems just within the logistics domain. Of that amount, about 1,900 systems have been validated by the DOD components as logistics systems—that is, they are not merely a spreadsheet or a report. Such a determination has not been made for the other 1,100.
Our analysis showed that of the 1,900 systems, 253 are included in DOD's reported 2,274 business systems. According to logistics domain officials, they are in the process of determining whether the remaining systems should be classified as business systems or national security systems.

The BMSI office has not reported additional systems since April 2003 because it is continuing to reconcile its inventory with two other databases—the IT Registry and the Information Technology Management Application (ITMA). This reconciliation is necessary because the three databases are not integrated. The IT Registry is a database of mission-critical and mission-essential IT systems maintained by the DOD CIO. As reported by the DOD Inspector General (IG), each DOD component could determine whether a system should be reported as mission critical or mission essential in the IT Registry. Since the definitions were subject to interpretation, the DOD IG concluded that the IT Registry would not necessarily capture the universe of DOD business systems. The ITMA is an application used by the DOD CIO to collect system information for the development of the department's annual IT budget request. Each of these databases—the IT Registry, the ITMA, and the BMMP systems inventory—contains varying information, some of which overlaps. For example, the IT Registry includes warfighting systems as well as some business systems, while the BMMP inventory includes only systems related to the department's business operations. The ITMA includes initiatives and programs, such as the department's BEA effort, that are not IT systems.

Although DOD recognizes that it needs an integrated repository of systems information in order to control and prioritize its IT investments, the difficulty of developing a single source is compounded by the fact that DOD has not developed a universal definition of what should be classified as a business system. Lacking a standard definition that is used consistently across the entire department, DOD does not have reasonable assurance that it has identified all of its business systems. As a result, DOD does not have complete visibility over its business systems to permit analysis of gaps and redundancies in DOD's business systems environment and to assist in preventing the continuing proliferation of redundant, stovepiped business systems.

Furthermore, DOD cannot provide reasonable assurance to Congress that its IT budget request includes all funding for the department's business systems. For example, we reported in December 2003 that DOD's IT budget submission to Congress for fiscal year 2004 contained material inconsistencies, inaccuracies, or omissions that limited its reliability. We identified discrepancies totaling about $1.6 billion between two primary parts of the submission—the IT budget summary report and the detailed capital investments reports on each IT initiative. These problems were largely attributable to insufficient management attention and limitations in departmental policies and procedures, such as guidance in DOD's Financial Management Regulation, and to shortcomings in systems that support budget-related activities.

DOD continues to lack effective management oversight and control over business systems modernization investments.
While the domains have been designated to oversee business systems investments, the actual funding, as shown in table 1, continues to be spread among the military services and defense agencies, thereby enabling the numerous DOD components to continue to develop stovepiped, parochial solutions to the department's long-standing financial management and business operation challenges. Furthermore, the department does not have reasonable assurance that it is in compliance with the fiscal year 2003 defense authorization act, which provides that obligations in excess of $1 million for systems improvements may not be made unless the DOD Comptroller determines that the improvements are in accordance with the criteria specified in the act. Lacking a departmentwide focus and effective management oversight and control of business systems investment, DOD continues to invest billions of dollars in systems that fail to provide integrated corporate solutions to its business operation problems.

In response to our September 2003 report, DOD said that it was taking several actions to improve the control and accountability over business systems investments. However, as of March 2004, many of these actions had not been finalized. As a result, the department has not put into place the organizational structure and process controls needed to adequately align business system investments with the BEA. Each DOD component continues to make its own investment decisions, following different approaches and criteria. The lack of an institutionalized investment strategy has contributed to the department's current complex, error-prone, nonintegrated systems environment and precluded the development of corporate system solutions to long-standing business problems. In particular, DOD has not clearly defined the roles and responsibilities of the domains, established common investment criteria, or conducted a comprehensive review of its ongoing IT investments to ensure that they are consistent with the BEA.

As we have previously reported, best practices recommend that investment review boards be established to control an entity's systems investments and that the boards use a standard set of investment review and decision-making criteria to ensure compliance and consistency with the architecture. We have also recommended that the department establish investment review boards to better control investments and that each board be composed of representatives from across the department. DOD has decided that, in lieu of the investment review boards, the domains will be responsible for investment management. In March 2004, the Deputy Secretary of Defense signed an IT portfolio investment management policy and assigned overall responsibility to the domains. However, the specific roles and responsibilities of the domains have not been formalized, and standard criteria for performing systems reviews have not been finalized. According to DOD officials, the related detailed directive and instructions will outline the specific roles and responsibilities of the domains and how they are to be involved in the overall business systems investment management process. The department is drafting a memorandum that will require the domains to develop a plan for implementing the investment management policy. Further, the department has developed draft system review and certification process guidance that outlines the criteria that are to be used by the domains and program managers to assess system compliance with the BEA.
The systems covered in the review process consist of new system initiatives, ongoing system development projects, and systems in sustainment. According to DOD, once a system is placed in sustainment, modernization funding cannot exceed $1 million. The system review and certification process guidance has been integrated with the department's existing acquisition guidance—commonly referred to as the DOD 5000 series. The acquisition guidance requires that certain documentation be prepared at different stages—known as milestones—within the system's life-cycle process. This documentation is intended to provide relevant information for management oversight and for decision making on whether the investment of resources is cost beneficial and technically feasible. DOD officials noted that the system review process would be further enhanced because the DOD Comptroller will have to certify at each milestone decision that the proposed investment is consistent and aligned with the BEA. According to DOD, the certification process will help ensure that obligations of funds of over $1 million for the modernization of a system are in accordance with the criteria set forth in the fiscal year 2003 defense authorization act.

While these actions are aimed at improving the control and accountability over business systems investments, we have previously reported that the department did not adhere to the milestone decision-making and oversight processes it established to ensure that the economic and technical risks associated with systems modernizations have been mitigated. For example, our March 2003 report noted that DOD had not effectively managed and overseen its planned investment of over $1 billion in four DFAS system modernization efforts. One project's estimated cost had increased by as much as $274 million, while its schedule slipped by almost 4 years. For each of these projects, DOD oversight entities—DFAS, the DOD Comptroller, and the DOD CIO—could not provide documentation indicating that they had questioned the impact of the cost increases and schedule delays, and they allowed the projects to proceed in the absence of the requisite analytical justification. Such analyses provide the requisite justification for decision makers to use in determining whether to invest additional resources in anticipation of receiving commensurate benefits and mission value. Two of the four projects—DPPS and DSDS—were terminated in December 2002 and December 2003, respectively, after an investment of approximately $179 million that did not improve the department's business operations.

While DOD is continuing to work toward establishing the structure and processes to manage its business systems investments, it has not yet conducted a comprehensive review of its ongoing IT investments to ensure that they are consistent with its BEA efforts. The domains have raised concerns that they did not have sufficient staff to perform the system reviews. To assist the domains with their system reviews, in December 2003, the Deputy Secretary of Defense allotted the domains 54 additional staff. Despite concerns over the sufficiency of staff resources and the lack of an organizational structure and processes for controlling system investments, the department has acted to curtail the funding for some systems. For example, effective October 2003, the DOD Comptroller directed that the Defense Joint Accounting System (DJAS) be put into sustainment.
That is, funding would be provided to operate and maintain the system, but not to upgrade or modernize it. In June 2000, the DOD IG reported that DFAS was developing DJAS at an estimated life-cycle cost of about $700 million without demonstrating that the program was the most cost-effective alternative for providing a portion of DOD's general fund accounting. DJAS is operating at only two locations—Fort Benning, Georgia, and the Missile Defense Agency—and there are no longer any plans to implement the system at other locations.

Another system that DOD has placed into sustainment is the Joint Computer Aided Acquisition and Logistics Support (JCALS) system. JCALS was initiated in June 1992 to enable the services to streamline DOD's logistical and acquisition functions through business process reengineering and the elimination of existing systems. In May 2003, Gartner, Inc., reviewed the cost, efficiency, and effectiveness of JCALS and reported that the program is costly to operate and maintain. The study recommended freezing all software and technology spending. According to DOD's fiscal year 2004 IT budget, over $1 billion had been invested in JCALS since the inception of the program.

Placing DJAS and JCALS in sustainment is a step in the right direction. However, a comprehensive review of all modernization efforts, performed before substantial money has been invested, would reduce the risk of continuing the department's track record of business systems modernization efforts that cost more than anticipated, take longer than expected, and fail to deliver intended capabilities.

Further, in developing the fiscal year 2005 budget request, the DOD Comptroller denied DFAS's request for approximately $32 million for the development of an accounting and budget execution system. The DOD Comptroller appropriately noted that there should not be investments in a new system before the domains define the requirements and the system is justified through the appropriate DOD approval process. The DOD Comptroller also denied DFAS's request for funding of the Disbursing Transformation Program, a proposed $41 million initiative through fiscal year 2009. According to DFAS, the program was to be funded from resources that were budgeted for DSDS, which, as previously mentioned, was terminated in December 2003. The DOD Comptroller noted that the department should not pay for salaries, software development, and systems modernization for a disbursing system before disbursing functionality is defined according to the BEA. The DOD Comptroller further stated that it is premature for DFAS to create a new disbursing system when it cannot explain any of the program's requirements in broad or detailed terms and numerous disbursing systems already exist.

It is encouraging to see the DOD Comptroller acting to eliminate budget requests by DFAS for systems that are not justified. However, DFAS, which is under the auspices of the DOD Comptroller, represents a very small percentage—slightly over 2 percent ($103 million of $4.8 billion)—of the total modernization funding. Given that the department lacks a comprehensive inventory of its business systems, it is unknown how many other modernization projects should be questioned. Moreover, because the roles and responsibilities of the domain owners have not been clarified, the domains have not been empowered to make investment decisions similar to those of the DOD Comptroller.
As we have previously recommended, the department needs to assess its current systems and limit its current investments to (1) deployment of systems that have already been fully tested and involve no additional development or acquisition cost; (2) stay-in-business maintenance needed to keep existing systems operational; (3) management controls needed to effectively invest in modernized systems; and (4) new systems or existing system changes that are congressionally directed or are relatively small, cost-effective, and low risk and can be delivered in a relatively short time frame.

As noted in our September 2003 report, DOD had not yet defined and implemented an effective approach for selecting and controlling business system investments. Absent the rigors of these stringent criteria, DOD will continue to invest in systems that perpetuate its existing incompatible, duplicative, and overly costly systems environment that does not optimally support mission performance.

DOD has not yet defined and implemented an effective investment management process to proactively identify and control system improvements exceeding $1 million in obligations. DOD officials have acknowledged that the department does not have a systematic means to identify and determine which systems improvements should be submitted to the DOD Comptroller for review and, in essence, depends on system owners coming forward to the domain owners and requesting approval. DOD was unable to provide us with comprehensive information on all systems improvements with obligations greater than $1 million since passage of the act. However, based upon limited information provided by the military services for fiscal years 2003 and 2004, we found that modernizations with obligations totaling at least $479 million were not submitted to the DOD Comptroller for any factual determination.

The act states that, as a condition of making any obligation in excess of $1 million for system improvements, the obligation must be reviewed by the DOD Comptroller, who must determine whether the request is in accordance with criteria specified in the act. To comply with the legislative requirement, the DOD Comptroller issued a memorandum on March 7, 2003, to DOD's component organizations stating that the BMSI office—which is responsible for overseeing the development and implementation of the BEA—must review all system improvements with obligations in excess of $1 million. In addition, the memorandum directs the DOD components, as an integral part of the review and approval process, to present information to DOD Comptroller officials and relevant domain owners that demonstrates that each investment (1) complies with the BEA and (2) is economically justified. To support that the investment is economically justified, information on the cost and benefit and return on investment, including the break-even point, must be provided.

DOD officials acknowledge that the department could utilize the IT budget to assist in the identification of systems that could be subject to the act's requirements. While we recognize that this is budgetary data, rather than the obligational data referred to in the act, this information could provide a starting point for the domains in identifying potential projects that should be submitted to the DOD Comptroller. For example, we analyzed the DOD IT budget request for fiscal years 2003 through 2005 and identified over 200 systems in each year's budget, totaling over $4 billion per year, that could involve obligations of funds that exceed the $1 million threshold.
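The screen we applied can be thought of as a simple filter followed by a set comparison, as in the minimal sketch below; the system names and dollar amounts are hypothetical, and only the $1 million threshold comes from the act.

    # Hypothetical illustration of the screening approach: flag budget
    # lines whose funding could exceed the act's $1 million obligation
    # threshold, then subtract the systems the DOD Comptroller reviewed,
    # leaving improvements that may not have been submitted for review.
    THRESHOLD_MILLIONS = 1.0

    budget_request = {            # system -> modernization funding ($M)
        "System A": 9.2,
        "System B": 0.4,
        "System C": 18.0,
        "System D": 22.1,
    }
    comptroller_reviewed = {"System C"}

    over_threshold = {name for name, amount in budget_request.items()
                      if amount > THRESHOLD_MILLIONS}
    not_submitted = sorted(over_threshold - comptroller_reviewed)

    print("Potentially subject to the act:", sorted(over_threshold))
    print("Not submitted for review:", not_submitted)

In essence, the comparison described below of the BMSI approval list with the services' reported obligations is the same set difference, applied to obligational rather than budgetary data.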
Table 3 presents our summary analysis by DOD component. The list in table 3 may not be complete. According to the DOD CIO and military service officials, the "All Other" category in the IT budget exhibits includes system projects that do not have to be identified by name because they fall below the $2 million reporting threshold for budgetary purposes.

In an attempt to substantiate that the obligations for business systems modernization were in accordance with the act, we requested that DOD activities provide us with a list of obligations greater than $1 million for fiscal year 2003 and fiscal year 2004, as of December 2003. As of February 2004, we had received responses from the Army, the Navy, and the Air Force, but did not receive responses from any of the defense agencies, such as DFAS and DLA. To ascertain whether the DOD Comptroller had made the determination required by the act, we compared a list of system approvals provided by the BMSI office with the obligational data (by system) provided by the military services. Based upon a comparison of the limited information available, we identified $479 million in reported obligations over $1 million by the military services for system improvements that were not submitted to the DOD Comptroller for review and determination as required by the act. Table 4 summarizes our analysis.

Examples of DOD system improvements included in table 4 that were not submitted include the Air Force obligating over $9 million in fiscal year 2003 and about $4 million in fiscal year 2004 for the Integrated Maintenance Data System, the Navy obligating about $18 million in fiscal year 2003 and about $6 million in fiscal year 2004 for the Electronic Military Personnel Records System, and the Army obligating about $22 million in fiscal year 2003 and about $10 million in fiscal year 2004 for the Transportation Coordinators' Automated Information for Movements System. Appendix III provides a list of modernization projects with obligations totaling over $1 million that were reviewed by the DOD Comptroller as required by the act. Appendix IV provides a detailed list of the individual systems not submitted to the DOD Comptroller and the related amount of the total obligations for fiscal years 2003 and 2004.

The act places limitations on the legal authority of individual program and government contracting officials to obligate funds in support of the systems for which they are responsible, but DOD has yet to proactively manage investments to avoid violations of the limitations and to review investments in any meaningful way to enforce these statutory limitations. Until DOD strengthens its process for selecting and controlling business system investments and adopts an effective governance concept, it remains exposed to the risk of spending billions of dollars on duplicative, stovepiped, nonintegrated systems that do not optimize mission performance and accountability and, therefore, do not support the department's transformation goals.

We also identified inconsistencies in how the military services categorized systems. For example, the Air Force did not categorize its Global Combat Support System as a business system, while the Army and the Navy consider their respective Global Combat Support Systems business systems. Additionally, the Navy categorized the Defense Message System as a business system, but the Army and the Air Force did not.
This inconsistency further underscores the need for a standard database and a uniform definition of a business system that properly categorizes DOD's numerous systems.

For those systems that were submitted for review, we found that most had the supporting documentation called for in the DOD Comptroller's March 7, 2003, memorandum. For example, the return on investment was identified. However, the one common element lacking was the assertion that the system projects were compliant with the BEA or otherwise met the criteria set out in the act. As noted earlier, BMMP has developed a draft BEA system compliance assessment certification for program managers to use; however, the process has not been finalized. The inability to assert compliance with the BEA is consistent with our September 2003 report, which noted that the BEA lacked the details needed to provide DOD with a common vision and to constrain or control investments.

We also identified instances in which the justification for the approval was questionable. These investments were made without DOD knowing whether the systems are aligned with, or part of, DOD's long-term system modernization strategies. For example:

In October 2003, the DOD Comptroller approved obligations of $8 million for the Standard Procurement System (SPS) even though the supporting documentation noted that there was insufficient documentation to validate all requirements and some were found to be noncompliant with the BEA. We and the DOD IG have previously reported concerns with the overall management and implementation of SPS and its ability to deliver the intended capability. Initiated almost 10 years ago, in November 1994, the system was to provide DOD with a single automated system to perform all functions related to contract management within DOD's procurement process for all DOD organizations and activities. The system was also intended to replace the contract administration functions currently performed by MOCAS, a system implemented in 1968 and still operating today. Further, as will be discussed later in this report, difficulty with the implementation of SPS is one of the factors that contributed to the slippage in DLA's BSM implementation schedule.

In May 2003, the DOD Comptroller approved funding of about $4 million for the Army's Integrated Facilities System (IFS). Initially, the Director of the BMSI office denied the funding request, in part because it was noted that the system would be replaced by an enterprise solution. In response, the installations and environment domain noted that a final system solution had not been determined and stated that if IFS was found to be compliant with the "yet to be determined revised business process," it could be designated the enterprisewide solution. The response also noted that IFS "might prove to have the best functionality and technical capabilities for a DOD real property inventory solution." However, until the department's BEA becomes more robust, it remains unclear whether this system will be part of the ultimate system solution. Until that decision is made, it is unknown what benefit will be derived from further investment in this system.

We also identified some instances in which the DOD Comptroller's approval depended on specific actions being taken by a given date. However, prior to December 2003, the BMSI office did not have a process in place to track and follow up on required actions and did not have reasonable assurance that the required actions were taken.
For example:

In April 2003, the DOD Comptroller approved the expenditure of about $53 million for the convergence of four separate Navy enterprise resource planning solutions into one initiative. This approval was subsequent to an approval in February 2003 of about $21 million for the continuance of two of the four Navy efforts. The approval memorandum outlined three specific actions that needed to be taken and established time frames for the completion of each action. As of February 2004, BMSI officials were not able to attest to whether these actions had been completed. However, the Navy continues to move forward with this effort.

The DOD Comptroller approved a pilot project for the National Security Agency on March 7, 2003, for $13.4 million. The approval depended on the completion of an overall planning document that outlined the various areas that were to be addressed. This document was to be completed by March 16, 2003. As of February 2004, BMSI officials stated that only minimal supporting documentation had been provided.

Thus, even for the systems modernization efforts approved by the DOD Comptroller, serious questions remain as to whether these investments are justified.

BSM and LMP were initiated in November 1999 and February 1998, respectively, prior to DOD undertaking the BEA and establishing the domains. As such, they are not directed toward a corporate solution to resolving the department's long-standing weaknesses in the inventory and logistics management areas, such as total asset visibility or an integrated systems environment. Both projects are focused more on DLA's and the Army's respective inventory and logistics management operations. If effectively implemented, BSM and LMP are expected to provide benefits associated with private industry's logistics reengineering efforts, such as inventory reduction, improved cycle time, improved customer satisfaction, and increased response time. Additionally, BSM and LMP are intended to improve supply and demand forecast planning and maintenance workload planning, provide a single source of data, and improve data quality.

However, the initial deployments of BSM and LMP did not operate as intended and, therefore, did not meet DLA's and the Army's component-level needs. In large part, these operational problems were due to DLA and the Army not effectively implementing the disciplined processes that are necessary to manage the development and implementation of BSM and LMP in the areas of requirements management and testing. DLA and Army program officials have acknowledged that requirements and testing defects were factors contributing to these operational problems, as well as to schedule slippages and cost increases. Further, the BSM and LMP programs have accumulated numerous lessons learned and have assembled teams to analyze these lessons and to develop an implementation strategy for corrective action. Additionally, to their credit, DLA and the Army have decided that future deployments of BSM and LMP will not go forward until they have reasonable assurance that the deployed systems will operate as expected for a given deployment.

Effectively managing and overseeing the department's $19 billion investment in its business systems is key to the successful transformation of DOD's business operations.
The transformation also depends on the ability of the department to develop and implement business systems that provide users and department management with accurate and timely information on the results of operations and that help resolve the numerous long-standing weaknesses. As DOD moves forward with continued development and implementation of its BEA, it needs to ensure that the department's business systems modernization projects are part of a corporate solution to preclude the continued proliferation of duplicative, stovepiped systems. Three of the long-standing problems in logistics and inventory management have been related to total asset visibility, integrated systems, and valuation of inventory. We found that BSM and LMP will not resolve the problems associated with total asset visibility and integrated systems, and that the first deployment of LMP did not provide for the valuation of inventory at the depot in accordance with federal accounting standards and departmental guidance. Details on each of these areas follow.

Although BSM and LMP are enterprise resource planning systems based on commercial software that incorporates best business practices for logistics supply chain management, their planned capabilities do not provide a corporate solution for total asset visibility—a key gap in DOD's capabilities to track and locate items across the department. A corporate solution for total asset visibility depends on the successful development and implementation of other systems, and the time frames and costs associated with these other system projects have not been fully defined. To illustrate the lack of asset visibility, in October 2002, a DLA official testified that BSM would provide improved control and accountability over the Joint Service Lightweight Integrated Suit Technology (JSLIST)—a chemical/biological suit. The official stated that the JSLIST suits would be included in BSM at the earliest practicable date, which was estimated to be December 2003. BSM, however, is not designed to provide the corporate total asset visibility necessary to locate and track the suits throughout DOD's supply chain. While the suits are expected to be included in a future deployment of BSM, program officials have not yet specified a date when they will be included. Even when the suits are included, BSM is designed to provide visibility over the suits only within the DLA environment—something DLA has stated already exists within its current legacy system environment. As we have previously reported, the lack of integrated systems hinders DOD's ability to know how many JSLIST it has on hand and where they are located once they leave the DLA warehouse. For example, we found that military units that receive JSLIST from DLA warehouses maintained inventory data in nonstandard, stovepiped systems that did not share data with DLA or other DOD systems. The methods used to control and maintain visibility over JSLIST at the units we visited ranged from stand-alone automated systems, to spreadsheet applications, to pen and paper. One military unit we visited did not have any inventory system for tracking JSLIST. BSM does not address asset visibility outside of DLA's supply chain for the JSLIST and thus cannot provide total asset visibility for this critical inventory item. Having the ability to readily locate sensitive items, such as JSLIST, is critical, particularly if a defect is found and the items must be recalled. A case in point is the JSLIST predecessor, the Battle Dress Overgarment (BDO).
Over 700,000 of these suits were found to be defective and were recalled. Since DOD's systems did not provide the capability to identify the exact location of each suit, a series of data calls was conducted, which proved to be ineffective. We reported in September 2001 that DOD was unable to locate approximately 250,000 of the defective suits and therefore was uncertain whether the suits were still in the possession of the military forces or whether they had been destroyed or sold. Subsequently, we found that DOD had sold many of these defective suits to the public as excess, including 379 that we purchased in an undercover operation. In addition, DOD may have issued over 4,700 of the defective BDO suits to local law enforcement agencies. This is particularly significant because local law enforcement agencies are most likely to be the first responders to a terrorist attack, yet DOD failed to inform these agencies that using these suits could result in death or serious injury. BSM will not provide DOD with the capability to readily locate JSLIST for any reason, including the need to recall defective suits.

Similar to BSM, LMP will not provide the Army with total asset visibility until a suite of other systems has been developed and implemented. Specifically, Army officials have stated that LMP will require integration with other Army systems that are under development in order to achieve total asset visibility within the Army. These additional systems are the Product Lifecycle Management Plus (PLM+) and the Global Combat Support System—Army (GCSS–A). According to the Army, PLM+ is to integrate LMP and GCSS–A to create a seamless end-to-end solution for Army logistics. According to information provided by the Army, PLM+ was initiated in December 2003. No estimates have been developed as to the cost of this project, nor has a time frame for development and implementation been established. The Army has stated that GCSS–A will provide visibility of supplies and equipment in storage and in transit. The Army began development of GCSS–A in fiscal year 1997 and since then has invested approximately $316 million in this effort. In May 2003, the Army decided to pursue a commercial off-the-shelf (COTS) solution for GCSS–A rather than continue to develop the system in house. The Army recently stated that the total cost of GCSS–A cannot be accurately estimated until all of the "to be" business processes are identified, which is expected to occur in October 2004. However, the fiscal year 2004 capital investment report shows that the Army estimates that it will invest over $1 billion in GCSS–A through fiscal year 2009.

To help provide for departmentwide total asset visibility, DLA is undertaking the implementation of the Integrated Data Environment (IDE) program. According to DLA, this initiative is intended to provide the capability for routing data from multiple systems within DLA and DOD into one system. According to DLA, the contract was signed in September 2003, and IDE is expected to reach full operational capability in August 2007. The current estimated cost of the effort is approximately $30 million. However, the August 2007 completion date depends on other departmental efforts being completed on time, such as PLM+, for which a completion date has not been established.

One of the long-standing problems within DOD has been the lack of integrated systems. This is evident in the many duplicative, stovepiped systems among the 2,274 that DOD reported as its systems environment.
Lacking integrated systems, DOD will have a difficult time obtaining accurate and reliable information on the results of its business operations and will continue to rely on either manual reentry of data into multiple systems, convoluted system interfaces, or both. These system interfaces provide data that are critical to day-to-day operations, such as obligations, disbursements, purchase orders, requisitions, and other procurement activities. For BSM and LMP, we found that the system interfaces were not fully tested in an end-to-end manner, and therefore DLA and the Army did not have reasonable assurance that BSM and LMP would be capable of providing the intended functionality. We previously reported that Sears and Wal-Mart, recognized as leading-edge inventory management companies, had automated systems that electronically received and exchanged standard data throughout the entire inventory management process, thereby reducing the need for manual data entry. As a result, information moves through the data systems with automated ordering of inventory from suppliers; receiving and shipping at distribution centers; and receiving, selling, and reordering at retail stores. Unlike DOD, which has a proliferation of nonintegrated systems using nonstandard data, Sears and Wal-Mart require all components and subsidiaries to operate within a standard systems framework that results in an integrated system, and they do not allow individual systems development. For the first deployment, DLA has had to develop interfaces that permit BSM to communicate with more than 23 systems, including 3 DFAS, 6 DOD-wide, and 14 DLA systems. The Army has had to develop 215 interfaces that permit LMP to communicate with more than 70 systems, including 13 DFAS, 6 DLA, 2 Navy, 5 Air Force, and over 24 Army systems. Figures 2 and 3 illustrate BSM's and LMP's numerous required system interfaces. When BSM and LMP became operational, it became evident that the system interfaces were not working as intended. Such problems have led BSM, LMP, and the organizations with which they interface—such as DFAS—to perform costly manual reentry of transactions, which can cause additional data integrity problems. For example:

BSM's functional capabilities were adversely affected because a significant number of interfaces were still in development or were being executed manually once the system became operational. Since the design of the system interfaces had not been fully developed and tested, BSM experienced problems with receipts being rejected, customer orders being canceled, and vendors not being paid in a timely manner. At one point, DFAS suspended all vendor payments for about 2 months, thereby increasing the risk of late payments to contractors and of violating the Prompt Payment Act.

In January 2004, the Army reported that due to an interface failure, LMP had been unable to communicate with the Work Ordering and Reporting Communications System (WORCS) since September 2003. WORCS is the means by which LMP communicates with customers on the status of items that have been sent to the depot for repair and initiates procurement actions for inventory items. The Army has acknowledged that the failure of WORCS has resulted in duplicative shipments and billings and in inventory items being delivered to the wrong locations. Additionally, the LMP program office has stated that it has not yet identified the specific cause of the interface failure.
The Army is currently entering the information manually, which, as noted above, can cause additional data integrity errors. While these numerous interfaces are necessary because of the existing stovepiped, nonintegrated systems environment, they should have been fully developed and tested before BSM and LMP were deployed. In moving forward with future deployments of BSM and LMP, it is critical that program officials ensure that the numerous system interfaces are operating as intended. Additionally, until the business enterprise architecture is further developed and DOD has decided which systems will be part of the future business systems environment, it is uncertain how many of these systems BSM and LMP will continue to interface with.

Federal accounting standards require inventories to be valued based on historical costs or a method that approximates historical costs. DOD's inability to effectively account for and control its huge investment in inventories has been an area of major concern for many years. DOD's antiquated, duplicative systems do not capture the information needed to comply with federal accounting standards. BSM and LMP are to provide DOD the capability to comply with federal accounting standards in the valuation of its billions of dollars of inventory. DLA has stated that BSM has the capability to compute the value of inventory in accordance with federal accounting standards. Based upon information provided by DLA and our analysis, we found that the value of the inventory recorded in BSM changed each time new items were procured to reflect a moving average (historical) cost valuation of the inventory (that is, after each procurement, the recorded unit value is recomputed as the total cost of the items on hand divided by the total quantity on hand)—which is an acceptable method permitted by federal accounting standards and is in accordance with DOD's stated policy. However, the first deployment of LMP did not have the capability to value all inventory in accordance with federal accounting standards. In its evaluation of LMP, the Army Audit Agency found that the system had the capability to compute the value of inventory in accordance with federal accounting standards at the command level—CECOM—but not at the depot level. The Army decided to proceed with deployment of LMP, recognizing that the issue would have to be resolved prior to further deployments to the other depots. The Office of the DOD Comptroller has also directed that there be no further deployment of LMP until the inventory valuation problem has been fixed.

BSM and LMP experienced significant problems once they became operational at the first deployment sites. Although BSM and LMP were not designed to provide a corporate enterprise solution for inventory and logistics management, their first deployments also failed to address DLA's and the Army's component-level operational needs as intended. These problems have resulted in schedule slippages and cost increases. Detecting such problems after a system is placed into operation leads to costly rework due to factors such as (1) fixing the defect, (2) entering transactions manually, and (3) adjusting reports manually. Furthermore, the manual processes required to enter the transactions and adjust related reports may introduce data integrity errors. Our analysis indicated that many of the operational problems experienced by DLA and the Army can be attributed to their inability to effectively implement the disciplined requirements management and testing processes, as discussed in this report.
In fact, DLA and Army program officials acknowledged that requirements and testing defects were factors contributing to the operational problems and stated that they are working to develop more effective processes. DLA and the Army recognized that serious operational problems exist and have decided that future deployments will not go forward until they have assurance that the system will operate as expected at each deployment site. Operational problems include the following:

Army and DFAS officials reported that LMP's operational difficulties at CECOM and Tobyhanna Army Depot have resulted in inaccurate financial management information. More specifically, the depot is not (1) producing accurate workload planning information; (2) generating accurate customer bills; and (3) capturing all repair costs, which is impeding the Army's ability to calculate accurate future repair prices. These problems can also hinder the Army's ability to accurately report the results of its depot operations and limit customers' ability to develop accurate budget estimates.

LMP users experienced difficulty in providing contract information to MOCAS. Due to the operational problems, DFAS was unable to electronically process contract modifications and contract payment terms and make disbursements to contractors, thereby increasing the risk of late payments to contractors and of violating the Prompt Payment Act.

BSM experienced significant data conversion problems associated with purchase requisitions and purchase orders that were created in DLA's legacy system, SAMMS. Moving the data from SAMMS to BSM proved difficult because BSM required more detailed information, a need that was not identified during the requirements phase. This additional information had to be entered into BSM manually, resulting in numerous errors that caused vendors not to be recognized and shipments from the depot to be rejected. As a result of these problems, additional tables, such as vendor master files, were created within BSM to process orders for the converted purchase requisitions and purchase orders.

BSM users experienced a number of problems, such as incorrect information on customer orders, customer orders never being sent, and vendor invoices not being paid in a timely manner.

These operational problems have been at least partially responsible for schedule slippages and cost increases for both systems. BSM was originally scheduled to achieve full operational capability (FOC) in September 2005; it is now expected to reach FOC during the second quarter of fiscal year 2006. Further, BSM's estimated cost has increased by approximately $86 million since the program was initiated in November 1999. Figure 4 shows the schedule slippages and cost increases. Part of the schedule slippage and cost increase can be attributed to problems encountered with DLA's effort to implement SPS, which was to provide BSM with the required procurement functionality. Since a large part of DLA's overall business is the procurement of inventory items, difficulties in establishing a viable system solution for this critical aspect of its business seriously impaired DLA's ability to meet BSM's schedule and cost goals. We have previously reported that DOD's ineffective management approach for SPS put the project at risk. During the initial implementation of BSM, program officials found that SPS did not have the capability to handle DLA's large volume of procurement requisitions.
According to BSM program officials, DLA will spend about $9 million to resolve the shortcoming in SPS. Since SPS will not meet DLA's needs when BSM is fully operational at all sites, DLA has negotiated with the BSM software developer to purchase new procurement software as the long-term solution. DLA estimated that this software would cost approximately $30 million, which contributed to the increased BSM program costs.

Similar to BSM, LMP has also experienced schedule slippages and cost increases since the project was approved in February 1998. Figure 5 shows the schedule slippages and cost increases. As shown in figure 5, as of March 2004, the current estimated cost of LMP is over $1 billion, with more than $400 million spent to fund the project during the past 5 years. In October 1999, we reported that the Army's estimated cost of LMP over the 10-year period of the contract was approximately $421 million. However, as discussed in that report, the $421 million estimate did not include an additional $30.5 million per contract year that would be needed for data processing. The amount allowed for data processing in the original estimate was based directly on the percentage of data processing performed by the contractor, with the Defense Information Systems Agency performing the residual processing. Further, the original estimate was based on a 10-year contract, whereas the current estimate is based on a 12-year contract, and each additional contract year can cost as much as $65 million. Considering these two factors, a more accurate cost estimate in 1999 would have been approximately $856 million: the $421 million baseline, plus $30.5 million of data processing for each of the 10 contract years (about $305 million), plus as much as $65 million for each of the 2 additional contract years (about $130 million). In our discussions with LMP program officials, additional factors were identified that have caused the cost of LMP to increase to over $1 billion. For example, since the initiation of LMP, the Army has directed that the program be (1) integrated with the Army Single Stock Fund effort and (2) extended to the Army depot maintenance operations. These additional capabilities were not part of the standard LMP software package and were not envisioned to be part of LMP when the original cost estimate was developed. Therefore, additional development and implementation costs were incurred, increasing the overall cost of the program by over $91 million. Further, the LMP program manager acknowledged that the 1999 estimate did not include adequate DOD program management costs. The additional program management costs are estimated to be about $104 million and include such items as personnel and travel. Additionally, as shown in figure 5, the original FOC date was scheduled for fiscal year 2004. However, because of the operational problems that were identified with the first deployment, the Army is in the process of developing a new deployment schedule, and as of March 2004, no future deployment dates had been established.

The problems we identified in the areas of schedule, cost, and performance of the two systems can be linked, at least in part, to DLA's and the Army's failure to follow disciplined processes in the key areas of requirements management and testing. While there may have been contributing factors in other areas of the system acquisition efforts, we selected these two areas because our assessments, as well as those of others, have shown that agencies do not invest adequately for success in these areas, which form the foundation for success or failure.
Lacking such disciplined processes exposes these projects to the unnecessary risk that costly rework will be required, which, in turn, will continue to adversely affect these projects' cost, schedule, and performance goals. Our analysis of selected BSM and LMP key requirements and testing processes found that (1) the functionality to be delivered was not adequately described or stated to allow for quantitative evaluation; (2) the traceability among the various process documents (e.g., operational requirements documents, functional or process scenarios, and test cases) was not maintained; and (3) system testing was ineffective. Because of the weaknesses in these key processes, program officials do not have reasonable assurance that (1) the level of functionality that will be provided by a given deployment is understood by the project team and users and (2) the resulting system will provide the expected functionality. We have previously reported concerns with BSM's lack of a documented requirements development and management plan. Such a plan provides a road map for completing important requirements development and management activities; without it, projects risk either not performing important tasks or not performing them effectively. Historically, projects that experience the types of requirements and testing process weaknesses found in BSM and LMP have a high probability of not meeting schedule, cost, and performance objectives.

Disciplined processes have been shown to reduce the risks associated with software development and acquisition efforts to acceptable levels and are fundamental to successful systems acquisition. Said another way, a disciplined software development and acquisition process can maximize the likelihood of achieving the intended results (performance) within established resources (costs) on schedule. Although no "standard cookbook" of practices will guarantee success, several organizations, such as the Software Engineering Institute and the Institute of Electrical and Electronics Engineers (IEEE), and individual experts have identified and developed the types of policies, procedures, and practices that have been demonstrated to reduce development time and enhance effectiveness. Key to a disciplined system development effort is having disciplined processes in multiple areas, including project planning and management, requirements management, configuration management, risk management, quality assurance, and testing. Effective processes should be implemented in each of these areas throughout the project's life cycle, since changes occur constantly. In reviewing BSM and LMP, we focused on requirements management and testing.

Requirements represent the blueprint that system developers and program managers use to design, develop, and acquire a system. Requirements should be consistent with one another, verifiable, and directly traceable to higher-level business or functional requirements. It is critical that requirements be carefully defined and that they flow directly from the organization's concept of operations (how the organization's day-to-day operations are or will be carried out to meet mission needs). Improperly defined or incomplete requirements have been commonly identified as a cause of system failure and of systems that do not meet their cost, schedule, or performance goals.
Without adequately defined requirements that have been properly reviewed and tested, significant risk exists that the system will need extensive and costly changes before it will achieve its intended capability. According to IEEE—a leader in defining best practices for such efforts—good requirements have several characteristics, including the following:

The requirements fully describe the software functionality to be delivered. Functionality is a defined objective or characteristic action of a system or component. For example, for inventory, key functionality as previously discussed includes total asset visibility and valuation in accordance with federal accounting standards.

The requirements are stated in clear terms that allow for quantitative evaluation. Specifically, all readers of a requirement should arrive at a single, consistent interpretation of it.

Traceability among the various requirement documents is maintained. Requirements for projects can be expressed at various levels depending on user needs. They range from agencywide business requirements to increasingly detailed functional requirements that eventually permit the software project managers and other technicians to design and build the required functionality into the new system. Adequate traceability ensures that a requirement in one document is consistent with and linked to applicable requirements in another document.

Industry best practices, as well as DLA's and the Army's own system planning documents, indicate that detailed system requirements should be documented to serve as the basis for effective system testing. Both projects documented their high-level, or operational, requirements and had designed hierarchical processes for documenting the various requirements and related documents needed to build and design tests at the transaction level, as well as tests of chains of transactions that flow together to support multiple business functions and processes. Because requirements provide the foundation for system testing, specificity and traceability defects in system requirements preclude an entity from implementing a disciplined testing process. That is, requirements must be complete, clear, and well documented to design and implement an effective testing program. Absent this, an organization takes a significant risk that its testing efforts will not detect significant defects until after the system is placed into production. Industry experience indicates that the sooner a defect is recognized, the cheaper it is to correct. As shown in figure 6, there is a direct relationship between requirements and testing. Although the actual testing activities occur late in the development cycle, early test planning can help disciplined organizations reduce requirements-related defects. For example, developing conceptual test cases based on the requirements derived from the concept of operations and functional requirements stages can identify errors, omissions, and ambiguities long before any code is written or a system is configured. Disciplined organizations also recognize that planning testing activities in coordination with the requirements development process has major benefits. Our analysis and evaluation of DLA's and the Army's requirements management and testing processes found that BSM and LMP program officials did not effectively implement the disciplined processes associated with requirements management and testing in developing and implementing their systems.
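To make the traceability concept concrete before turning to the specific instances we identified, consider the following minimal sketch, written in Python purely for illustration. The artifact levels mirror the kinds of hierarchical documents described above, but the identifiers, records, and checks are hypothetical assumptions and are not drawn from either program's actual documentation.

    # Hypothetical traceability records: each artifact notes the artifact
    # it was derived from. "TC-106" is deliberately left unlinked to show
    # how a backward-traceability check surfaces the defect.
    artifacts = {
        "OR-12":  {"level": "operational requirement", "parent": None},
        "FS-40":  {"level": "functional scenario",     "parent": "OR-12"},
        "TD-77":  {"level": "technical design",        "parent": "FS-40"},
        "TC-105": {"level": "test case",               "parent": "TD-77"},
        "TC-106": {"level": "test case",               "parent": None},
    }

    def orphans(artifacts):
        """Backward trace: artifacts below the top level with no parent."""
        return [name for name, a in artifacts.items()
                if a["level"] != "operational requirement" and a["parent"] is None]

    def untested_requirements(artifacts):
        """Forward trace: operational requirements no test case reaches."""
        covered = set()
        for a in artifacts.values():
            if a["level"] == "test case":
                node = a["parent"]
                while node is not None:          # walk the chain upward
                    covered.add(node)
                    node = artifacts[node]["parent"]
        return [name for name, a in artifacts.items()
                if a["level"] == "operational requirement" and name not in covered]

    print(orphans(artifacts))                # ['TC-106'] cannot be traced backward
    print(untested_requirements(artifacts))  # [] because TC-105 traces to OR-12

When links of this kind are not maintained, neither the backward check nor the forward check can be performed, and the impact of changing or deleting a requirement cannot be reliably determined.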
We identified numerous instances in which the documented requirements used to design and test the systems did not build upon one another in moving through the hierarchy. Specifically, the requirements (1) lacked the specific information necessary to understand the required functionality that was to be provided and (2) did not describe how to determine quantitatively, through testing or other analysis, whether the systems would meet DLA's and the Army's respective needs. One reason users have not been provided with the intended system capabilities is the breakdown in the requirements management process. As a consequence, DLA and the Army have been forced to implement error-prone, time-consuming manual workarounds as a means to minimize disruption to critical operations. DLA and Army officials acknowledged that improvements in their requirements management processes are needed and have stated that they are working to develop more specific requirements that better describe required system functionality and support more effective system testing.

DLA's basic hierarchical approach to developing BSM requirements was to (1) define high-level requirements, commonly referred to as operational requirements; (2) define more specific blueprint requirements; (3) develop functional scenarios; (4) define functional designs; (5) define technical designs; (6) create test cases; and (7) define test conditions. Similarly, the Army's basic approach to developing LMP system requirements was to (1) develop a blueprint of its business processes following the Integration Definition for Function modeling standards established by the National Institute of Standards and Technology and IEEE, (2) define high-level requirements, (3) develop process scenarios, (4) develop test cases, and (5) use subject matter experts to determine whether the application, as developed by a contractor, supported the business processes envisioned by the users and provided the functionality of the Army's existing systems. If effectively implemented, either methodology can be used to develop and implement a system. The key is that each step of the process builds upon the previous one. Accordingly, unidentified defects in one step migrate to the subsequent steps, where they are more costly to fix and increase the risk that the project will experience adverse impacts on its schedule, cost, and performance objectives. The following are examples of BSM and LMP requirements we reviewed that lacked the specificity necessary to describe the functionality to be delivered.

One BSM requirement stated that the system should be able to reconcile inventory between the depots (where inventory items are located) and the inventory control point and that the reconciliation should be performed daily. It also stated that the inventory control point must request that the depot perform a physical count once inventory differences have met certain criteria, such as dollar value or large quantities. However, the various requirement documents did not (1) define what is meant by "large" or (2) specify how the notification of the requirement to conduct the inventory was to be accomplished, for example, by e-mail. Without such specificity, it is unclear how this requirement could be tested, since an evaluator would not be able to design a test of the trigger for a physical count because the quantity difference had not been defined.
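As a minimal illustration, the following Python sketch shows the kind of explicit definition an evaluator would need before pass/fail tests of the physical-count trigger could be designed. The thresholds, function, and test values are hypothetical assumptions, not DLA's actual criteria, which the requirement documents left undefined.

    # Hypothetical thresholds; the requirement documents defined neither value.
    DOLLAR_THRESHOLD = 10000     # assumed "dollar value" criterion
    QUANTITY_THRESHOLD = 100     # assumed "large quantity" criterion

    def physical_count_required(depot_qty, control_point_qty, unit_price):
        """Return True when a daily reconciliation difference should trigger
        a request for the depot to perform a physical count."""
        qty_difference = abs(depot_qty - control_point_qty)
        dollar_difference = qty_difference * unit_price
        return (qty_difference >= QUANTITY_THRESHOLD
                or dollar_difference >= DOLLAR_THRESHOLD)

    # With explicit criteria, objective pass/fail tests can be designed:
    assert physical_count_required(depot_qty=500, control_point_qty=350,
                                   unit_price=25.00)      # 150 units over threshold
    assert not physical_count_required(depot_qty=500, control_point_qty=495,
                                       unit_price=40.00)  # 5 units, $200 difference

Once the criteria are stated this explicitly, all readers of the requirement arrive at the same interpretation, and each test either passes or fails objectively.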
For LMP, the operational activity "Manage Assets" did not adequately describe how visibility over all assets was to be maintained. Specifically, the requirement states that the system "maintains wholesale and retail asset balances and provides visibility of On-Hand Asset Balances by identifying assets being repaired, modified, or tested at depots, contractor and intermediate level repair facilities as well as those on-hand at storage sites, retail activities and other services." However, there is no further information that specifies how asset visibility is maintained or the sources that are to be used in accumulating these data. Therefore, the risk is increased that the Army will not be able to maintain asset visibility over all Army-managed assets. In fact, in January 2004, the Army reported that it was having difficulty obtaining accurate data related to material movement (in-transit), assets received, and assets issued or shipped.

In reviewing the process documents that DLA and the Army used to define their requirements (operational requirements, functional scenarios, functional designs, technical designs, and test cases), we found that the forward and backward traceability defined by IEEE, and described in BSM's and LMP's own hierarchical approaches and management plans, was not always maintained. Traceability allows the user to follow the life of a requirement both forward and backward through these documents and from origin through implementation. Traceability is also critical to understanding the parentage, interconnections, and dependencies among individual requirements. This information, in turn, is essential to understanding the impact when a requirement is changed or deleted. Without an effective traceability approach, it is very difficult to perform actions such as (1) accurately determining the impact of changes and making value-based decisions when considering requirement changes, (2) maintaining the system once it goes into production, (3) tracking the project's progress, and (4) understanding the impact of a defect discovered during testing. For almost all of the requirements we analyzed, we found that traceability was not maintained. For example:

An operational requirement stated that BSM maintain the effective date for pricing information. The subsequent requirements document stated that all amendments/modifications to the award instrument—purchase orders and requisitions—should be documented on the prescribed General Services Administration form. In our analysis, we were able to trace only portions of the requirements through BSM's hierarchical process. Because traceability was not maintained through the key documents, it was unclear why the testing documents included requirements that were not included in the functional scenarios, technical design, or test conditions, since these documents should have provided the detailed information necessary to test the requirements. Further, because traceability is lacking, it is uncertain how DLA will ensure that BSM will meet this requirement.

One capability of LMP is to support workload planning for the Army's depot maintenance facilities. Data related to scheduled and historical depot maintenance activities that should be considered in developing budget requirements, such as assets due in for repair or maintenance, price data, assets in stock, and maintenance schedules, were included in the requirement.
However, we found that only the prior month's sales data were used in designing the test case—not the information specified in the requirement. As a result, the risk is increased that LMP is determining workload-planning requirements for the Army's depot maintenance facilities using incorrect data. Indeed, the Army reported in January 2004 that Tobyhanna Army Depot was unable to develop the working capital fund budget submissions for its operations and would have to perform complex manual calculations to satisfy its budgetary planning requirements.

BSM and LMP did not implement disciplined testing activities. Not carrying out this recognized best practice materially increases the risk that defects will not be detected until the systems are placed into production and that costly rework will be needed to satisfy end-user requirements, including materiel readiness in support of military operations. Testing is the process of executing a program with the intent of finding errors. However, if a requirement has not been adequately defined, a test is unlikely to discover the associated defect. System testing is a critical process used by disciplined organizations and improves an entity's confidence that the system will satisfy the requirements of the end user and operate as intended. Since requirements provide the foundation for system testing, the requirement defects discussed earlier, such as the lack of specificity, significantly impaired and will continue to impair the ability of DLA and the Army to detect defects during system testing. As a result of requirement defects and ineffective testing, DLA's and the Army's testing activities did not achieve the important goal of reducing the risk that BSM and LMP would not operate as intended. For example:

One BSM requirement involved preparing customer payments. The system, according to the test case, was required to (1) prepare a summary bill and (2) present the sales summary report in federal supply class sequence. The actual result for one test stated that the system passed this test even though only one item was used to generate the summary bill. It was unclear from this test case whether the system (1) could summarize multiple items and (2) had any limitations on the number of items that could be summarized. Furthermore, the test that evaluated the sorting of items by federal supply class divided the cost of the sales summary report by two. If this result matched the expected result, BSM passed the test. However, documentation was not available to explain why the item cost needed to be divided by two. Based on our review of the test cases linked to this requirement, we could not validate that the requirement had been adequately tested. Therefore, DLA does not have reasonable assurance that BSM can perform this required functionality.

Based on our analysis of LMP's December 2003 and January 2004 project status reports, we found that the Army continued to experience problems with the accuracy of data related to budgeting; workload planning and forecasting; depot maintenance operations; and accounting records such as customer orders, purchase orders and requisitions, obligations, and disbursements. DFAS and Army officials acknowledged that these problems were attributable to relying on subject matter experts to develop tests for their respective functional areas, such as budgeting, accounting, and workload planning, and to not performing testing end to end across the various functional areas.
Rather, the testing was stovepiped in that the subject matter experts performed tests only for their own respective areas. As a result of the specific problems discussed in this report related to BSM and LMP, such as the lack of total asset visibility, DLA and the Army cannot be assured that BSM and LMP will routinely generate timely, accurate, and useful financial information. The inaccuracy and unreliability of financial information has been a long-standing DOD weakness. As mentioned previously, BSM and LMP rely on information received from and sent through various other systems. However, the interfaces with these multiple systems were not fully developed, nor were they tested, when BSM and LMP became operational. As a result, DLA and the Army do not have reasonable assurance that their respective systems are capable of providing the intended capability. In fact, the reported operational problems clearly indicate that BSM and LMP are not providing accurate data. For example, the manual workarounds that were required to compensate for the data conversion problems associated with SAMMS caused additional errors, which affected the accuracy of the data produced. In the case of LMP, the Army has acknowledged that accurate information on its depot operations is not readily available. This problem severely impairs the Army's ability to develop accurate prices for its depot operations, and inaccurate prices could result in customers being charged too much or too little for the services provided. Furthermore, the overall concerns we raised with regard to DLA and the Army not following disciplined processes in the key areas of requirements management and testing further expose BSM and LMP to unnecessary risks. Specifically, the resulting systems will not provide the accurate and complete information that is crucial to making informed decisions and controlling assets so that DOD's mission and goals are efficiently and effectively accomplished.

Further, although DLA and the Army have asserted that BSM and LMP, respectively, are compliant with the requirements of the Federal Financial Management Improvement Act of 1996 (FFMIA), we have concerns with the methodology followed in reaching that conclusion. FFMIA builds on the foundation laid by the Chief Financial Officers (CFO) Act of 1990 by emphasizing the need for agencies to have systems that can generate reliable, useful, and timely information with which to make fully informed decisions and to ensure accountability on an ongoing basis. FFMIA requires the 23 major departments and agencies covered by the CFO Act to implement and maintain financial management systems that comply substantially with (1) federal financial management systems requirements, (2) applicable federal accounting standards, and (3) the U.S. Government Standard General Ledger (SGL) at the transaction level. DLA's and the Army's assertions are based upon self-assessments of the financial management requirements that were reviewed by independent parties. For both systems, testing of transactions was not performed to validate that the systems would be able to process the data as intended. For example, in the case of BSM, for one requirement the contractor stated that "a sample of transactions were reviewed, it appears that BSM properly records transactions consistent with the SGL posting rules." However, we found no indication that this requirement was tested, and therefore we cannot conclude whether BSM has the capability to meet this requirement.
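As an illustration of the kind of transaction-level test whose documentation we did not find, consider the following minimal Python sketch. The posting function, account numbers, and amounts are hypothetical assumptions made for illustration; they are not BSM's or LMP's actual SGL posting rules.

    # Hypothetical posting rule for an inventory receipt; the account
    # numbers are illustrative placeholders, not actual SGL accounts.
    def post_inventory_receipt(quantity, unit_price):
        """Receiving inventory debits an inventory account and credits
        an accounts payable account."""
        amount = round(quantity * unit_price, 2)
        return [("1521", "debit", amount),
                ("2110", "credit", amount)]

    def test_receipt_posting():
        entries = post_inventory_receipt(quantity=10, unit_price=12.50)
        debits = sum(amt for _, side, amt in entries if side == "debit")
        credits = sum(amt for _, side, amt in entries if side == "credit")
        assert debits == credits == 125.00           # entries must balance
        assert ("1521", "debit", 125.00) in entries  # posts to expected accounts
        assert ("2110", "credit", 125.00) in entries

    test_receipt_posting()

The essential feature is that an actual transaction is executed and its resulting entries are compared with predefined expected results, rather than a sample of already recorded transactions merely being reviewed.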
In the case of LMP, we found that the Army relied upon Joint Financial Management Improvement Program (JFMIP) testing for 147 requirements because JFMIP had validated these requirements when it tested the vendor's commercial software used for LMP during fiscal year 1999. JFMIP testing, however, should not be considered a substitute for individual system testing of the actual data that will be used by the entity. Further, JFMIP's tests of the software do not address entity-specific integrated tests of end-to-end transactions or system interfaces. Because the Army had to modify the basic commercial software package to accommodate some of its business operations, the Army cannot be assured, without retesting, that these 147 requirements will produce the intended results. Without adequate documentation to support testing of the FFMIA requirements, and based on our findings, it is questionable whether BSM and LMP are substantially compliant with FFMIA. As a result, DLA and the Army cannot provide reasonable assurance that BSM and LMP will routinely generate timely, accurate, and useful information with which to make informed decisions and to ensure accountability on an ongoing basis.

DOD has made limited progress in achieving effective management oversight, control, and accountability over its $19 billion in business system investments. As a result, DOD cannot provide Congress reasonable assurance that the billions of dollars being spent annually on system modernizations are not being wasted on projects that will perpetuate the current costly, nonintegrated, duplicative systems environment. Our two case studies—BSM and LMP—are prime examples of DOD business system modernization projects costing billions of dollars that are not directed toward a corporate solution for resolving some of DOD's long-standing financial and inventory management problems. Rather, these efforts are more narrowly focused on DLA's and the Army's business operations, and even within that more restricted scope, weaknesses in project management have resulted in problems in delivering the intended capabilities. As the department moves forward with the continued development and implementation of the business enterprise architecture, it is critical that actions be taken to gain more effective control over business system funding. Maintaining the status quo of permitting each of the military services and DOD agencies to manage and oversee its own business systems investments only serves to perpetuate the existing nonintegrated and duplicative systems environment and continues to impede the department's overall transformation as envisioned by the Secretary of Defense. The manner in which business system funding is currently controlled hampers the development and implementation of broad-based, integrated corporate system solutions to address DOD-wide problems. Each military service and defense agency receives its own funding and is largely autonomous in deciding how to spend these funds, thereby enabling multiple system approaches to common problems. This funding structure has contributed to the duplicative, nonintegrated, error-prone systems environment that exists today.
To improve management oversight, accountability, and control of the department's business systems funding, Congress may wish to consider the following four legislative initiatives:

Assign responsibility for the planning, design, acquisition, deployment, operation, maintenance, modernization, and oversight of business systems to domain leaders (e.g., the Under Secretary of Defense for Acquisition, Technology and Logistics and the DOD CIO).

Direct the Secretary of Defense, in coordination with the domain leaders, to develop a defense business system budget that (1) identifies each business system for which funding is being requested, (2) identifies all funds by appropriation type and whether they are for current services or modernization, and (3) provides justification for expending funds on systems that are not in compliance with the department's business enterprise architecture.

Appropriate the funds to operate, maintain, and modernize DOD's business systems to the domain leaders rather than to the military services and defense agencies.

Direct that each domain establish a business system investment review board, composed of representatives from the military services and defense agencies, that is responsible for the review and approval of all business system investments.

To help improve the department's (1) control and accountability over its business systems investments and (2) future deployments of BSM and LMP, we are making the following four recommendations. We recommend that the Secretary of Defense direct:

The Under Secretary of Defense (Comptroller) and the Assistant Secretary of Defense for Networks and Information Integration to develop a standard definition for DOD components to use in identifying business systems.

The Assistant Secretary of Defense for Networks and Information Integration to expand the existing IT Registry to include all business systems.

The Under Secretary of Defense (Comptroller) to establish a mechanism for tracking all business systems modernization conditional approvals to provide reasonable assurance that all specific actions are completed on time.

The Director, Defense Logistics Agency, and the Commanding General, Army Materiel Command, to take two actions. First, develop requirements that contain the necessary specificity to reduce requirements-related defects to acceptable levels; the requirements management process used to develop and document the requirements should be adequate to ensure that each requirement (1) fully describes the functionality to be delivered, (2) includes the source of the requirement, (3) is stated in unambiguous terms that allow for quantitative evaluation, and (4) is consistent, verifiable, and traceable. Second, conduct thorough testing before (1) making further deployment decisions and (2) adding functionality to existing deployment locations.

We received written comments on a draft of this report from the Acting Under Secretary of Defense (Comptroller) (see app. II). DOD agreed with our four recommendations to the Secretary of Defense and with two of the four matters for congressional consideration. With regard to the recommendations to the Secretary of Defense, the department identified actions it has under way and planned to address the concerns discussed in the report. For example, the department stated that a system has been developed that will track all business systems modernization conditional approvals until all required actions are completed.
In addition, the department acknowledged that the initial implementations of BSM and LMP experienced problems that could be attributed to the lack of adequate requirements determination and system testing. To address these inadequacies, the department noted that requirements analysis had been expanded to include greater specificity and to require the successful completion of comprehensive testing prior to further implementation of either system. The department also stated that industry best practices would be followed.

With regard to our matters for congressional consideration, the department disagreed that (1) responsibility for the planning, design, acquisition, deployment, operation, maintenance, modernization, and oversight of business systems be assigned to domain leaders (e.g., the Under Secretary of Defense for Acquisition, Technology and Logistics and the DOD CIO) and (2) funds to operate, maintain, and modernize DOD's business systems be appropriated to domain leaders rather than to the military services and defense agencies. On the first matter, the department stated that it is developing its business enterprise architecture and its business IT investment management structure and that these structures will provide the necessary management and oversight responsibility. DOD also noted that business system portfolio management would be an integral part of its oversight efforts. Further, DOD noted that the domain leaders will work closely with component acquisition executives and the DOD CIO, who have statutory responsibilities for IT-related investment activities. We continue to believe that Congress may wish to consider assigning to the domains the responsibility for the planning, design, acquisition, deployment, operation, maintenance, modernization, and oversight of business systems. Leaving DOD components responsible for these functions has resulted in the existing business system environment of at least 2,274 systems, which are not capable of providing DOD management and Congress accurate, reliable, and timely information on the results of the department's vast operations. DOD has recently stated that the actual number of systems could be twice the number currently reported. Further, despite DOD's assertion that component acquisition executives will work more closely with domain leaders under the current statutory structure, the various DOD components are largely autonomous and thus have no incentive to seek corporate solutions to problems. Our two case studies—BSM and LMP—clearly demonstrate that these two system modernization efforts are not directed toward a corporate solution for resolving the department's long-standing weaknesses in areas such as inventory and logistics management. Within the current departmental organization structure, DOD components are able to develop multiple system approaches to common problems.

With regard to the funding being provided to the domains, the department stated that the portfolio management process being established—to include investment review boards—would provide the appropriate control and accountability over business system investments. DOD also noted that beginning with the fiscal year 2006 budget review process, the domains will be actively involved in business system investment decisions.
While the establishment of the investment review boards is consistent with our previous recommendations, we continue to believe that appropriating funds for DOD business systems to the domains will significantly improve accountability over business system investments. DOD's comments indicate that the domains will be more accountable for making business system investment decisions, but unless they control the funding, they will not have the means to effect real change. Continuing to provide business system funding to the military services and defense agencies is an example of the department's embedded culture and parochial operations. As a result of DOD's intent to maintain the status quo, there can be little confidence that it will not continue to spend billions of dollars on duplicative, nonintegrated, stovepiped, and overly costly systems that do not optimize mission performance and accountability and, therefore, do not support the department's transformation goals.

As agreed with your offices, unless you announce the contents of this report earlier, we will not distribute it until 30 days after its date. At that time, we will send copies to the Chairmen and Ranking Minority Members, Senate Committee on Armed Services; Subcommittee on Defense, Senate Committee on Appropriations; House Committee on Armed Services; Subcommittee on Defense, House Committee on Appropriations; Senate Committee on Governmental Affairs; and House Committee on Government Reform. We are also sending copies to the Director, Office of Management and Budget; the Under Secretary of Defense (Comptroller); the Under Secretary of Defense (Acquisition, Technology and Logistics); the Assistant Secretary of Defense (Network and Information Integration); the Director, Defense Logistics Agency; and the Commanding General, Army Materiel Command. Copies of this report will be made available to others upon request. The report is also available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact Gregory D. Kutz at (202) 512-9505 or kutzg@gao.gov or Keith A. Rhodes at (202) 512-6412 or rhodesk@gao.gov. GAO contacts and key contributors to this report are listed in appendix V.

We reviewed the Department of Defense's (DOD) $28 billion fiscal year 2004 information technology (IT) budget request to determine what portion of the budget relates to DOD business systems. We reviewed the budget to determine, of the approximately $19 billion related to the department's business systems, the amount allocated for operation, maintenance, and development. Additionally, we reviewed DOD's business systems inventory, as reported by the department in April 2003, to ascertain if the systems were identified in the budget request. To obtain an overview of how an IT budget request is developed, we also met with officials in the offices of the DOD Comptroller and DOD Chief Information Officer (CIO), as well as CIO and financial management officials from the military services. To determine the effectiveness of DOD's control and accountability over its business systems investments, we met with DOD officials to obtain an update on the status of our prior recommendations. We also met with appropriate officials in the DOD Comptroller and DOD CIO offices to discuss the status of various draft policies and guidance that are aimed at improving the department's control and accountability over business system investments.
We also reviewed and analyzed the DOD budget requests for fiscal years 2003 through 2005 to identify the business systems investments that could be subject to the requirements of the Bob Stump National Defense Authorization Act for Fiscal Year 2003, which requires the DOD Comptroller to review all system improvements with obligations exceeding $1 million and to determine whether each improvement is in accordance with the criteria specified in the act. To assess DOD's compliance with the act, we also obtained and reviewed departmental guidance, memorandums, DOD Comptroller review decisions, and other documentation provided by the Business Management Systems Integration (BMSI) office. Additionally, we requested that DOD provide us data on obligations in excess of $1 million for business systems for fiscal years 2003 and 2004, as of December 2003. We received obligation data from the military services but did not receive any information from the defense agencies. We then compared the obligation data provided by the military services with the information from the BMSI office to determine whether the modernizations were reviewed as stipulated by the act. To augment our document reviews and analyses, we interviewed officials from various DOD organizations, including the Office of the Under Secretary of Defense (Comptroller); the Office of the Assistant Secretary of Defense (Network and Information Integration)/Chief Information Officer; the Office of the Under Secretary of Defense (Acquisition, Technology and Logistics); and CIO and financial management officials from the military services.

To determine whether selected DOD business system projects are being effectively managed and will help resolve some of DOD's long-standing business operation problems, we selected the logistics domain, from which we chose individual case studies for detailed review. We selected the logistics domain because it represents $770 million, or 16 percent, of the modernization funding requested in fiscal year 2004 for the department's business systems. The logistics domain was also selected because of its significance to DOD operations and its long-standing and inherent inventory and related financial management weaknesses, such as the inability to support its inventory balances and provide total asset visibility. We selected the Defense Logistics Agency's (DLA) Business Systems Modernization (BSM) and the Army's Logistics Modernization Program (LMP) for detailed review. For these two business systems, we focused on two key processes: requirements management and testing.
To assess whether DLA and the Army had established and implemented disciplined processes related to requirements management and testing, we reviewed DLA’s and the Army’s procedures for defining requirements management frameworks and compared these procedures to their current practices; reviewed guidance published by the Institute of Electrical and Electronics Engineers and the Software Engineering Institute and publications by experts to determine the attributes that should be used for developing good requirements; reviewed BSM’s system requirement documents related to finance, order fulfillment, planning, and procurement and LMP’s system requirement documents related to planning and budget development, asset management, inventory management, and maintenance analysis and planning; and selected 13 of BSM’s 202 system requirements and 12 of LMP’s 293 system requirements and performed an in-depth review and analysis to determine whether they had the attributes normally associated with good requirements and whether these requirements traced between the various process documents. To augment these document reviews and analyses, we interviewed DLA and Army program officials and Defense Finance and Accounting Service (DFAS) officials. To identify the costs associated with BSM and LMP, we reviewed data provided by DLA and Army program officials. We also reviewed prior GAO, DOD Inspector General, and service auditors’ reports, as well as DOD’s agencywide financial statements to obtain further information on inventory costs. We conducted our work at the Office of the Under Secretary of Defense (Comptroller); the Office of the Under Secretary of Defense (Acquisition, Technology and Logistics); the Office of the Assistant Secretary of Defense (Network and Information Integration)/Chief Information Officer; DLA; the Army Materiel Command; and the CIO and financial management offices for the military services. We also visited two locations—the Defense Supply Center in Richmond, Virginia, and the Army’s contractor site (Computer Sciences Corporation) in Moorestown, New Jersey—to gain an understanding of user involvement in the development and operation of BSM and LMP, as well as the business processes associated with each system. We conducted our work from August 2003 through March 2004 in accordance with U.S. generally accepted government auditing standards. We did not verify the accuracy and completeness of the cost information provided by DOD for the two projects we reviewed. We requested comments on a draft of this report from the Secretary of Defense or his designee. We received written comments on a draft of this report from the Acting Under Secretary of Defense (Comptroller), which are reprinted in appendix II. 
[Table: business systems and associated obligations; individual amounts omitted. Systems listed: Air Force Financial Information Resource System; Navy Enterprise Resource Planning Pilots; National Security Agency Pilot Initiative; Navy Enterprise Resource Planning Program; Defense Integrated Military Human Resources System; DFAS Mechanization of Contract Administration Services Rehost; DFAS PowerTrack (SCR); Army Integrated Facilities System (SCR); Navy Enterprise Maintenance Automated Information System; DFAS e-Biz Capital Investment Reprogramming; DFAS Operational Data Store (SCR); Composite Health Care System II; DFAS General Accounting and Finance System Rehost; Air Force Reserve Travel System; DFAS Automated Time, Attendance and Production System (SCR); DFAS Defense Joint Military Pay System—Active Component (SCR); DFAS Defense Joint Military Pay System—Reserve Component (SCR); DFAS Defense MilPay Office (SCR); DFAS Defense Retired and Annuitant Pay System (SCR); DFAS Marine Corps Total Force System (SCR); Transportation Coordinators' Automated Information for Movements System II; Army Recruiting Information Support System; Defense Civilian Personnel Data System-Sustainment; MEPCOM Management Information Reporting System; Joint Computer-Aided Acquisition and Logistics Support; Navy Tactical Command Support System; Marine Corps Common Hardware Suite; Electronic Military Personnel Records System; Navy Standard Integrated Personnel System; Conventional Ammunition Integrated Management System; Shipyard Management Information Systems-Financials; SPAWAR Financial Management-ERP; MSC Afloat Personnel Management Center; Military Sealift Command Financial Management System; Asset Tracking Logistics and Supply System; Integrated Logistics System-Supply; Depot Maintenance Accounting and Production System; Supply Working Capital Fund Decision Support System (Keystone); and Reliability and Maintainability Information System. For fiscal year 2004, DOD did not report obligational data.] Staff members who made key contributions to this report were Beatrice Alff, Johnny Bowen, Francine DelVecchio, Stephen Donahue, Francis Dymond, Jeffrey Jacobson, Jason Kelly, Mai Nguyen, Michael Peacock, David Plocher, and Katherine Schirano.
| Despite its significant investment in business systems, the Department of Defense (DOD) continues to have long-standing financial and inventory management problems that prevent it from producing reliable and timely information for making decisions and for accurately reporting on its billions of dollars of inventory. GAO was asked to (1) identify DOD's fiscal year 2004 estimated funding for its business systems, (2) determine if DOD has effective control and accountability over its business systems investments, and (3) determine whether selected business systems will help resolve some of DOD's long-standing problems and whether they are being effectively managed. DOD requested approximately $19 billion for fiscal year 2004 to operate, maintain, and modernize its reported 2,274 business systems. This stovepiped and duplicative systems environment evolved over time as DOD components—each with its own system funding—developed narrowly focused, parochial solutions to their business problems. As a result of this uncontrolled spending, DOD reported over 200 inventory systems and 450 personnel systems. DOD's fundamentally flawed business systems affect mission effectiveness and can contribute to the fraud, waste, and abuse that GAO continues to identify. Further, the number of business systems is likely understated in part because DOD does not have a central systems repository or a standard business system definition. DOD does not have an effective management structure for controlling business systems investments, and the business domains' roles and responsibilities have not been defined. Further, DOD does not have reasonable assurance that it is in compliance with the National Defense Authorization Act for Fiscal Year 2003, which requires the DOD Comptroller to determine that system improvements exceeding $1 million meet the criteria specified in the act. Based on limited information provided by DOD, individual system improvements exceeding $1 million—totaling at least $479 million in obligations—were not reviewed by the DOD Comptroller. GAO's two case studies are examples of DOD spending hundreds of millions on business systems that will not result in corporate solutions to its long-standing inventory and related financial management problems. While these efforts should provide some improvement to the Defense Logistics Agency's and the Army's business operations, implementation problems have resulted in schedule slippages, cost increases, and critical capabilities not being delivered. These issues can be attributed, in part, to the lack of disciplined processes in the areas of requirements management and testing. If not corrected, the problems will result in two more costly, nonintegrated systems that only marginally improve DOD business operations and further impede DOD's transformation as envisioned by the Secretary of Defense.
In 2007, we identified eight key programs that aim to protect critical technologies: the Arms Export Control System, the Dual-Use Export Control System, Anti-Tamper Policy, the Foreign Military Sales Program, the National Disclosure Policy Committee, the Militarily Critical Technologies Program, the National Industrial Security Program, and the Committee on Foreign Investment in the United States. Responsibilities for these programs are shared among multiple federal agencies and offices, primarily within the Departments of Commerce, Defense, Homeland Security, Justice, State, and Treasury. As shown in table 1, multiple agencies either have the lead or are stakeholder agencies for the programs for the identification and protection of critical technologies. In our 2013 high-risk update, we noted that these programs do not work collectively as a system and that the administration had not taken steps to re-examine the portfolio of programs to address their collective effectiveness; any actions to improve programs had largely focused on addressing challenges in individual programs. Although we did not previously divide this list of programs into export control and non-export-control programs, an ongoing presidential initiative known as Export Control Reform has emphasized the relationship between two of the programs on the list: the Arms Export Control System and the Dual-Use Export Control System. These two programs—which we refer to collectively as export control programs—impose licensing requirements on persons that create or trade in specified categories of items and information. The other programs in the portfolio—which we refer to as non-export-control programs—are not part of this system for controlling exports. The cognizant agencies have taken actions in each of the programs designed to protect critical technologies since our January 2007 high-risk update, in response to changes in law, our prior recommendations, or their own internal identification of weaknesses. For instance, initiatives are under way in the area of export controls, which comprises two of the eight programs, based on an April 2010 framework announced by the administration. The six non-export-control programs have undergone changes through internal agency or department initiatives, or through legislative requirements. However, some of these eight programs are facing implementation or additional challenges. In 2009, the administration directed an interagency review of the U.S. export control system that resulted in the establishment of an Export Control Reform initiative a year later. This initiative is under way, and actions have been implemented using a phased approach with three planned phases: Phase I developed plans and made preparations for Phase II, which is the implementation of steps to reconcile various definitions, regulations, and policies for export controls, all while building toward Phase III. This third phase is to result in implementation of major changes supported by these reconciliations, by consolidating export control efforts in four reform areas: a single, consolidated control list, a single licensing agency, a primary export enforcement coordination agency, and a unified information technology system. As we concluded in our November 2010 review of the Export Control Reform initiative, this approach has the potential to address weaknesses in the U.S.
export control system, including areas where agencies have not addressed prior GAO findings. The reform effort is currently in Phase II, and changes are occurring in each of the four reform areas, to varying degrees. Challenges are also present in each, such as achieving full implementation of the Federal Export Enforcement Coordination Center, designed to coordinate enforcement efforts across all export agencies. Further, delays exist in agencies' use of DOD's USXPORTS system as the unified information technology system for licensing. Moreover, full implementation of Export Control Reform in Phase III is dependent upon congressional action to revise legislation, particularly in licensing and enforcement activities. In order to regulate the export of items and information with military applications, State and Commerce each maintain a separate control list of items that require a license before they can be exported—the U.S. Munitions List, for State, and the Commerce Control List, for Commerce. Because State and Commerce have different restrictions on the items they control, determining which agency controls exported items is fundamental to the effectiveness of the U.S. export control system. Over 10 years ago, we found that both departments had claimed jurisdiction over the same items, such as certain missile-related technologies, and the administration noted that these types of jurisdictional issues were still present in 2010, when it began Export Control Reform. Such jurisdictional disagreements and problems in the past have often resulted from the departments' differing interpretations of the regulations and from minimal or ineffective coordination among the departments. As part of the reform initiative, a task force created new export control criteria to determine which items and technologies should be controlled by Commerce and which by State, thus helping to reduce uncertainty. In implementing this process, Commerce, State, and Defense officials involved in the reform initiative are working to reach agreement on the appropriate controls over items in the 21 categories of State's U.S. Munitions List and the corresponding controls for items Commerce, State, and Defense officials determine should be moved to the Commerce Control List. Since the first set of revised rules went into effect in 2013, 15 of the 21 categories of the U.S. Munitions List have been reviewed and final rules have been issued to clearly identify the jurisdiction of controlled items. These revisions are intended to move certain less sensitive items from State's U.S. Munitions List to the Commerce Control List, while leaving high-risk and high-priority items and information on State's list. The moved items are subject to Commerce's more flexible Export Administration Regulations. The aim of these revisions is to enhance national security by increasing interoperability with allies, maintaining the U.S. defense industrial base, and enabling the U.S. export control agencies to focus on items and destinations of greater concern. For example, military aircraft instrument flight trainers not specially designed to simulate combat have been transitioned from the U.S. Munitions List to the Commerce Control List. An additional three categories of the U.S. Munitions List, pertaining to arms and ammunition, are on hold because they relate to the politically sensitive issue of gun control policy, according to senior-level export administration officials at both State and Commerce.
The final three categories are still under review. Current efforts have focused on transitioning less sensitive items from the U.S. Munitions List to a new section of the Commerce Control List called the 600 Series, which was added in order to provide a separate classification for munitions newly under Commerce's jurisdiction. According to Commerce's Assistant Secretary for Export Administration, the completion of all of these revisions is expected by late 2015. As a result of the control list revision process, however, licensing staff and industry are contending with three general types of controls—the U.S. Munitions List, the 600 Series controls on the Commerce Control List, and the dual-use controls on the Commerce Control List. This is intended as an intermediary step on the path to a single list, but an official from State's Directorate of Defense Trade Controls noted that, for now, some exporters are confused by the multiple lists. This confusion among exporters, although typically expected when there are various lists, could delay or impair achievement of the Export Control Reform's goal of overcoming the inefficiencies of the previous export control system until the final integration to a single list is completed. These changes to the lists are also affecting export control enforcement actions that rely on processing of commodity jurisdictions—which determine whether an item is controlled by State or Commerce—by the Department of State, according to enforcement agency officials. Two officials from the Department of Justice, as well as the Deputy Assistant Director of the Department of Homeland Security's (DHS) Homeland Security Investigations Counter-Proliferation Investigations Program, told us that it is taking longer for State to issue decisions, including commodity jurisdiction determinations, because of the recent changes to the control lists. They stated that the changing jurisdiction of items is resulting in a greater need for investigators to obtain timely commodity jurisdictions than in the past. Consequently, these officials noted that, given the time it is taking to complete the commodity jurisdiction decision—upwards of 6 months in some cases, which is well beyond State's goal of 60 days—it is difficult for law enforcement to build a case and receive timely information to take specific enforcement actions, such as authorization to execute a search warrant or to obtain criminal indictments. The Deputy Assistant Director of DHS's Homeland Security Investigations Counter-Proliferation Investigations Program also noted that commodity jurisdictions often involve review by other agencies, such as DOD technical experts in addition to the licensing agencies of State and Commerce, and that this involvement, in addition to the limited staff at the State Department available to conduct commodity jurisdictions for law enforcement agencies, may be contributing to the length of time taken. Compliance officials at State indicated that they try to prioritize commodity jurisdiction requests from law enforcement, but increases in the frequency of these requests and duplication of requests have made it difficult for them to keep up with law enforcement needs. According to the Department of Justice officials, the length of time it takes to receive the certification necessary from the State Department to proceed with their enforcement actions, such as search warrants, results in cases losing momentum.
These delays also contribute to increased difficulty in keeping witnesses interested and available if and when the case goes to trial. In addition, the Deputy Assistant Director of DHS's Homeland Security Investigations Counter-Proliferation Investigations Program told us that DHS is experiencing the same challenge in conducting enforcement activities. Officials with both Homeland Security Investigations and the Executive Office for U.S. Attorneys stated that these delays are having an adverse effect on numerous cases and investigations. DHS documents show that the number of requests for support by State that would include a commodity jurisdiction has doubled since 2008, reaching more than 250 requests in 2014. Further, two officials from the Department of Justice told us that in this transitional period, the revisions to the control list are creating some degree of confusion and making the prosecution of cases more subjective. The burden of proof in export control cases is to establish that the individual or entity willfully and knowingly intended to violate the law, and the increased confusion can complicate efforts to prove that intent. These officials stated that they are beginning to collect information on the impact of this confusion, and according to officials at State, DHS, and Justice, they are working together to develop updated procedures for requesting commodity jurisdictions that will facilitate the process on both sides and reduce confusion. In the meantime, some export control enforcement actions may continue to be inhibited. Efforts to create a single licensing agency are awaiting Phase III legislative authorization, but Phase II actions are under way. In order to address the national security risks of controlled items falling into the wrong hands, the export control programs may require licenses for the export of controlled items. Under the current export control system, the Departments of State and Commerce each have the authority to issue export licenses for items within their respective jurisdictions. In 2010, licensing agencies within these departments processed over 100,000 licenses. For some transactions, exporters were required to apply for licenses from both departments, because the transactions contained both U.S. Munitions List and Commerce Control List items. The goal of the reform initiative is to create a single licensing agency, which would act as a single source for businesses seeking an export license and for the U.S. government to coordinate review of license applications. As one step, State has been authorized to issue licenses for items subject to Commerce's jurisdiction that are used in or with items subject to State's jurisdiction. Such action—when combined with the revised control lists—is expected to result in fewer license requests and the use of a greater number of license exceptions (see GAO, Export Controls: U.S. Agencies Need to Assess Control List Reform's Impact on Compliance Activities, GAO-12-613 (Washington, D.C.: Apr. 23, 2012)). An assessment of licensing resources is planned once the transition of items from the U.S. Munitions List to the Commerce Control List is completed; this assessment is important, but does not fully address our 2012 recommendation that Commerce review its resource needs for all of its compliance activities. The reform initiative also calls for a primary export enforcement coordination agency: the Export Enforcement Coordination Center (E2C2) was established to coordinate enforcement efforts across the export agencies, with the goal of limiting duplicative or counterproductive activities (see GAO, Export Controls: Challenges Exist in Enforcement of an Inherently Complex System, GAO-07-265 (Washington, D.C.: Dec. 20, 2006)).
The Director of the E2C2 provided data showing that over 3,000 submissions have been made through the deconfliction process, and slightly over half have resulted in additional information-sharing of some kind. Enforcement officials at multiple agencies described positive effects of the E2C2 deconfliction process in enabling coordination among the agencies. According to key enforcement officials that we spoke with at the E2C2, DHS, and the Departments of Justice and Commerce, this is a resource-intensive process, in part because it is still managed manually, which slows down their ability to quickly deconflict information. Initial steps to address these inefficiencies by automating the deconfliction process are under way. These efforts taken by the E2C2 are not yet complete, but they are a good start toward achieving a more coordinated approach to the enforcement of export controls. Finally, the Export Control Reform initiative proposes a single information technology system to administer the export control system and share information regarding licensing and related actions among the export control agencies. According to administration Export Control Reform plans, DOD's USXPORTS database will eventually serve as the single electronic system to process export licensing. Several agencies, including State and DOD, are using USXPORTS for export control licensing; however, Commerce is not yet using this system, a more than 2-year delay from the originally anticipated migration date of May 2012 that, according to the Assistant Secretary for Export Administration, was largely a result of sequestration and budget issues. According to Commerce officials, Commerce is working with the DOD contractor to mitigate issues with two major system requirements concerning crossover from classified to unclassified domains and the interface between Commerce's licensing and enforcement databases. The Commerce officials stated that once these issues have been addressed, Commerce's export licensing process will transfer to the USXPORTS system. Additionally, the unified information technology system may not address all the information technology needs of the export control enforcement agencies. The other agencies that conduct export enforcement activities, within DHS and Justice, are not presently using USXPORTS because it is not intended as a repository of enforcement information, but officials stated that the licensing data it will contain may be a useful tool for them in the future. Moreover, in November 2010, we found that the Export Control Reform initiative for information technology did not fully address findings from our previous work. In addition, in October 2007, we found that export control enforcement agencies lack a system to identify all parties that engage in nuclear proliferation and are impaired from judging their progress in preventing nuclear networks because they cannot readily identify basic information on the number, nature, or details of all their enforcement activities involving nuclear proliferation. Since that report was issued, Commerce has implemented procedures to address a recommendation on this issue, but Treasury has not. Across all four areas of Export Control Reform, full implementation is expected to occur in its third and final phase—Phase III—which focuses on implementing the reform proposals that are dependent upon congressional action, such as creating a single licensing agency and a primary export enforcement coordination agency.
For example, because there are separate statutory bases for State and Commerce to review and issue export licenses, legislation will be required to consolidate the current system into a single licensing agency. Further, Phase III of the reform initiative plans to merge export control investigative resources from Commerce into DHS's Immigration and Customs Enforcement. Moreover, officials from Justice's National Security Division noted that the enforcement agencies at Commerce, DHS, and the Federal Bureau of Investigation currently provide for a diverse group of investigators with varying but valuable assets to the prosecutorial community, which they hope will be sustained through the Phase III effort. For these reasons, significant collaboration by the participating agencies is essential to the Phase III consolidation efforts. The remaining programs that have a role in protecting critical technologies—designated as non-export-control—have also undergone individual changes in response to previously identified weaknesses. Four of the major programs in the portfolio are led by offices at DOD—Anti-Tamper Policy, the National Disclosure Policy Committee, the Militarily Critical Technologies Program, and the National Industrial Security Program—with a fifth, the Foreign Military Sales Program, led by State in approving the transfers and administered by DOD. Another program, the Committee on Foreign Investment in the United States (CFIUS), is led by Treasury, with participation from several other agencies, and has undergone changes in response to legislative action in 2007. We found that some of these programs have processes in place for sharing information on potential threats or needed actions between the programs, but these actions have not yet been completed. DOD established its Anti-Tamper Policy in 1999, requiring the military departments to implement techniques to protect critical technologies that might be vulnerable to exploitation—through such means as reverse engineering—when weapons leave U.S. control through export or loss on the battlefield. Examples of anti-tamper techniques include software encryption, which scrambles software instructions to make them unintelligible without first being reprocessed through a deciphering technique, and hardware protective coatings designed to make it difficult to extract or dissect components without damaging them. We reviewed this program in 2008, and at that time we found that, although DOD program managers were ultimately responsible for implementing its anti-tamper policy, a lack of direction, information, and tools created significant challenges for them. Since 2008, in response to our recommendation that DOD identify and provide additional tools to assist program managers in the anti-tamper decision process, DOD's Anti-Tamper Executive Agent's Office has improved the training for anti-tamper policies that it offers to program managers. In addition, DOD's acquisition reform initiatives of Better Buying Power 2.0, including the Defense Exportability Features, are building anti-tamper features into the design phase of a weapon system's development process—much earlier than in the past. The effectiveness of these changes remains to be assessed, but they represent positive actions to improve past weaknesses in this program. Each year, the U.S. government sells billions of dollars of defense articles and services to foreign governments through the Foreign Military Sales program.
The Arms Export Control Act authorizes the sale of defense articles and services to eligible foreign customers by the President under the Foreign Military Sales program. The President has delegated transfer approval to State under the Foreign Military Sales program and implementation authority to DOD to administer it. Both agencies have taken steps to reform the program in response to some, but not all, of our findings and recommendations from multiple prior reports examining this program. Specifically, in May 2009, we recommended that DOD take actions to improve its verification and tracking of Foreign Military Sales shipments, which led to DOD improvements in its systems to expand the available information for tracking Foreign Military Sales shipments, as well as its guidance on how to verify those shipments. However, although the agencies generally concurred with it, our interagency recommendation on ensuring Customs and Border Protection officials have the necessary information to verify shipments remains unaddressed. Based on recommendations from a report we issued in November 2012, the Defense Security Cooperation Agency has updated its policies to improve the quality of information sharing and to better track timeliness of shipments. However, additional recommendations on metrics for assessing timeliness of other aspects of the shipping process have not yet been implemented, although DOD concurred with these recommendations and told us it is working to collect the necessary information to better measure timeliness. The National Disclosure Policy Committee determines the releasability of classified military information, including classified weapons and military technologies, to foreign governments. As members of the Committee, each military department has its own administrative process for reviewing requests for transfers of classified weapons and information, within the parameters of the National Disclosure Policy. Since 2008, in support of its portfolio of security cooperation programs, which include the Foreign Military Sales program and the National Disclosure Policy Committee, a DOD coordinating body has met monthly to discuss potential technology transfers to foreign governments and improve processes for reviewing transactions that implicate critical technologies protection issues. The Arms Transfer and Technology Release Senior Steering Group (ATTR SSG) brings together representatives from numerous DOD offices. It is co-chaired by the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics and the Office of the Under Secretary of Defense for Policy, and members include the Defense Security Cooperation Agency, the military departments, the Joint Staff, and other DOD agencies with technology security and foreign disclosure responsibilities. Additionally, due to their shared responsibilities on Foreign Military Sales and export controls, two offices from the Department of State participate in the ATTR SSG, as well. A representative of State’s Office of Regional Security and Arms Transfers was formally added as an observer to the ATTR SSG in 2012 and a representative from the Directorate of Defense Trade Controls has been added more recently. In response to the Export Administration Act of 1979, DOD established the Militarily Critical Technologies Program in 1980 to develop the Militarily Critical Technologies List (MCTL) of technologies possessed by sources in the U.S. 
that, if exported, would permit a significant advance in the military system of another country. Its original purpose was to inform export licensing determinations, and it was to be integrated into the Commerce Control List on an ongoing basis. Since then, the list has expanded to capture technology capabilities developed worldwide. In January 2013, we found that the MCTL was out-of-date and was no longer being published online, but that widespread requirements to know what is militarily critical remained. We recommended that the Secretary of Defense (1) determine the best approach to meeting users' needs for a technical reference, whether it be MCTL, other alternatives being used, or some combination thereof; and (2) ensure that resources are coordinated and efficiently devoted to sustain the approach chosen. We further recommended that if DOD determines that the MCTL is not the optimal solution for aiding programs' efforts to identify militarily critical technologies, the Secretary of Defense seek necessary relief from DOD's current responsibility. According to DOD officials responsible for the MCTL, they are no longer updating the list, and are in the process of determining whether it is appropriate to seek relief from the requirement to maintain the list. They stated that alternatives to the MCTL are being employed based on the specific needs of each agency, and DOD offices are using the U.S. Munitions List, the Commerce Control List 600 Series, and the Industrial Base Technology List as alternatives to the MCTL. For example, officials in the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics stated that DOD offices and agencies are using the U.S. Munitions List in support of export license processing and foreign disclosure decisions. However, DOD has not formally determined the best approach to meet users' needs for a technical reference and to ensure that resources are coordinated and efficiently devoted to sustain the approach chosen. DOD's National Industrial Security Program (NISP) was established in 1993 to ensure that federal contractors cleared for classified information, including information associated with critical technologies, are taking the proper steps to appropriately safeguard that information. DOD's Defense Security Service administers NISP by reviewing contractor applications for clearance and overseeing cleared facilities. NISP's role within the critical technologies portfolio relates to its review of contractors under foreign ownership, control, or influence (FOCI). In our previous work on this topic, we found insufficient oversight to ensure the security of classified information (including critical technology information) from foreign interests and cited lengthy delays in security reviews. In 2004, we reviewed the NISP program and recommended improvements to oversight of contractor protection of classified information. Although DOD concurred with these recommendations, they were not implemented. In 2005, we made a number of recommendations for better administration of FOCI oversight, including two, which DOD subsequently implemented, on efforts to develop a human capital strategy that would better serve the needs of FOCI security representatives.
In a recent interview, officials in charge of this program told us that they have increased their staff resources from approximately 5 people in 2004 to 40 people presently and have also implemented a risk-based decision-making and evaluation process for overseeing facilities that handle classified information. This risk-based approach to reviewing facilities includes an annual update of the list of facilities in the United States that conduct classified work, and a prioritization of which facilities will be visited based on criteria and input from key stakeholders within DOD and the intelligence community. Prior to these changes, many cases were taking upwards of one year to conduct reviews and put FOCI measures in place for companies that conduct classified work for the U.S. government. According to the head of the Defense Security Service's FOCI Operations Division, these changes in staffing and taking a risk-based approach have reduced the backlog of reviews of new companies handling classified information. The Analytic Division now conducts FOCI reviews of all of the roughly 1,300 new companies seeking clearance each year. For previously cleared facilities, officials stated that this risk-based approach allowed Defense Security Service staff to complete security vulnerability assessments at about half of the roughly 13,500 cleared facilities under their purview in 2014. They told us that this process also allows staff to prioritize and target those reviews to higher-risk facilities; for example, in 2014, they conducted reviews at 586 of the roughly 600 cleared facilities that have FOCI mitigation in place. NISP had previously used the MCTL to categorize the types of classified information by technology that was being targeted in cleared facilities by foreign entities. However, according to officials with Defense Security Service's Counterintelligence Directorate, the MCTL was not broad enough to cover non-defense-related technologies, in fields such as agriculture. With the transition away from the MCTL, the Defense Security Service developed the Industrial Base Technology List to better cover the range of categories of concern to NISP, and according to these officials, the Counterintelligence Directorate continually updates this list to include new technologies requiring oversight as they are developed, as well as non-defense technologies that fall under the purview of NISP. CFIUS is an interagency committee that serves the President by overseeing the national security implications of foreign investment in the U.S. economy. CFIUS is chaired by Treasury, and includes members from other federal agencies such as Commerce, Defense, Energy, Homeland Security, Justice, and State, among others. CFIUS reviews foreign acquisitions, mergers, or takeovers of a U.S. business to determine whether the transaction poses a threat to the national security of the United States. CFIUS may also enter into an agreement with, or impose conditions on, parties to mitigate national security risks. One component of this review involves discussion of any relevant critical technologies and the potential impacts of foreign ownership of, or access to, such technologies. In 2008, in response to congressional action partially driven by findings and recommendations that we raised in earlier reports, CFIUS implemented reforms increasing its efforts on national security-related topics and defining categories of transactions subject to review, such as those resulting in control of critical U.S. infrastructure by a foreign person.
In 2012, the growing number of investments in the United States by Chinese firms sparked concerns by a number of groups over the economic and security impact of the investments, according to a report by the Congressional Research Service. The scope of potential national security risks presented by foreign investment in the United States has evolved beyond ownership and control concerns. Specifically, the issue of proximity has broadened to consideration of the geographic location of foreign-owned businesses and their capability to collect intelligence on U.S. military installations. For example, the CFIUS review process came under increased scrutiny after the attempted purchase by a Dubai company, Dubai Ports World, of a company that operated various U.S. port facilities. Although CFIUS initially allowed the purchase to proceed in 2006, subsequent congressional and media attention ultimately caused the company to sell the U.S. portion of the business to another U.S. company. In addition to the Dubai Ports case, according to a Congressional Research Service report, an investment by a Chinese firm in a wind farm project in Oregon recently attracted public and congressional attention. As a result of objections by the U.S. Navy over the placement of wind turbines near or within restricted Naval Weapons Systems Training Facility airspace where unmanned aerial vehicles are tested, CFIUS recommended that the company stop operations until an investigation could be completed. After a full investigation, CFIUS recommended that the President block the investment, and he issued an Administrative Order stating that there was credible evidence that the acquisition threatened to impair U.S. national security; the case is under appeal. Further, in December 2014, we issued a report that examined DOD military installations and critical infrastructure, which included information on the proximity of foreign-owned businesses to military bases. Although this report did not have any findings or recommendations related to the CFIUS process, it identified foreign-owned businesses near military bases as another potential area for CFIUS consideration. Recent initiatives in response to identified weaknesses in the critical technologies programs have resulted in improved interagency collaboration. Some programs have developed mechanisms for interagency collaboration across the participating agencies for their individual program. However, current collaboration mechanisms do not involve direct communication among all the programs in the protection of critical technologies portfolio. There are both existing mechanisms and new initiatives among the critical technologies programs that support collaboration. In some cases, these programs promote interagency collaboration through formal and long-standing mechanisms. Most notably, CFIUS was established as an interagency body to review transactions that could result in a foreign party's gaining control over a U.S. company. Under the CFIUS process, CFIUS member agencies work toward reaching a consensus on decisions. The consensus-based decision-making process ensures that representatives of each stakeholder agency are aware of the basis of the decision, including any future actions that CFIUS might be relying on each agency to take to address national security risks.
CFIUS, or a lead agency, may negotiate agreements with any party to a covered transaction in order to mitigate the national security risks that may result from the transaction, when other provisions of law do not adequately address these risks. The CFIUS lead agency on the transaction is responsible for monitoring the agreement to ensure compliance with it. GAO has not recently examined CFIUS agencies' efforts to enforce these security agreements, and we are not aware of any ongoing changes or initiatives that involve CFIUS. We also found that agencies have fostered new opportunities to promote interagency collaboration in their shared goal of protecting critical technologies. For example, the establishment of the ATTR SSG created new opportunities for regular communication, through monthly meetings, among DOD offices and between DOD and State, while preserving DOD's control over the coordinating body. A new office, the Technology Security and Foreign Disclosure Office, serves as the administrative arm of the ATTR SSG and participates in creating and disseminating policies in this area. State Department officials from the Regional Security and Arms Transfers office and the Directorate of Defense Trade Controls participate in the ATTR SSG as observers. These State representatives have raised concerns about individual transactions at the ATTR SSG and initiated policy discussions, but are considered non-DOD participants; therefore, they do not have voting rights within the group. In 2014, DOD organized a new office within the Office of the Under Secretary of Defense for Policy, devoted to improving the strategic posture of DOD security cooperation activities by, among other things, coordinating DOD's use of legal authorities, including Foreign Military Sales and the National Disclosure Policy, for transfers to foreign partners. To this end, the office facilitates the inclusion of key stakeholders in its strategic initiatives, including those involved in critical technologies protection and foreign disclosure within DOD, as well as at State. At this point, it is too early to determine what effects this office will have on intra- and interagency coordination. In addition to the formal coordinating bodies discussed above, the agencies also make use of informal processes to ensure an ongoing flow of information. As part of the administration's Export Control Reform initiative, State and Commerce regularly consult with DOD officials and subject matter experts about revisions to export control regulations. DOD and State also have plans to detail staff to each other's offices in order to improve communication on their shared programs, particularly Foreign Military Sales. Officials stated that this should enable them to learn about how information is handled at the other agency and about one another's practices. Agencies have taken steps to collaborate with other agencies to manage their individual critical technologies programs; however, current collaboration mechanisms do not involve direct communication among all the programs in the protection of critical technologies portfolio. For example, although the ATTR SSG has developed processes for interagency collaboration on security cooperation programs, it does not provide a forum for direct communication among all programs with critical technologies responsibilities, such as Commerce's export control officials. All of the eight critical technology programs in this portfolio share a goal of protecting national security.
In January 2007, when we designated the protection of critical technologies as high risk, our body of work on programs designed to protect critical technologies showed fragmentation, including poor coordination among the multiple agencies involved. The agencies responsible for many of these programs have since made progress toward improving coordination and reducing fragmentation, individually, and in some instances collectively. Past work on interagency collaboration notes that many of the results that the federal government seeks to achieve require the coordinated efforts of more than one federal agency and often more than one sector and level of government. Both Congress and the executive branch have recognized the need for improved collaboration across the federal government, as stated in our September 2012 report on interagency collaboration. In a June 2010 report on interagency collaboration in national security, we also concluded that when multiple agencies are managing similar information, challenges may exist among agencies regarding redundancies in information sharing, unclear roles and responsibilities, and data comparability. That report also noted that organizational differences—including differences in agencies' structures, planning processes, and funding sources—can hinder interagency collaboration. According to officials involved in administering critical technologies programs, different programs use different terminology, and the usage and understanding of terms can vary. Under the administration's Export Control Reform initiative, State and Commerce have worked together to revise regulatory definitions of key terms, and this collaboration is ongoing. Across the broader portfolio of critical technologies programs, however, definitions may not always be clearly aligned, and categories such as critical technologies may be understood in different ways at different programs. Best practices for interagency collaboration include using consistent terminology to establish a common understanding and improve collaboration among the various programs. In some cases, distinct uses of the same or similar terms may be appropriate, but such distinctions make it more important that the programs have a plan for sharing them to ensure a common understanding. As the use of the U.S. Munitions List and the Commerce Control List expands to areas beyond export controls, taking steps to apply the concepts and terms used by the lists consistently would help eliminate confusion and facilitate collaboration. State's export compliance officials noted that the U.S. Munitions List sets out a procedure for assessing items to determine whether they are subject to State's export control regulations, and that other potential users of this list need to understand how this procedure works in order to avoid confusion. Some impediments to collaboration could be addressed when the implementation of certain initiatives is completed. For example, DOD's use of Better Buying Power 2.0's Defense Exportability Features enables DOD to more clearly inform acquisition programs about their responsibilities for critical technologies programs such as Anti-Tamper Policy and Foreign Military Sales at the design stage, rather than waiting until decisions are made about where to deploy or sell a system. DOD plans to continue the Defense Exportability Features initiative in Better Buying Power 3.0, which launched in September 2014.
In addition, the establishment of the E2C2 was a step toward addressing concerns about collaboration we had raised in prior reports, and the E2C2's deconfliction process provides significant opportunities for improved information sharing. However, the full benefit of export enforcement coordination is limited until all of the standard operating procedures are completed, including the one that allows for greater collaboration between the enforcement and intelligence communities. In a September 2014 meeting with senior representatives of the agencies involved in the protection of critical technologies, we discussed their efforts to address our designation of this area as high risk and also discussed the possibility of having one agency in charge of this area. These agencies expressed concern over their distinct roles and responsibilities and which agency would take the lead for coordinating efforts to protect critical technologies. In subsequent discussions with these agencies, the officials responsible for the operations of these programs generally agreed with the need for better collaboration among the programs, including actions not currently being taken. Such actions could include holding an annual meeting of the programs designed to protect critical technologies to discuss the technologies they are protecting, their programs' intent, and any new developments or changes planned for their programs. For example, the Director of DOD's Defense Technology Security Administration stated that, even within DOD, these programs expand beyond any one organization and initiatives are occurring within these programs. Interagency collaboration mechanisms for various agencies involved in common goals, such as the protection of critical technologies, are essential to avoid the potential for a patchwork of activities that could waste scarce funds and limit the overall effectiveness of federal efforts. Cross-agency collaboration may strengthen the alliance of these programs, create a common understanding of these technologies, and better ensure that they are provided to foreign entities in a manner consistent with U.S. interests. For these reasons, it is important that the agencies responsible for the protection of critical technologies continue to promote and strengthen mechanisms for effective collaboration, both within their programs and agencies, as well as across the interagency community. In the 8 years since critical technologies programs were added to the GAO high-risk list, the agencies responsible for their implementation have taken positive steps and developed a number of initiatives to improve their individual programs. The critical technologies portfolio is a complex array of programs, subject to a myriad of laws, regulations, and policies, and administered by multiple offices across several departments. Effective coordination across the portfolio of programs is important to mitigate national security risks, and interagency collaboration is essential to realizing the potential effectiveness of the programs. This is especially true in light of the initiatives under way and the changing nature of issues related to the protection of critical technologies. It is important that collaboration and information sharing are optimized among agencies, not just within each agency. Doing so would improve their ability to protect critical technologies and national security interests.
Within individual or closely related programs, ensuring that a consistent approach is taken by the lead and stakeholder agencies in meeting the program goals would help coordinating bodies to ensure that the protection of critical technologies remains up to date and effective. Ongoing improvements to the individual programs may help to address some of these coordination issues, but interagency collaboration across the portfolio remains an important challenge as these changes occur. To ensure a consistent and more collaborative approach to the protection of critical technologies, we recommend that the Secretaries of Commerce, Defense, Homeland Security, State, and the Treasury, as well as the Attorney General of the United States, who have lead and stakeholder responsibilities for the eight programs within the critical technologies portfolio, take steps to promote and strengthen collaboration mechanisms among their respective programs while ongoing initiatives are implemented and assessed. These steps need not be onerous; for example, they could include conducting an annual meeting to discuss their programs, including the technologies they are protecting, their programs' intent, and any new developments or changes planned for their programs, as well as defining consistent critical technologies terminology and sharing important updates. We provided a draft copy of this product to the Departments of Commerce, Defense, Homeland Security, Justice, State, and the Treasury for comment. Each concurred with our recommendation that they take steps to promote and strengthen collaboration mechanisms among their respective programs. Justice and Treasury stated their concurrence with our recommendation in e-mailed comments. Commerce, Defense, Homeland Security, and State provided written comments and identified approaches to implementing our recommendation, including continuing existing collaborative initiatives as well as working with other departments to seek new opportunities for collaboration; their comments are reproduced in Appendixes I, II, III, and IV, respectively. Commerce, Defense, and Homeland Security also provided technical comments that were integrated into the report, as appropriate. We are sending copies of this report to appropriate congressional committees; the Secretaries of Commerce, Defense, Homeland Security, State, and the Treasury; the Attorney General of the United States; and other interested parties. This report will also be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or makm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. In addition to the contact named above, Lisa Gardner, Assistant Director; Scott Purdy; Ted Alexander; Robert Swierczek; Susan Ditto; Marie Ahearn; Kenneth Patton; and Hai Tran made key contributions to this report. | Each year, the federal government spends billions of dollars to develop and acquire advanced technologies in order to maintain U.S. superiority in military technology. The U.S. government permits and facilitates the sale and transfer of its technologies to allies in order to promote U.S. national security, foreign policy, and economic interests.
However, these technologies can be targets for theft, espionage, reverse engineering, illegal export, and other forms of unauthorized transfer. Accordingly, the U.S. government administers programs to identify and protect its critical technologies. GAO (1) assessed the progress of the various agencies' efforts and identified implementation challenges, if any, to reform programs and processes to protect critical technologies; and (2) determined the extent to which cognizant agencies are coordinating with stakeholder agencies on their respective reform efforts to ensure effective collaboration. GAO reviewed laws, regulations, and guidance, as well as documentation of agency initiatives to reform programs that protect critical technologies, and interviewed officials from lead and stakeholder agencies. The agencies responsible for eight programs designed to protect critical technologies have implemented several initiatives since 2007, but face some implementation challenges. Agencies have made progress addressing previously identified weaknesses in response to changes in law, GAO recommendations, or agencies' own internal identification of weaknesses. For instance, the area of export controls has seen significant reform activity, based on an April 2010 framework announced by the administration. Other programs, such as the Committee on Foreign Investment in the United States, have undergone reform through legislative requirements. As shown in the table below, multiple agencies have responsibility for these eight programs designed to protect critical technologies. However, some of these eight programs have additional challenges that remain to be addressed. For example, the Department of Defense (DOD) has not yet completed an evaluation of the Militarily Critical Technologies List or potential alternatives in response to GAO recommendations regarding the need to determine the best approach for meeting users' requirements for a technical reference. Further, DOD and the Department of Homeland Security still need to take additional actions to improve shipment tracking and verification procedures for arms sales to foreign allies under the Foreign Military Sales program. Both existing mechanisms and some new initiatives among the critical technologies programs support collaboration, but collaboration among lead and stakeholder agencies remains a challenge. GAO's September 2012 work on interagency collaboration mechanisms notes that many of the meaningful results the federal government seeks to achieve require the coordinated efforts of more than one federal agency. Recent initiatives have resulted in improved interagency collaboration. For example, DOD offices now communicate with non-DOD agencies through a formally instituted group to discuss potential technology transfers to foreign governments. However, current collaboration mechanisms do not involve direct communication among all the programs in the protection of critical technologies portfolio. Improved collaboration among the programs and agencies involved in the protection of critical technologies could help increase their efficiency and effectiveness. To ensure a consistent and collaborative approach to the protection of critical technologies, GAO recommends that agencies with lead and stakeholder responsibilities take steps to promote and strengthen collaboration mechanisms among their respective programs. |
The Forest Service's mission includes sustaining the nation's forests and grasslands, managing the productivity of those lands for the benefit of citizens, conserving open space, enhancing outdoor recreation opportunities, and conducting research and development. To help fulfill its mission, the Forest Service devotes considerable resources to suppressing wildfires. To coordinate the firefighting efforts of the Forest Service and other federal land management agencies, the interagency National Wildfire Coordinating Group (NWCG) was established. This group adopted an interagency incident command system (ICS) and firefighting standards for responding to wildland fires. Federal employees in the land management agencies assume specific roles within the ICS, a command structure used at all levels of government and organized around five primary functional areas: command, operations, planning, logistics, and finance and administration. There are about 80,000 federal employees and retirees who mobilize to assist state and local organizations to fight fires and respond to national emergencies as needed. These employees must receive standardized training and be certified in specific ICS duties (e.g., communications, aircraft management, and dispatch) before being available to respond to wildfires and other emergencies. Approximately 30,000 permanent Forest Service employees are ICS-certified, with 10,000 to 12,000 of these employees holding fire-related positions. The remaining 18,000 to 20,000 employees are part of the "Forest Service militia." These are employees who are ICS-certified to fight fires but for whom the militia duty is a voluntary, collateral duty. These militia members will often leave their primary work duties and travel to the incident scene to provide assistance. A militia member typically spends 1 to 3 weeks each year fighting wildfires and responding to emergency situations. Militia teams that assisted the Federal Emergency Management Agency following Hurricane Katrina logged as many as 60 to 90 days of duty. As part of their militia duties, Forest Service employees often perform activities related to their regular work duties. For example, when responding to a wildfire, an employee whose regular duties involve IT support might provide this support at the scene of the fire. Employees may also perform militia duties that are unrelated to their regular work duties. For example, an IT employee who is ICS-certified in logistics might be responsible for providing food, supplies, and equipment at the scene of a fire. The federal government has had a long-standing acquisition policy that, when permissible and cost-effective, agencies are to rely on the private sector to perform activities that are regularly performed in the commercial marketplace, such as IT, maintenance and property management, and logistics. This policy was laid out in OMB's 1966 Circular No. A-76, which was last revised in 2003. The circular's stated goal is to obtain maximum value for taxpayers' dollars by taking advantage of competitive forces. This policy, described in the 2001 President's Management Agenda as competitive sourcing, is one of five governmentwide initiatives intended to improve the federal government's management and performance so that resources entrusted to the federal government are well managed and used wisely. 
In addition, the circular provides agency management with a structured process to compare the public and private sector costs of performing an activity and to select the lowest-cost provider through competition. This comparison may result in outsourcing federal jobs to the private sector. The first step toward competitive sourcing is identifying work activities that are suitable for competition. The FAIR Act, as implemented by the circular, requires federal agencies to annually inventory all of the activities that federal employees perform—the FAIR Act inventory. For the inventory, activities are classified as inherently governmental or commercial. Specifically: Inherently governmental activities are those activities that are so intimately related to the public interest that they require performance by federal government employees. The circular exempts these activities from competition. Commercial activities are those that are not inherently governmental and could be performed by the private sector. Commercial activities listed in the FAIR Act inventory are subject to the competition process detailed in the circular, unless they are placed in a subcategory of commercial activities that is exempt from competition. Core-commercial activities are those in a subcategory of commercial activities that are identified as essential, or "core," to the agency's mission. The circular allows agencies to exempt these activities from competition with sufficient written justification. To determine whether federal employees or private sector organizations should perform commercial activities, the circular establishes the following three-stage competitive process (referred to as an A-76 competition): A precompetition planning stage that, among other things, determines the scope of the competition. In this stage, the agency determines the commercial activities to be competed and the precompetition cost of performing those activities. It also appoints competition officials who will be in charge of developing, for example, the performance work statement (PWS), which specifies the work to be performed by the winning bidder. In addition, agency officials unaffiliated with the PWS create the most efficient organization (MEO), which is generally a smaller, streamlined version of the government organization that is currently doing the work. A competition stage that begins with a public announcement of a competition and ends with the selection of the competition's winner. During this stage, an agency develops and issues a solicitation, receives offers, and follows a process to select the winning bidder. A postcompetition accountability stage that involves such activities as monitoring and reporting on the winner's performance, whether that winner is the MEO or a private sector organization. The circular also directs agencies to designate a competitive sourcing official (CSO) with responsibility for implementing the circular who, with certain exceptions, may delegate those responsibilities to other officials in the agency. USDA designated its Chief Financial Officer as its CSO. He and his office, the Office of the Chief Financial Officer (OCFO), provide oversight to USDA and its agencies, including the Forest Service. In 2002, the Forest Service established the Competitive Sourcing Program Office (CSPO) in its headquarters. The CSPO oversees the preparation of the FAIR Act inventory and provides written guidance to employees throughout the agency. 
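The classification rules above reduce to a small decision table. The following is a minimal sketch in Python of that logic as we read it; the type names and the subject_to_competition helper are our own illustration, not part of the circular or of any agency system:

```python
from dataclasses import dataclass
from enum import Enum

class Classification(Enum):
    INHERENTLY_GOVERNMENTAL = "inherently governmental"
    COMMERCIAL = "commercial"
    CORE_COMMERCIAL = "core-commercial"

@dataclass
class Activity:
    name: str
    classification: Classification
    exemption_justification: str = ""  # written justification, if any

def subject_to_competition(activity: Activity) -> bool:
    """Simplified reading of the circular's exemption rules described above."""
    if activity.classification is Classification.INHERENTLY_GOVERNMENTAL:
        return False  # exempt: must be performed by federal employees
    if activity.classification is Classification.CORE_COMMERCIAL:
        # exempt only when backed by a sufficient written justification
        return not activity.exemption_justification
    return True  # ordinary commercial activities are subject to A-76 competition

# An ordinary commercial activity remains eligible for competition.
print(subject_to_competition(Activity("fleet maintenance", Classification.COMMERCIAL)))  # True
```

In practice, of course, these determinations are made by agency officials rather than by rule tables, and at the Forest Service they are overseen centrally.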
In 2003, competitive sourcing responsibilities were further centralized in the CSPO to include oversight of A-76 competitions and postcompetition reporting. Supporting the CSPO are regional, national forest, and district office staffs. OMB requires that agencies develop a strategic plan—known as a Green Plan—for implementing their competitive sourcing programs. OMB’s guidance on how to develop a Green Plan describes it as a long-range plan to ensure that competitive sourcing is a carefully and regularly considered option for improving the cost-effectiveness and quality of an agency’s commercial activities. According to the guidance, agencies should include in the plan a description of how they are going to take timely and effective advantage of competition, a list of activities being announced for competition, an overview of their decision-making process, and a strategy to limit potential constraints on competition. The guidance also requires that agencies update their plans as organizational conditions change. To comply, the Forest Service has periodically submitted its Green Plan to USDA for incorporation into the department-level Green Plan, which is then presented to OMB for approval. Since December 2003, the Forest Service has submitted at least five versions of its Green Plan to USDA. The Forest Service’s most recent OMB-approved Green Plan was issued in December 2005 and covers fiscal years 2005 through 2009. In its Green Plan, the Forest Service proposes conducting a series of competitive sourcing feasibility studies before holding A-76 competitions. Feasibility studies enable an agency to first examine the practicality of subjecting activities to a competition before committing to one. OMB recognizes the value of this step and has recommended in its guidance that agencies conduct feasibility studies to streamline the competitive sourcing process. The OCFO issued guidance in May 2004 on conducting feasibility studies that outlined USDA agencies’ responsibilities and specific procedures to be followed during a feasibility study. The Forest Service typically assembles a team of six to eight employees to conduct a feasibility study. Each study usually involves one or more requests for data from field offices, which may require information from several hundred Forest Service employees. In addition to recommending whether to proceed with an A-76 competition, a Forest Service feasibility study can make other recommendations, such as to reorganize the way in which the Forest Service performs the activity being studied without engaging in an A-76 competition. When the feasibility study is completed, the Chief of the Forest Service reviews the study team’s report and recommendations and decides on the best course of action, which may or may not be an A-76 competition. In fiscal years 2004 through 2006, the Forest Service completed three A-76 competitions (see table 1). The Consolidated Appropriations Act, 2004, requires executive agencies, such as USDA, to report to Congress on their competitive sourcing activities for the prior fiscal year, including the total number of competitions announced and completed; the incremental costs directly attributable to conducting these competitions; and the total savings actually, or estimated to be, derived from such competitions. As an agency within USDA, the Forest Service must report this information to USDA for inclusion in USDA’s report to Congress. 
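OMB's guidance, discussed later in this report, defines the savings from a completed competition as the baseline cost minus the postcompetition cost. The sketch below shows that arithmetic and how the picture shifts once costs that the guidance permits agencies to exclude are counted. The function names are ours and the dollar figures are illustrative; only the relationship between the roughly $35.2 million in reported savings and the roughly $40 million in excluded transition costs for the IT infrastructure competition, discussed later in this report, is drawn from our findings:

```python
def reported_savings(baseline: float, postcompetition: float) -> float:
    """Savings as OMB guidance defines them: baseline minus postcompetition cost."""
    return baseline - postcompetition

def net_savings(baseline: float, postcompetition: float,
                transition: float = 0.0, planning: float = 0.0) -> float:
    """Savings after also counting costs the guidance permits agencies to exclude."""
    return reported_savings(baseline, postcompetition) - transition - planning

# Illustrative figures in millions of dollars.
baseline, postcompetition, transition = 100.0, 64.8, 40.0
print(round(reported_savings(baseline, postcompetition), 1))         # 35.2 as reported
print(round(net_savings(baseline, postcompetition, transition), 1))  # -4.8 once transition costs count
```

Which costs enter the calculation, in other words, can matter as much as the arithmetic itself.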
Congress also limited the Forest Service's funds available for "competitive sourcing studies and related activities" in each year's appropriations act for fiscal years 2004 through 2007, because of concerns about how the Forest Service had implemented its competitive sourcing initiative. Specifically, the spending limitations were $5 million (fiscal year 2004), $2 million (fiscal year 2005), $3 million (fiscal year 2006), and $3 million (fiscal year 2007). The Forest Service lacks a realistic strategic plan and adequate guidance to help ensure that it can effectively and efficiently implement its competitive sourcing program. Specifically, the Forest Service's Green Plan proposes to subject all commercial activities to feasibility studies without identifying the personnel and funding resources that are likely to be available for the studies. Furthermore, Forest Service officials responsible for planning the three competitions completed in fiscal years 2004 through 2006 told us that they did not find the FAIR Act inventory data useful for identifying and exempting inherently governmental and core-commercial activities, and that they received little guidance on how to supplement the inventory data or how to identify inherently governmental and core-commercial activities without using the inventory data. Nevertheless, Forest Service officials told us that the lack of guidance did not result in competing inherently governmental and core-commercial activities because of the small size and commercial nature of these competitions. However, without clear guidance, we believe that the agency risks subjecting inherently governmental and core-commercial activities to future A-76 competitions. In addition, the Forest Service does not have a strategy on how to assess the cumulative effect that competing activities could have on its ability to fight wildland fires and respond to other emergencies, even though outsourcing a large number of federal jobs to the private sector could reduce the availability of certified responders in the long term. The Forest Service's December 2005 Green Plan for managing its competitive sourcing program is not realistic because it does not take into account the personnel and funding resources that are likely to be available to implement the plan, even though it proposes to subject all commercial activities—performed by approximately 24,500 FTEs—to feasibility studies during fiscal years 2005 through 2009. This is a significant increase over the activities it proposed to study in a draft of this plan issued 5 months earlier in July 2005. The July 2005 draft Green Plan scheduled 13 feasibility studies for activities associated with 6,180 FTEs for fiscal years 2005 through 2009. The Forest Service selected these activities by identifying good candidates for feasibility studies and then selecting a level of effort the officials believed could be managed over the next 5 years. For example, in selecting the studies, the officials said they considered the complexity of the studies, the additional workload that would result from conducting them, and the personnel resources that would be required. The Forest Service used the following nine criteria as the basis for identifying good candidates for feasibility studies: Potential for savings. An activity with greater potential for savings through more effective or efficient performance is considered a stronger candidate for a feasibility study. Availability of private sector contractors. 
An activity performed by a large number of commercial companies is considered a stronger candidate for a feasibility study. Severability. An activity that can be performed by an independent business unit is considered a stronger candidate for a feasibility study. Preferred government performance. An activity that management prefers to be performed by a government position is considered a weaker candidate for a feasibility study. Location. An activity that does not have to be performed locally is considered a stronger candidate for a feasibility study. Fragmentation. An activity that is the minor responsibility of a large number of positions, while difficult to compete, may benefit from restructuring and is considered a stronger candidate for a feasibility study. Centrality of performance. An activity that is typically performed at a central location is considered a stronger candidate for a feasibility study. Potential for process improvement. An activity with a greater potential for improvement through modernization, reorganization, or some other means is considered a stronger candidate for a feasibility study. Impact on incident support. An activity that supports emergency situations, such as firefighting, is considered a weaker candidate for a feasibility study. (A rough sketch of how such criteria could be combined into a screening score appears at the end of this passage.) However, after reviewing the Forest Service's proposed Green Plan, USDA directed the Forest Service to revise its plan to include all 24,512 commercial FTEs eligible for competition in either a feasibility study or an A-76 competition. In response, the Forest Service issued a revised Green Plan in December 2005, which OMB subsequently approved. The revised plan included a single "catch-all" feasibility study—labeled "All Other Commercial B Activities"—which had 15,000 FTEs associated with it, nearly all of the commercial FTEs not already identified in the draft Green Plan. According to a Forest Service official, the additional feasibility study was added to comply with the USDA directive to include all commercial FTEs in the Green Plan, not because it included activities that might benefit from an A-76 competition. OCFO officials explained to us that USDA agencies have an option to perform feasibility studies to identify good candidates for more targeted feasibility studies, and that this was the purpose of the 15,000-FTE feasibility study. As of September 30, 2007, the Forest Service had not started this study. As table 2 shows, the Forest Service had completed only five of the nine feasibility studies scheduled to be completed by September 30, 2007. The completed studies account for only 2,580 FTEs of the approximately 19,000 FTEs that were scheduled to be studied by this date. In our previous work, we found that effective strategic plans take into account the resources required to implement the plan, such as human capital, technology, and information. While Forest Service officials could not provide us with any documents showing the resources involved in conducting feasibility studies, the effort appears significant. According to OCFO guidance, 13 separate steps are involved in conducting feasibility studies, with many of the procedures involving several additional subtasks. Table 3 shows OCFO's guidance for conducting feasibility studies. According to a senior Forest Service official, while the agency considered personnel requirements when selecting the original 13 activities for feasibility studies, it did not do so when it expanded the Green Plan to include all commercial FTEs. 
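The nine criteria the agency set aside when it swept all commercial FTEs into the plan amount to a qualitative screen. For illustration only, and not as any agency tool, here is a minimal sketch of how stronger-candidate and weaker-candidate judgments against such criteria could be reduced to a rough score:

```python
# Illustrative only: rate a candidate activity against the nine screening
# criteria described above (+1 stronger candidate, -1 weaker, 0 neutral).
CRITERIA = ["savings potential", "contractor availability", "severability",
            "preferred government performance", "location", "fragmentation",
            "centrality of performance", "process improvement", "incident support impact"]

def screen(ratings: dict[str, int]) -> int:
    """Sum +1/0/-1 ratings; higher totals suggest stronger feasibility-study candidates."""
    assert set(ratings) == set(CRITERIA), "rate every criterion"
    return sum(ratings.values())

candidate = dict.fromkeys(CRITERIA, 0)
candidate.update({"savings potential": 1, "contractor availability": 1,
                  "incident support impact": -1})
print(screen(candidate))  # 1, a modestly strong candidate
```

Screening aside, the sheer scale of the catch-all study posed its own problems.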
According to the senior Forest Service official quoted above, there would be no practical way to conduct the 15,000-FTE study because it would include so many dissimilar activities. Consequently, FTEs would first need to be grouped into many separate activities, each requiring its own feasibility study, and as the number of studies increased, so too would the demands placed on Forest Service personnel. Several senior Forest Service officials with whom we spoke said that it is inconceivable that the schedule of feasibility studies in the OMB-approved Green Plan for fiscal years 2005 through 2009 could be met. Just as it did not take into account personnel resources in its OMB-approved December 2005 Green Plan, the Forest Service also did not consider congressionally directed funding limitations. While the Forest Service's July draft Green Plan acknowledged the fiscal year 2005 $2 million statutory spending limitation on the Forest Service's competitive sourcing activities, the December Green Plan did not. In directing the Forest Service to include all commercial activities in the plan, USDA instructed it to do so as if there were no funding limitations. Because Congress had placed limitations on the Forest Service's spending for competitive sourcing activities in the previous 2 fiscal years—2004 and 2005—factoring in the possibility of limited funds available in future years would have been appropriate. Because an agency's FAIR Act inventory designates all of its FTEs as inherently governmental, commercial, or commercial but exempt from competition (e.g., core-commercial), the development of an accurate inventory becomes the foundation for determining which activities agencies select for competition. As we have previously reported, other agencies have had difficulty in classifying positions when preparing their FAIR Act inventories, and the Forest Service is no exception. The Forest Service's difficulty is exemplified by significant fluctuations in the percentages of inherently governmental activities in its inventory data for fiscal years 2004 through 2006. In addition, there are differences between the Forest Service's initial classifications and those reported in the agency's OMB-approved FAIR Act inventory. See table 4. According to a Forest Service official involved in preparing the agency's FAIR Act inventory, the fluctuations in the percentages of FTEs designated as inherently governmental—ranging from a high of over 50 percent to a low of 7 percent—were the result of changes from year to year in both the Forest Service's criteria for classifying activities and the methodology it used to calculate the percentages of FTEs performing inherently governmental activities. In addition, the differences in the percentages of FTEs designated as core-commercial activities stemmed from disagreements between the Forest Service and OMB about what constituted a core-commercial activity. Specifically, in fiscal years 2004 and 2005, OMB did not approve the Forest Service's written justifications for core-commercial activities. Instead, it directed the Forest Service to reclassify as commercial all of the activities the agency had identified as core-commercial, thus making these activities eligible for competition. 
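Swings of the size described above, from over 50 percent of FTEs designated inherently governmental down to 7 percent, are exactly what a simple year-over-year consistency check would flag. The following is a minimal sketch, with hypothetical shares standing in for the actual inventory data in table 4:

```python
# Hypothetical shares of FTEs classified as inherently governmental by year;
# the actual inventory figures are in table 4.
shares = {2004: 0.52, 2005: 0.07, 2006: 0.10}

def flag_swings(shares: dict[int, float], threshold: float = 0.10) -> list[str]:
    """Flag year-over-year changes in a classification's share above a threshold."""
    years = sorted(shares)
    return [f"{prev}->{curr}: share moved {abs(shares[curr] - shares[prev]):.0%}"
            for prev, curr in zip(years, years[1:])
            if abs(shares[curr] - shares[prev]) > threshold]

print(flag_swings(shares))  # ['2004->2005: share moved 45%']
```

Such a check would only surface the volatility; it could not say which year's classifications were right, and that deeper question is where the trouble lies.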
The Forest Service's lack of consistency in its classification methodology, coupled with disagreement between the Forest Service and OMB regarding activity classifications, calls into question the accuracy and usefulness of the Forest Service's FAIR Act inventory data for identifying inherently governmental and core-commercial activities when planning specific A-76 competitions. Forest Service officials told us that the FAIR Act inventory represents only a "rough snapshot" of the inherently governmental and core-commercial activities within the Forest Service, and that much additional work must be done to identify specific activities suitable for competition. They raised the following concerns about using FAIR Act inventory data to identify inherently governmental and core-commercial activities: Improper classification of activities in the FAIR Act inventory. Disagreements between the Forest Service and USDA regarding activity designations raise questions about appropriate classification. Furthermore, agency employees may be tempted to classify an activity as either inherently governmental or core-commercial to exempt it from competition. Mandatory use of OMB function codes. The Forest Service is required by OMB to use OMB-assigned function codes for the FAIR Act inventory. Although the guidance allows the Forest Service flexibility in defining the codes, the officials told us that some of the codes were too broad to be of any use or did not capture the actual work activities Forest Service employees carried out. Because Forest Service officials did not find the agency's FAIR Act inventory data useful during the A-76 precompetition planning stage, the officials responsible for the three completed competitions—IT infrastructure, road maintenance, and fleet maintenance—said that they developed their own methodologies to classify inherently governmental and core-commercial activities. For each competition, agency officials collected additional work activity information from field offices and performed additional analysis beyond that conducted for the FAIR Act inventory. In doing so, officials said, they received little guidance on how to supplement the FAIR Act inventory data or how to identify inherently governmental and core-commercial activities during the precompetition stage of the completed competitions. Despite the lack of guidance, Forest Service officials involved in the three completed competitions said that they succeeded in identifying and exempting from competition inherently governmental and core-commercial activities by relying on intuitive knowledge and outside consultants. Officials were confident of their success because of their expertise in the activity being competed; the clearly commercial nature of the activities; and the small size of the competitions, especially the fleet maintenance and the road maintenance competitions. However, the three completed competitions may not be representative of future competitions. Since these competitions, USDA and the Forest Service have taken steps to better define inherently governmental and core-commercial activities for the purposes of completing the FAIR Act inventory. 
However, while we believe that the FAIR Act inventory data could be a useful tool for planning competitive sourcing activities at an agencywide level—for example, they can form the basis of the Green Plan—additional guidance will be needed on how to classify work activities during the precompetition planning stage of an A-76 competition to ensure that key work activities are excluded from the specific competition being planned. Without such guidance, the Forest Service is at risk of subjecting inherently governmental and core-commercial activities to A-76 competitions. This is particularly true as the Forest Service continues to implement its Green Plan, which could subject up to two-thirds of its FTEs to A-76 competitions. For the three competitions the Forest Service completed—IT infrastructure, road maintenance, and fleet maintenance—officials responsible for planning the competitions told us that the competitions likely had a negligible effect on the Forest Service militia's ability to fight fires and respond to emergencies for the following reasons: The three competitions affected a relatively small number of Forest Service employees—1,323—compared with the approximately 80,000 federal employees who are ICS-certified. Contract provisions required the winning organization to provide emergency incident support for activities within the scope of the contract. For example, the MEO that won the road maintenance competition was obligated to provide road maintenance, if needed, to support the response to a wildfire incident. The largest of the three competitions—the IT infrastructure competition, which affected 1,200 FTEs—was won by the MEO. Because the MEO is still a unit of the Forest Service and staffed by Forest Service employees, the agency was able to direct the MEO to allow ICS-certified employees to volunteer to fight wildfires and respond to other emergencies by performing non-IT-related duties. While the Forest Service has thus far minimized the impact of A-76 competitions on the availability of ICS-certified personnel to fight wildfires and respond to other emergencies, the following other factors may affect the availability of ICS-certified personnel in the future: The Forest Service cannot realistically expect a private sector firm to provide emergency services unrelated to the activity being competed. For example, the Forest Service could not hold a competition for fleet maintenance and expect firms that specialize in fleet maintenance to provide unrelated services at the scene of the fire, such as providing food and supplies. Whether an MEO or a private sector firm wins a competition, the availability of ICS-certified personnel could decline. As with any reorganization, competitive sourcing may cause some personnel to leave the Forest Service. These employees could retire or be hired by other federal agencies that participate in the NWCG, and thus they could continue to fight fires and respond to other emergencies. Other employees, however, may no longer be available if their new employment situation does not allow them to take extended leaves of absence to fight fires and respond to other emergencies. In the Forest Service's fiscal year 2006 and 2007 appropriations acts, Congress required the agency, in carrying out any competitive sourcing competition involving Forest Service employees, to take into account the potential effect that contracting with a private sector organization would have on the agency's ability to fight and manage wildfires. 
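The appropriations language does not prescribe how the agency should take such effects into account, but the minimal bookkeeping an assessment of cumulative effect would require is easy to imagine: a registry of ICS-certified responders by function, from which the capacity remaining after each competition can be recomputed. The sketch below is entirely our illustration; the registry, employee identifiers, and numbers are hypothetical:

```python
from collections import Counter

# Hypothetical registry mapping each militia member to the ICS function
# for which he or she is certified.
militia = {"e1": "logistics", "e2": "logistics", "e3": "communications",
           "e4": "operations", "e5": "communications"}

def remaining_capacity(militia: dict[str, str], affected: set[str]) -> Counter:
    """Count certified responders by ICS function after removing employees
    whose positions are affected by competitions."""
    return Counter(fn for emp, fn in militia.items() if emp not in affected)

print(remaining_capacity(militia, set()))         # logistics: 2, communications: 2, operations: 1
print(remaining_capacity(militia, {"e3", "e5"}))  # communications capacity falls to zero
```

Even this rudimentary accounting presupposes knowing who holds which ICS qualifications.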
For the only A-76 competition started since the law was passed—the communications competition—the 130 employees potentially affected by the competition were asked to report the amount of time they spent responding to emergencies during the previous year. However, the Forest Service did not collect information on the specific duties those employees performed during emergency responses or on the ICS qualifications they held. Without this information, the Forest Service cannot assess the full impact of this competition on its emergency response capability. While it is important to know the impact of individual competitions, it is even more important to know the cumulative impact of multiple competitions. However, the Forest Service does not have a strategy to assess the cumulative impact that future competitions could have on its firefighting capability. The absence of such a strategy could prove significant if the Forest Service implements its plan to consider over 24,000 FTEs—or nearly two-thirds of its workforce—for A-76 competition. The Forest Service does not know how much it spent on competitive sourcing activities and, therefore, cannot be assured that it stayed within the spending limitations or that it accurately reported savings to Congress. For fiscal years 2004 through 2006, we found that the Forest Service (1) narrowly interpreted the spending limitations to exclude certain costs and (2) lacked sufficiently complete and reliable cost data to demonstrate its compliance with the appropriations acts' spending limitations on its competitive sourcing activities. Furthermore, Congress may not have an accurate measure of the savings from the Forest Service's A-76 competitions because the agency (1) does not have complete and reliable cost data and (2) did not include all costs associated with its competitive sourcing program. For fiscal years 2004 through 2006, the Forest Service did not attempt to collect cost data on all competitive sourcing activities because it believed that some costs associated with these activities were not subject to the spending limitations of $5 million, $2 million, and $3 million, respectively, as established in its appropriations acts. Specifically, the Forest Service reasoned that it did not have to collect all the cost data because it interpreted the spending limitations as being intended to restrict the number of A-76 competitions it conducted. It therefore asserted that the costs associated with the competition stage were subject to the spending limitations, while costs associated with the FAIR Act inventory, precompetition planning, and postcompetition accountability activities should not be included. However, we found that the Forest Service's interpretation of which competitive sourcing activities are subject to the spending limitations was too narrow. Specifically, we concluded that, with only a limited exception, the spending limitations apply to all costs attributable to the Forest Service's competitive sourcing program, including feasibility studies and other precompetition planning activities, the competition itself, postcompetition accountability activities, and the CSPO's costs to manage the program. Only the costs incurred to comply with the FAIR Act, such as those to develop the inventories of activities, are exempt from the limitations because the Forest Service is statutorily required to perform FAIR Act-related activities even if it makes no effort to conduct competitive sourcing. (See app. 
II for further discussion on our legal interpretation.) In April 2007, we sought the opinion of USDA's General Counsel on whether certain Forest Service competitive sourcing activities are subject to the annual statutory spending limitations. Our interpretation of the costs that were subject to the spending limitations for fiscal years 2004 through 2006 is consistent with the interpretation that USDA's General Counsel provided to us. USDA's General Counsel further stated its belief that the Forest Service may not have complied with the spending limitations in fiscal years 2004 through 2006. Even when it used its own interpretation of costs subject to the spending limitations, the Forest Service still did not know whether it complied with the limitations because it did not have a cost accounting system sufficient to track costs related to competitive sourcing. First, the Forest Service failed to establish tracking codes, known as job codes, in its financial management system to enable it to distinguish cost data on the activities that it believed were subject to the spending limitations from other cost data. Forest Service officials could not explain why the agency had not established these job codes. Second, the Forest Service lacked guidance and management oversight to ensure that employees were accurately and consistently using the job codes that were established for competitive sourcing activities. In particular, Forest Service officials could not provide us with any guidance that employees could use to determine when to charge time to these job codes and when to charge time to codes associated with their regular duties. Officials acknowledged that without this guidance, employees probably continued to charge time spent on competitive sourcing activities to their regular job codes. For example: In fiscal year 2004, Forest Service employees charged only 0.07 FTEs to the job code established to track costs associated with the IT infrastructure competition, even though competitive sourcing activities for the competition took place throughout the entire fiscal year. In fiscal years 2005 through 2006, Forest Service employees charged only 0.22 FTEs to the job code established to track costs associated with the communications competition, even though competitive sourcing activities for the competition began in fiscal year 2005 and were ongoing at the end of fiscal year 2006. In consultation with Forest Service officials, we agreed that it was not feasible to reconstruct cost data for competitive sourcing activities between fiscal years 2004 and 2006 to determine if the Forest Service exceeded the appropriations acts' spending limitations. Forest Service officials told us it would require a significant amount of time and resources to query employees on their past work activities associated with competitive sourcing. Furthermore, it is unlikely that employees could reliably report the time they spent on competitive sourcing activities that took place months and years ago. Finally, officials told us that many employees involved with the competitions have since left the agency. Recognizing these shortcomings, the Forest Service has made some efforts to improve its policies and guidance on how to establish job codes and how employees are to use them to track competitive sourcing costs. In fiscal year 2007, the Forest Service issued general policy on when to charge time to competitive sourcing job codes. 
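For scale, 0.07 FTEs is roughly 150 hours over an entire fiscal year, assuming the conventional 2,080-hour federal work year. The following is a minimal sketch of how hours charged to job codes would roll up to FTEs, and dollar charges to a statutory cap; the job-code names and dollar amounts are hypothetical, though the $3 million cap matches the fiscal year 2006 limitation:

```python
HOURS_PER_FTE = 2080  # conventional federal work year

def fte(hours_charged: float) -> float:
    """Convert hours charged to a job code into full-time equivalents."""
    return hours_charged / HOURS_PER_FTE

def within_limitation(costs_by_job_code: dict[str, float], cap: float) -> bool:
    """Sum costs charged to competitive sourcing job codes against a statutory cap."""
    return sum(costs_by_job_code.values()) <= cap

print(round(fte(146), 2))  # 0.07, roughly the total charged to the IT job code in 2004

# Hypothetical dollar charges to competitive sourcing job codes.
charges = {"feasibility_studies": 1_200_000, "competitions": 1_500_000, "cspo_overhead": 400_000}
print(within_limitation(charges, cap=3_000_000))  # False: $3.1 million exceeds a $3 million cap
```

No such roll-up is possible, of course, when the job codes go largely unused, which is the gap the fiscal year 2007 policy attempts to close.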
Among other things, the fiscal year 2007 policy included a directive specifying that the cost of performing some precompetition planning activities be charged to competitive sourcing job codes. However, the Forest Service has yet to provide details on how it intends to implement this policy, and thus we were not able to evaluate it. The Consolidated Appropriations Act, 2004, establishes a governmentwide requirement for each executive agency to report to Congress on the actual savings derived from the implementation of competitions for the prior fiscal year. In addition, OMB provides guidance on preparing the report, including how to calculate savings. According to the guidance, savings from a completed competition are defined as the difference between the cost to the federal government of performing the activity prior to the competition—the baseline cost—and the cost to the government of performing the activity or paying for it after the winner of the competition has begun performing the activity. This is the postcompetition cost. Figure 1 shows how savings from competitions are calculated. While OMB guidance specifies the postcompetition cost to the government if a private sector contractor wins the competition, it does not provide specific direction on how to calculate the postcompetition cost to the government if the MEO wins the competition. Instead, the guidance suggests determining the cost of in-house performance using a methodology similar to that used to calculate the baseline costs. In its reports to Congress on the actual savings derived from the implementation of competitions, USDA reported that the Forest Service saved over $38 million in fiscal years 2004 through 2006 as a result of the three completed competitions—IT infrastructure (approximately $35.2 million), fleet maintenance (approximately $716,000), and road maintenance (approximately $2.2 million). For these three competitions, Forest Service officials could not provide us with the information necessary to fully substantiate the savings reported to Congress. Headquarters officials directed us to the regions responsible for the competitions, where officials were able to provide some cost data but not enough to substantiate the reported savings. GAO's Standards for Internal Control in the Federal Government requires that all transactions and other significant events, such as the Forest Service's competitive sourcing cost and savings analyses, be clearly documented and that the documentation be readily available for examination. Specifically, we found the following: IT infrastructure. Officials could not tell us the methodology they used to determine postcompetition costs and did not have available the data they used to calculate savings. They reconstructed the personnel and overhead costs for operating the MEO, but could not reconstruct the cost of outside contractors that the MEO employed because these costs were not discernible from other cost data that the Forest Service collected. Ultimately, these officials could only speculate about what the contract costs might have been and how savings were calculated. Fleet maintenance. As with the IT infrastructure competition, officials could not tell us the methodology they used to determine postcompetition costs and did not have available all of the data they used to calculate savings. 
Although officials provided us with the bulk of the postcompetition cost data—the payments made to Serco, the private sector organization that won the competition—they could not provide us with all of the costs. Road maintenance. Unlike with the other two competitions, officials responsible for this competition were able to describe their methodology. However, they were unable to provide us with baseline cost data for fiscal year 2004, and, as a result, we could not verify the reported savings for that year. In April 2007, OMB issued guidance to help agencies substantiate the savings they have achieved through A-76 competitions. The guidance describes agency responsibilities related to tracking and reviewing cost data to ensure savings are being realized. It also requires all agencies to develop plans to independently validate a sampling of competitions to confirm projected savings. Among other things, the guidance requires agencies to assess the completeness and accuracy of cost data. Forest Service officials told us that the agency is working to implement OMB's guidance. USDA told us that it is directing the Forest Service to validate the savings from the IT infrastructure competition by the fourth quarter of fiscal year 2008. Beyond the Forest Service's inability to substantiate the savings it reported to Congress, OMB's guidance itself allows agencies to exclude some costs associated with A-76 competitions; excluding these costs may deprive Congress of an accurate measure of the savings produced by the competitions. Although Forest Service officials could not tell us all of the costs that were included in or excluded from their savings estimates, they stated with confidence that some costs were excluded. Specifically, they said that transition costs associated with transferring responsibilities to the winning organization were not included in the savings calculations for the three competitions we reviewed. Transition costs include costs associated with decreasing the size of the workforce through buyouts and retirements, and costs to transfer employees who are being retained to other locations. These are not necessarily one-time costs because an agency is required to complete a follow-on competition for an activity, generally after about 3 to 5 years. In the three competitions we reviewed, we found the following transition costs were excluded: IT infrastructure. Forest Service officials told us that they excluded approximately $40 million in transition costs from the savings calculations that were reported to Congress. These costs exceeded, by about $5 million, the $35.2 million that the Forest Service reported to have saved during fiscal years 2005 and 2006 as a result of the competition. Fleet maintenance. Forest Service officials told us they excluded about $670,000 in transition costs from the savings calculations that were reported to Congress, an amount nearly equal to the approximately $716,000 that the Forest Service reported to have saved since fiscal year 2005 as a result of the competition. Road maintenance. Forest Service officials told us they excluded about $320,000 in transition costs from the savings calculations that were reported to Congress. These costs are approximately 15 percent of the total $2.2 million in reported savings since fiscal year 2004. In addition to transition costs, there are also precompetition planning costs (including, as of May 2004, feasibility study costs). 
As with transition costs, OMB guidance does not direct that precompetition planning costs be included in the savings calculations. A Forest Service official told us that precompetition planning costs were excluded from the savings calculations for the three completed competitions. Forest Service officials could not provide us with estimates of these costs because they were not tracked. OMB guidance also does not direct agencies to include in their savings calculations other potential costs associated with the termination of a contract. To illustrate, 14 months after the private sector contractor, Serco, began performing fleet maintenance activities, the Forest Service terminated the contract. Under the terms of the contract, the Forest Service must negotiate with Serco on any costs associated with the termination itself. Forest Service officials stated that in March 2007, Serco proposed a settlement amount that it contends would reimburse it for costs such as those associated with vacating sites as well as administrative costs and attorneys' fees. Serco also contends that the Forest Service owes it additional compensation unrelated to the contract termination. Serco's claim stems from a disagreement with the Forest Service over the terms of the contract. Finally, Forest Service officials also told us that the agency has incurred additional costs—exceeding its costs for fleet maintenance before the Serco contract—because maintenance work is now being performed by retail vendors. Officials told us that the Forest Service is precluded from returning the work to the agency for in-house performance. Because these issues are still pending, we did not include the dollar amounts of Serco's claims or the additional costs incurred by the Forest Service, and Forest Service officials told us that they did not want to comment further on this issue. The government's goal is to obtain high-quality services at a reasonable cost, regardless of whether these services are performed by the public or private sector. Competition between the public and private sectors is an important tool in reaching this goal. The 2001 President's Management Agenda reemphasized the importance of using this tool as a means to deliver the best value to the American taxpayer. In keeping with the President's Management Agenda, the Forest Service established a competitive sourcing program and began holding A-76 competitions. The agency now has plans to consider competing up to two-thirds of its workforce against the private sector. This is a massive undertaking whose long-term success will depend on a realistic strategic plan, clear guidance to identify the key work activities that should be excluded from competition, and a strategy to assess the cumulative effect that outsourcing a large number of federal jobs could have on its firefighting capability. Unfortunately, the Forest Service has none of these in place. Because the Forest Service has just begun to implement its competitive sourcing program—having competed less than 5 percent of its workforce against the private sector—these problems have not yet had a significant impact on the Forest Service's competitive sourcing program. 
However, as the Forest Service implements its Green Plan and greatly expands the scope of its competitive sourcing program, the problems we have identified could, in the long term, severely impact its ability to implement this program effectively—jeopardizing not just the overall success of the program, but the nation's ability to fight fires and respond to other emergencies. More immediate is our concern that the Forest Service did not collect complete and reliable cost data related to its competitive sourcing program during fiscal years 2004 through 2006. As a result, the agency did not know how much it had spent on competitive sourcing activities and, consequently, whether it had complied with statutory spending limitations. We are also concerned about the usefulness of the cost savings that the Forest Service reported to Congress. The Forest Service could not provide us with sufficient data to verify the accuracy of the reported savings and excluded substantial costs from the savings calculations. Although the agency followed OMB guidance in calculating savings, we believe the guidance provides the Forest Service with the latitude to include the other costs we identified—some of which were substantial. Including these costs would have provided Congress with a more realistic picture of the extent to which the Forest Service's competitive sourcing program is saving taxpayers money. To improve the Forest Service's management of its competitive sourcing program, we recommend that the Secretary of Agriculture direct the Chief of the Forest Service to take the following five actions: Revise the Green Plan to establish an implementation schedule that takes into account resource limitations. Develop clear guidance on when and how to identify inherently governmental and core-commercial activities so they are excluded from competitive sourcing competitions. Develop a strategy for assessing the cumulative effect of competitive sourcing competitions on the Forest Service's firefighting and emergency response capabilities. Collect complete and reliable cost data on competitive sourcing activities to ensure that the Forest Service is able to comply with the appropriations acts' spending limitations. Ensure that the savings reported to Congress are a realistic measure of the actual savings resulting from A-76 competitions by (1) verifying the accuracy of the data and (2) including costs such as planning costs and transition costs when calculating the savings. We provided a draft of this report to the U.S. Department of Agriculture for review and comment. The Forest Service responded, generally agreeing with our recommendations but expressing concerns about some of the specific findings and conclusions in the report. We present the agency's concerns and our responses to them in appendix III. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution for 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of Agriculture, and the Chief of the Forest Service. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or nazzaror@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix IV. This report discusses the extent to which the U.S. Department of Agriculture's (USDA) Forest Service has (1) plans and guidance to help implement its competitive sourcing program effectively and (2) sufficient cost data to ensure that it complied with its competitive sourcing statutory spending limitations and accurately reported its competitive sourcing savings to Congress for fiscal years 2004 through 2006. To address these two objectives, we focused on Forest Service competitive sourcing activities between fiscal years 2004 and 2006. For the first objective, we compared the Forest Service's Green Plan, including draft versions of the plan, with previous GAO work identifying the elements of effective strategic plans. We also reviewed the guidance that the Office of Management and Budget (OMB) and USDA provided to the Forest Service on how to construct its Green Plan. Furthermore, we examined how the Forest Service scoped the three competitions that it completed during this time frame. Specifically, we examined how the Forest Service ensured that it did not include inherently governmental and core-commercial activities in the competitions, and how it assessed the impact that competitions could have on the nation's ability to fight wildland fires and respond to other emergencies. Finally, we examined the guidance currently available to Forest Service employees responsible for scoping competitive sourcing competitions and interviewed Forest Service officials familiar with the agency's competitive sourcing activities. To respond to the second objective, we examined the appropriations acts for fiscal years 2004 through 2006 to determine which costs incurred by the Forest Service during those fiscal years were subject to the spending limitations. Appendix II explains this analysis in greater detail. We then asked the Forest Service to provide us with cost data for activities associated with its competitive sourcing program, including cost data we had determined were subject to the spending limitations. We assessed the reliability of these data and found them unreliable as a measure of the costs and savings associated with the program. Specifically, our review of cost data from USDA's Federal Financial Information System (FFIS) and subsequent discussions with Forest Service officials, including the Chief Financial Officer, confirmed that the Forest Service had not issued formal guidance on which costs were to be charged to competitive sourcing job codes in FFIS, and that there was a lack of institutionalized policies and procedures to guide employees in tracking the time and costs associated with competitive sourcing activities. As a result, Forest Service officials, including the Chief Financial Officer, agreed that it was likely that employees used job codes inconsistently and that the accounting of competitive sourcing expenditures between fiscal years 2004 and 2006 was unreliable and inaccurate. We also asked the Forest Service for data to substantiate the annual savings resulting from competitive sourcing competitions it reported to Congress in fiscal years 2004 through 2006. For the three competitions completed during this time frame, Forest Service officials either could not provide us with the methodology used to calculate the savings or could not provide us with all of the data used to calculate the savings. 
As a result, we were unable to replicate the Forest Service's savings calculations, and thus we determined the reported numbers were unreliable because data reporting strategies were absent or inconsistent, there were no internal processes for verifying the accuracy of the data, and there was no documentation with which we could do so ourselves. Finally, in the course of our work, we identified categories of costs that were not included in the Forest Service's savings calculations. We include examples of these costs in this report to show their magnitude relative to the savings reported to Congress; however, we did not independently verify the accuracy of these costs. We conducted our work between October 2006 and January 2008 in accordance with generally accepted government auditing standards. Appendix II: Limitations on Use of Appropriations for "Competitive Sourcing Studies and Related Activities" The Forest Service relies on both federal employees and private contractors to perform commercial activities—activities performed by the government that are also regularly performed in the commercial marketplace—such as data entry, photocopying, and other administrative support services. To determine whether to convert the performance of such activities from federal employees to private contractors, or vice versa, the Forest Service follows the competitive process set out in Office of Management and Budget (OMB) Circular No. A-76, Performance of Commercial Activities (May 29, 2003), as revised, which establishes executive branch policy for the competition of commercial activities. The Forest Service is also subject to the requirements of the Federal Activities Inventory Reform Act of 1998 (FAIR Act), which requires agencies to develop inventories of their activities, and the President's Management Agenda (PMA), launched in 2001, which includes a governmentwide "competitive sourcing" initiative to increase agencies' use of the circular's competitive process and to make that use more efficient and effective. To implement these requirements, the Forest Service established a competitive sourcing program that is administered by the Competitive Sourcing Program Office. A provision in each of the annual appropriations acts funding the Forest Service for fiscal years 2004 through 2006 limited the amount of appropriated funds available to the Forest Service for "competitive sourcing studies and related activities" to between $2 million and $5 million (spending limitations). Five members of the U.S. Senate requested that we determine which costs incurred by the Forest Service in fiscal years 2004 through 2006 were subject to the spending limitations. In our view, these spending limitations apply to all costs under the Forest Service's competitive sourcing program, except for costs incurred to carry out the FAIR Act. OMB Circular No. A-76 includes, in appendix B, detailed guidance on the competitive process a federal agency should follow to determine whether an activity should be performed by a public or private source. 
The process is divided into three stages: (1) a precompetition preliminary planning stage, consisting, at minimum, of nine specified steps; (2) a competition stage that starts with a formal public announcement of a standard or streamlined competition between private and public sources and ends with announcing a performance decision; and (3) a postcompetition accountability stage, which mandates such actions as transferring performance of the activities to the winning entity, maintaining a database to track the execution of competitions, monitoring performance, and submitting reports to OMB. To avoid the mistakes it made during earlier competitions, the Forest Service has supplemented this process since 2004 by first conducting studies to determine the feasibility of carrying out this competitive process (feasibility studies) for each activity. In 1998, Congress enacted the FAIR Act, which requires federal agencies to submit to OMB their annual inventories of commercial activities and, after a period of review and consultation with OMB, transmit the final inventories to Congress and make them available to the public. Appendix A to OMB Circular No. A-76 (Inventory Process) implements the FAIR Act. In August 2001, the Administration launched the PMA as its reform agenda for improving management and performance in the federal government. The PMA includes a competitive sourcing initiative to simplify and improve the process for choosing private or public sources. To implement the PMA competitive sourcing initiative, the Forest Service stated in its fiscal year 2004 budget justification that it planned to conduct competitions for activities associated with a total of 11,000 full-time employees or their equivalents (FTEs), and that competitive sourcing costs were expected to rise. Concerned about how the Forest Service had implemented the PMA competitive sourcing initiative, the House and Senate appropriations committees included language in the appropriations bills for fiscal year 2004 that led to the enactment of the following spending limitation: “Of the funds appropriated by this Act, not more than $5,000,000 may be used in fiscal year 2004 for competitive sourcing studies and related activities by the Forest Service.” The Forest Service’s appropriations acts for fiscal years 2005 and 2006 included spending limitations with identical language, except that the maximum amount available for this purpose was $2 million and $3 million, respectively. At issue here is determining the range of Forest Service activities that are covered by the appropriations acts’ spending limitations on “competitive sourcing studies and related activities.” Construing the meaning of a statutory provision starts with the statutory language. The appropriations acts for fiscal years 2004 through 2006 define a “competitive sourcing study” for purposes of the spending limitations as “a study on subjecting work performed by Federal Government employees or private contractors to public-private competition or on converting the Federal Government employees or the work performed by such employees to private contractor performance under the Office of Management and Budget Circular A–76 or any other administrative regulation, directive, or policy.” Unfortunately, by defining a “competitive sourcing study” as a “study,” albeit in the context of private versus public source performance, the appropriations acts do not clearly indicate the breadth of Forest Service activities that fall within the term’s meaning.
At the very minimum, however, the phrase includes the competition stage (the second stage) of the OMB Circular No. A-76 process because it represents the formal structured process of soliciting and evaluating proposals from public and private sources. The Forest Service takes the position that the annual spending limitations apply only to costs attributable to the competition stage. We disagree, as does the U.S. Department of Agriculture’s General Counsel, who provides legal assistance to the Forest Service. Even if we were to read the word “study” as narrowly as the Forest Service apparently does, the additional words “and related activities” in this context make clear that the statutory language includes more than simply the competitions. The statutes do not define “related activities” or set out specific criteria for identifying them. While a broad range of activities are “related,” in the sense of having some logical connection to competitive sourcing studies, we believe Congress intended to focus on those activities that the Forest Service performs as part of its implementation of the President’s competitive sourcing initiative. In other words, “related activities” are those activities the Forest Service performs if it is considering whether to conduct a public-private competition. Under the Forest Service’s competitive sourcing program, no competition is conducted until the completion of a feasibility study, the objective of which is to evaluate whether an activity should be subject to the A-76 competitive sourcing process. Likewise, no competition is conducted under OMB Circular No. A-76 until the completion of the precompetition stage (the first stage), which also includes tasks such as conducting market research and developing an acquisition plan. Feasibility studies and precompetition planning activities are thus prerequisites to a competition and, therefore, in our view, “related activities.” Similarly, the postcompetition stage is a required consequence of conducting a competition. The Forest Service’s Competitive Sourcing Program Office supports and manages these and other activities in the Forest Service’s competitive sourcing program. Because these activities further the Forest Service’s objective of conducting competitions under the competitive sourcing initiative, we view them as “related activities” subject to the spending limitation. While it can be argued that the words “and related activities” include activities the Forest Service performs to carry out the FAIR Act, such as preparing inventories and reporting to Congress, we think the better view is that these costs are not encompassed in this limitation. Congress by law expressly requires these activities, even absent any other competitive sourcing program activities. Moreover, complying with the FAIR Act does not further the Forest Service’s progress on the PMA competitive sourcing initiative, and it was the Forest Service’s cost to implement this initiative to which the congressional committees were reacting. Our interpretation is consistent with the interpretation provided to us by the USDA Office of General Counsel.
Accordingly, we conclude that the spending limitations on “competitive sourcing studies and related activities” included in the appropriations acts funding the Forest Service for fiscal years 2004 through 2006 apply to all costs attributable to the Forest Service’s competitive sourcing program, including the following: conducting feasibility studies to evaluate which activities should be subject to the competitive sourcing process set out in OMB Circular No. A-76; engaging in the competitive sourcing process set out in OMB Circular No. A-76, including activities related to precompetition planning, competitions between sources, and postcompetition accountability; and managing the agency’s competitive sourcing program. The appropriations acts’ spending limitations, however, do not apply to costs incurred for complying with the FAIR Act, which the Forest Service must do even if it applies no efforts to competitive sourcing.

The following are GAO’s comments on the Chief of the Forest Service’s letter dated December 31, 2007.

1. We believe that the Forest Service’s comments misstate our position regarding its guidance for ensuring that key work activities are excluded from A-76 competitions. Our report does not state that the Forest Service has no guidance, but rather that it lacks clear guidance. Without clear guidance, we do not believe that the Forest Service’s current process can offer sufficient safeguards to ensure that key work activities are excluded from A-76 competitions, especially in light of the agency’s plan to examine the activities of two-thirds of its workforce. For example, regarding the FAIR Act inventory, we found significant fluctuations in the percentages of inherently governmental activities in the inventory data for fiscal years 2004 through 2006, suggesting that the Forest Service had difficulty classifying positions when preparing the inventory. In addition, Forest Service officials told us that the inventory data represent only a “rough snapshot” of the inherently governmental and core-commercial activities within the Forest Service, and that much additional work must be done to identify specific activities suitable for competitions. Finally, the Forest Service officials responsible for the three completed competitions—IT infrastructure, road maintenance, and fleet maintenance—told us that they developed their own methodologies to classify inherently governmental and core-commercial activities because they did not find the agency’s FAIR Act inventory data useful and received little guidance on how to identify inherently governmental and core-commercial activities. Nevertheless, these officials were confident that they successfully identified and exempted from competition inherently governmental and core-commercial activities because of, among other reasons, the clearly commercial nature of the activities. However, the three completed competitions may not be representative of future competitions, particularly if the Forest Service proceeds with its plan to examine the work activity of two-thirds of its workforce.

2. As we state in our report, OMB guidance on how to calculate competitive sourcing savings reported to Congress does not specify all of the costs that should be included in the calculations, thus providing the Forest Service with some discretion on which costs to include.
Although the Forest Service followed OMB guidance in calculating savings, we believe the guidance provides the Forest Service with the latitude to include the other costs we identified—some of which substantially reduce or even exceed the savings reported to Congress. Including these costs would have provided Congress with a more realistic picture of the extent to which the Forest Service’s competitive sourcing program is saving American taxpayers’ money.

3. At the Forest Service’s request, we excluded from our report much of the information we received related to termination of the Serco contract because negotiations between the Forest Service and Serco are ongoing. However, excluding all mention of the types of possible additional costs would provide the false impression that the savings already reported to Congress resulting from the fleet maintenance competition have been realized, when in fact they are being called into question.

4. Although the Forest Service does not believe it exceeded the spending limitations, even under the broader interpretation, we found that it does not have the data to substantiate this claim. Specifically, as we state in our report, the Forest Service did not collect complete and reliable cost data related to its competitive sourcing program during fiscal years 2004 through 2006 because it did not have a cost accounting system sufficient to track costs related to competitive sourcing. Furthermore, in consultation with Forest Service officials, we agreed that it was not feasible to reconstruct cost data for competitive sourcing activities for these years because doing so would require a significant amount of resources and would not likely provide reliable data. Because the Forest Service was unable to provide us with complete and reliable cost data and it was not feasible to reconstruct the data, neither the Forest Service nor GAO can determine with an appropriate degree of certainty if the Forest Service exceeded the appropriations acts’ spending limitations. In addition, we acknowledge in our report that the Forest Service has made efforts to improve its policies and guidance on how to establish job codes and how employees are to use them for tracking purposes, including a directive specifying that the cost of performing some precompetition planning activities be charged to competitive sourcing job codes.

In addition to the individual named above, Andrea Wamstad Brown, Assistant Director; F. Abe Dymond; Charles T. Egan; Lauren S. Fassler; Peter Grinnell; David Perkins; Aaron Jay Shiffrin; and Carol Herrnstadt Shulman made key contributions to this report.

Competitive sourcing is aimed at promoting competition between federal employees and the private sector as a way to improve government operations. Key work activities—those that are either inherently governmental or core to the agency’s mission—are generally exempt from competitions. In fiscal year 2004, Congress began placing spending limitations on the Forest Service’s competitive sourcing program because of concerns about how the program was managed. Also, like other agencies, the Forest Service must report annually to Congress on the savings achieved from any competitions it conducted.
GAO was asked to determine the extent to which the Forest Service has (1) plans and guidance to help implement its competitive sourcing program effectively and (2) sufficient cost data to ensure that it complied with its spending limitations and accurately reported its savings to Congress for fiscal years 2004 through 2006. To answer these objectives, GAO examined the agency's strategic plan, guidance, and available cost data for competitive sourcing and interviewed key agency officials. The U.S. Department of Agriculture's Forest Service lacks a realistic strategic plan and adequate guidance to help ensure that it can effectively implement its competitive sourcing program. For example, the Forest Service's current strategic plan is unrealistic because it does not take into account the likely availability of personnel and funding resources needed to implement the plan. Furthermore, the Forest Service lacks sufficient guidance on identifying key work activities that should be excluded from competitions. Although Forest Service officials do not believe that inappropriate work activities have been included in competitions the agency has held, without clear guidance the Forest Service remains at risk of doing so. The agency also lacks a strategy on how to assess the cumulative effect that competitions could have on its ability to fight wildland fires and respond to other emergencies. Outsourcing a large number of federal jobs to the private sector could, over time, reduce the number of available responders. For fiscal years 2004 through 2006, the Forest Service lacked sufficiently complete and reliable cost data to (1) demonstrate its compliance with statutory spending limitations on its competitive sourcing activities and (2) accurately report competitive sourcing savings to Congress. Regarding compliance with spending limitations, the Forest Service did not collect cost data on all activities related to competitive sourcing because it believed that some costs were not subject to the limitations. For example, the Forest Service did not collect data on employees' salaries related to studying the feasibility of conducting a competition—a key component of its competitive sourcing process. GAO has interpreted the statutory spending limitations to generally apply to all costs attributable to the Forest Service's competitive sourcing program. Moreover, because the Forest Service's cost data used to determine compliance with statutory spending limitations were not reliable, the Forest Service cannot know if it exceeded the limitations. Regarding the savings achieved from its competitions, the Forest Service reported to Congress savings totaling over $38 million between fiscal years 2004 and 2006. However, the Forest Service could not provide GAO with sufficient data or the methodology it used to calculate savings derived from competitions. In addition, GAO found that the Forest Service did not consider certain costs, which were substantial, in its savings calculations. As a result, Congress may not have an accurate measure of the savings from the Forest Service's competitive sourcing competitions during this period.
Currently there is no universally accepted or official definition of the gig economy or gig workers. The characteristics of gig workers that we focus on in this report (self-employed individuals who perform single projects or tasks on demand for pay) share some characteristics with self-employment as it is understood by other federal agencies, but our characterization does not directly align with those agencies’ views. Although other definitions have been used to measure the number of gig workers, our focus was on the non-occupational skills and training needed by workers who participate in gig work. We characterized gig workers as self-employed individuals who participate in gig work part-time or full-time. However, not all self-employed individuals are gig workers. For example, we did not consider self-employed individuals who own their own storefront business, such as a restaurant owner, as gig workers because they do not generally work on a project basis. However, our characterization would include a caterer who works on a project basis with multiple clients. At the same time, however, we excluded project-based workers who are not self-employed, such as those who are employees of a staffing agency. WIOA is the centerpiece of the federal government’s workforce system, and its purposes include the following: (1) provide individuals, particularly those with barriers to employment, increased access to and opportunities for employment, education, training, and support services to succeed in the labor market; (2) provide America’s workers with the skills and credentials necessary to secure and advance in employment with family-sustaining wages; and (3) provide activities through the state and local workforce development systems to increase the employment, retention, earnings, and economic self-sufficiency of participants, among other purposes. Programs administered by DOL and Education provide services such as job search assistance, career counseling, occupational skills training, classroom training, and on-the-job training. In addition, WIOA provides for a workforce system that is accessible to all job seekers to make it easier for them to access the services they need to obtain skills and employment. Basic career services include eligibility determinations, initial skill assessments, and program referrals. WIOA also provides for state workforce development boards (state boards) to help oversee a system of over 550 local workforce development boards (local boards) that, in turn, deliver services through a network of over 2,400 American Job Centers. Under WIOA, state and local boards have the flexibility to respond to the needs of their local labor markets, which they do, in part, through the analysis of local labor market information. Federal funding under WIOA for core programs is allocated to states using statutory formulas that, in part, generally reflect state and local unemployment data. According to DOL, in program year 2017, total appropriated funding for activities related to Adult and Dislocated Worker programs and Wagner-Peyser Employment Services was more than $2.7 billion. Local boards may also leverage other funding sources, such as DOL discretionary grants and state and local government funding, among other sources. WIOA included new performance measures for states and local areas generally related to participants who exit WIOA programs.
These include the percentage of participants employed in unsubsidized employment in the second and fourth quarters after program exit, and median earnings in unsubsidized employment in the second quarter after exit, among other measures. Generally, performance outcome targets are negotiated between DOL’s regional offices and state workforce boards and take into consideration the characteristics of the job seekers being served as well as the local labor market conditions. Both states and local areas are subject to certain consequences for failing to achieve their negotiated targets. To support performance outcomes, states must use, consistent with state law, quarterly wage records to verify participants’ employment and earnings. States typically satisfy this requirement by using their Unemployment Insurance (UI) wage record data. Earnings from self-employment, however, are generally not included in state UI wage records because self-employed workers are not considered to be in employment that is covered by the unemployment insurance system. In these cases, boards are allowed to use supplemental wage information—for example, case management notes, administrative records, and surveys of participants, among other things—to support employment and earnings outcomes. DOL has long encouraged workforce boards to provide entrepreneurship and self-employment training by funding several projects and grants to demonstrate the role of the workforce development system in this area (see table 1). DOL also provides technical assistance to workforce boards through online training, other training events, and its WorkforceGPS website. WorkforceGPS is an interactive online platform designed to build the capacity of the public workforce system through knowledge sharing. It offers resources and peer-to-peer connections and supplements other technical assistance efforts. SBA also has programs that provide support and training to self-employed workers, including the following:

Small Business Development Centers (SBDCs) – SBDCs, with more than 900 service delivery points, provide training and technical assistance to current or prospective business owners.

SCORE Association – SCORE is a nationwide, nonprofit organization of working and retired business executives who donate time to counsel and provide workshops for small business owners.

Women’s Business Centers (WBCs) – WBCs are a national network that provides educational resources to help women start and grow successful small businesses.

Because there is no universally accepted definition of gig work, characteristics of individuals engaging in this work and the types of work they perform depend on how this population is studied. Each of the three quantitative studies we reviewed defined its population of workers, type of work performed, and time frames of work performed differently. Therefore, each study provides a snapshot of this emerging area, and the results are not directly comparable (see table 2). According to the studies we reviewed, about 40 percent or more of workers engaging in online gig or informal work were 34 years old or younger. The JPMorgan Chase Institute (JPMorgan Chase) study found that 43 percent of account holders participating in online labor platforms were 18-34 years old. The Pew Research Center study found that an estimated 42 percent of gig workers were 18-29 years old, whereas 19 percent of participants were 50 years old or older.
Similarly, the Federal Reserve Board (Federal Reserve) found that an estimated 41 percent of those completing tasks through online platforms were under age 30. The JPMorgan Chase and Pew Research Center studies found online gig workers to have relatively lower incomes. Specifically, the JPMorgan Chase study found that their bank account owners who earned income through online labor platforms from October 2012 through September 2015 had lower median monthly incomes ($2,514) than the JPMorgan Chase study population labor force at large ($3,351). The Pew Research Center study had similar results, with an estimated 49 percent of online gig workers making a family income of $30,000 or less annually, whereas only 14 percent made $75,000 or more. On the other hand, the Federal Reserve estimated that almost half (about 45 percent) of those completing online tasks through websites had an annual family income greater than $75,000. Some individuals who participate in gig work are also engaged in other employment or are students. According to the Pew Research Center study, an estimated two-thirds of online gig workers indicated that they were employed either full- or part-time in another position, and 23 percent were enrolled as either full- or part-time students. The Federal Reserve study estimated that approximately half of those performing online tasks through websites reported that they were also paid employees, with about an additional 10 percent reporting that they were self-employed. According to the studies we reviewed, gig work is performed in a range of occupations and at various skill levels. The Pew Research Center study reported that the types of work found through online platforms varied among physical tasks, simple online tasks, and relatively complex tasks. For example, according to the Pew Research Center, the estimated 8 percent of Americans who earned money in the last year through online job platforms performed at least one of the following tasks: online tasks through digital job platforms, such as data entry or taking surveys (an estimated 5 percent); ride-hailing services (an estimated 2 percent); shopping for or delivering household items, or cleaning or doing laundry for a client (an estimated 1 percent each); and other types of work taken on through online platforms (an estimated 2 percent). This work ranged from relatively basic tasks, such as moving furniture or working as a parking lot attendant, to more highly specialized work, such as providing legal services, manuscript editing, or IT consulting. According to three researchers we interviewed, recent attention has been paid to online gig work, but a larger number of individuals are participating in offline gig work. The Federal Reserve examined characteristics of survey respondents who engaged in various types of online and offline gig work. Online gig work included completing tasks that were identified or mediated through online platforms. Offline gig work included house cleaning, house painting, and yard or landscaping work; babysitting and/or child care services; and personal services such as picking up dry cleaning, providing moving assistance, and dog walking, among other services. Workers’ motivations for engaging in gig work included filling gaps in income and accommodation of work schedule preferences, according to the two studies for which we obtained survey results.
The JPMorgan Chase study reviewed the bank deposits of individuals in the months that they were actively participating in online platform work. This analysis found that earnings from online labor platforms tended to offset dips in other sources of income; therefore, the researchers suggested that individuals used earnings from the online labor platforms to substitute for reductions in other sources of earnings and in periods when workers were between jobs. Although JPMorgan Chase found that online gig work sometimes substitutes for other income sources, it also noted that these deposits contributed a sizeable portion of workers’ income—on average, 33 percent of total monthly income in months they were active on the platform—but were secondary to other income sources deposited to their accounts. The study noted that although the number of people participating in online gig work has increased, workers’ reliance on income from this work has remained stable over time, both in terms of the fraction of months when workers participated and total income earned. The Pew Research Center study found that providing workers with something to do in their spare time and filling in gaps in income were top motivations for online gig workers. About 42 percent of gig workers said they use online platforms for fun or something to do in their spare time, whereas about 37 percent said this work helps fill in gaps or fluctuations in income. The Pew Research Center study also estimated that income earned from online platforms was essential or important to approximately half of workers, and these workers tended to have lower levels of household income and education. Of the estimated 8 percent of survey respondents who said that they earned money in the last year from online platforms, approximately 56 percent reported the income was essential or important to their overall financial situation. Of those, an estimated 45 percent said that they use online platforms because they need to be able to control their schedule due to school, child care, or other obligations. About a quarter also said they use online platforms because it is fun or for something to do (about 28 percent), because there is a lack of other jobs where they live (about 25 percent), or to gain work experience they can take to other jobs (about 24 percent). Another study, conducted by the Institute for the Future (IFTF), provides a different perspective on the characteristics of online gig workers, using a qualitative analysis of interview responses to develop profiles of gig workers and their motivation. From those interviews, IFTF developed seven archetypes to describe different types of gig workers, such as workers wanting to build their own business and maximizing online platforms to do so; those navigating a life transition; or those optimizing income on a day-to-day basis (see app. II). The different stakeholder groups we interviewed described many of the same benefits to participating in gig work. All stakeholder groups mentioned the following: flexibility (e.g., to work around other responsibilities, such as childcare); autonomy (e.g., ability to set own hours and be “own boss”); income (e.g., supplemental or to fill in between jobs); and the ability to help build a business, a resume, or experience. Most stakeholder groups also mentioned a low barrier to entry and the ability to pursue a passion. All of the gig companies we interviewed said flexibility or autonomy, or both, were benefits.
Specifically, one company official said gig workers have the opportunity to be self-starters, determining when and where they want to work. Another company official said that the income provides workers the freedom to leave another job or make ends meet if they are laid off. Workers in three of our discussion groups also mentioned the benefits of flexibility and autonomy. Specific benefits mentioned by workers included being able to work on their own schedule, having control over pricing their services, and working from where they want.

Flexibility and Autonomy Are Benefits of Gig Work
“I have a young family and need the extra cash. I find using an online platform more convenient than word of mouth. [I] would like to use [the] opportunity to become [an] entrepreneur. I want to be my own boss 24 hours a day.” — Gig worker

According to local workforce board officials we interviewed, the flexibility and autonomy of gig work may be especially beneficial for some types of workers. For example, officials from 8 of the 11 workforce boards said workers who might benefit most from gig work include those who need flexible schedules, such as students who are going to school or are enrolled in training, and those with care-taking responsibilities. Gig work can also help workers develop job-seeking skills, provide work experience, or fill in gaps in resumes, according to officials from 10 of the 11 local workforce boards we interviewed. Specifically, a local board official said traditional companies do not like to see employment gaps; therefore, the board encourages its clients to pursue consulting or volunteer work so that there are no gaps on their resumes. The official added that gig work through short-term contracts is often viewed by traditional employers as valid employment and could provide the necessary work experience to help individuals secure more permanent, full-time work if desired. Moreover, another board official said that certain types of gig work could help workers enhance their capabilities—for example, by helping them better understand how a business runs and become more entrepreneurial with better self-employment skills. According to 9 of 11 local workforce board officials we interviewed, gig work could also be beneficial for people for whom traditional work may present challenges, such as individuals with disabilities, ex-offenders, low-income workers, or those who are unemployed or underemployed. For example, officials from one local board said gig work benefits harder-to-serve populations because it allows them to earn income while going to school and by working from home. The different stakeholder groups we interviewed also described many of the same downsides to participating in gig work. All stakeholder groups mentioned the following: lack of financial security; lack of benefits; and increased risk arising from increased liability and a high rate of failure in self-employment, among other reasons. Many stakeholders also mentioned lack of stability and challenges to running a business, including not understanding the responsibilities of being self-employed, underestimating the length of time it can take for a new business to become profitable, or not finding the necessary capital. Officials from 10 of the 11 local workforce boards we interviewed said that a lack of financial security was a downside to gig work. Specifically, workers may have difficulty earning enough to achieve self-sufficiency, have unpredictable or low income, or experience a high rate of failure with self-employment.
Further, officials from one board that works with short-term contractors in the information technology field said it can be challenging for U.S. workers to make sufficient pay from work obtained through professional online platforms because workers face global competition from those in other countries who are willing to do the same work for less pay. Similarly, lack of financial security was a concern among workers in three of our four discussion groups, who said low or unpredictable income was a downside, and this factor was also mentioned by two of the gig companies we interviewed. Some workers also said they have performed work using an online platform for which they did not receive payment. All stakeholder groups we interviewed said the lack of employer-provided benefits—such as health insurance, unemployment insurance, vacation pay, and sick pay, among other benefits—was a downside to gig work. Specifically, one official who provides training to entrepreneurs said it is difficult for gig workers to project how much their health care will cost and consequently how much they need to work to cover those costs.

“Clients get savvy and cancel on the work. [You] clock in and out on your mobile device, but if a customer complains the company will cancel payment from that job. Do a 4-hour job and then not get paid because [the] customer complained that some task did not get done.” — Gig worker who arranged to perform housekeeping services through an online gig company

Lack of Benefits for Gig Workers
“Lack of health insurance possibly keeps workers who want to be in the on-demand economy in a job they do not like. This lack of available health insurance hinders the entrepreneurial spirit and has an effect on workers’ risk tolerance.”

Four of the six stakeholder groups we interviewed said a lack of knowledge about running a business was a challenge to working in the gig economy. This knowledge is important because, according to one stakeholder we interviewed, drivers for ridesharing companies are building their own business, but some do not realize the consequences of engaging in this type of work. Further, officials from one local workforce board said that because barriers to engaging in gig work are low, workers may enter into this type of work before knowing how to manage their self-employment or understanding its ramifications. Officials at two other local workforce boards also said gig workers may not understand that they may face a longer period of time than expected to become successful entrepreneurs or face challenges obtaining capital needed for their business. As previously mentioned, gig work can benefit some harder-to-serve populations; however, there may be additional downsides for gig workers who lack technical skills needed to successfully execute the work or those who are low income or urgently in need of a job, according to many stakeholders. Specifically, officials from one local workforce board said there is a difference between gig work for low-skilled workers and higher-skilled workers. They said gig work is not as viable in the long run for workers who do not have the necessary technical skills. As a result, these workers could remain caught in lower-skilled gig work even though their ultimate goal might be to develop a career. An official from a gig company that provides professional services said workers are less likely to be successful in gig work without a specialized skill that is in demand.
In addition, an official from an organization that offers information for workers providing ridesharing services said that type of gig work helps workers get by, but does not provide them with future pay raises or a career. Stakeholders we interviewed said that gig workers need various self-employment skills. In particular, stakeholders identified soft skills, which include the ability to communicate well, and other traits such as entrepreneurial spirit, tolerance of risk and uncertainty, common sense, and ethics, among others. In addition to soft skills, stakeholders said that gig workers need business skills. These skills help them manage the functions that employers generally provide for their employees, according to one workforce board official. Business skills mentioned by stakeholders include the following: marketing, branding, and having skills in the “business of you”; financial literacy and management, including how to obtain benefits and how to estimate costs and price services; digital literacy; and understanding legal rights and obligations, including how to write and read contracts. Some of the stakeholders we interviewed said that gig workers need to understand their tax responsibilities. In more traditional employment, employers are generally responsible for withholding employment taxes—such as income tax and Social Security and Medicare taxes—for their employees. According to the Internal Revenue Service, however, gig workers, by virtue of being self-employed, must track their income and expenses, determine if they must pay quarterly estimated taxes, and know how to file their annual return. However, not understanding the responsibilities of their self-employed status could mean that gig workers will not be compliant with tax law. In the past, we reported a lower level of tax compliance by self-employed individuals than by employees. Even workers who understand that they need to pay their own taxes but then fail to plan appropriately could face paying a large amount in taxes at the end of the year. One official at a gig company said that someone earning $60,000 to $80,000 a year, with no taxes withheld, may find it challenging to pay those taxes when they are due. In addition, some gig workers might not be aware of these responsibilities because they consider themselves employees of the gig company, according to two stakeholders. For example, a gig company official said that some of its gig workers have asked to speak to the company’s human resources manager when they have an issue even though the company considers them to be self-employed business owners. The official said the company has to remind these workers that they are not represented by the company’s human resources department. Further, the Pew Research Center study found that an estimated 26 percent of gig workers who used online platforms considered themselves to be employees of the platform they used to find work. Moreover, the study found that workers who reported that income from gig work was “important” or “essential” were much more likely to view themselves as employees. Some stakeholders also said marketing was an important skill. According to a gig company official, marketing helps workers grow their business to the point where they can spend more time engaged in income-generating work than trying to find additional clients.
Gig workers need to understand they are selling their skills, or, in other words, are in the “business of you,” and how they present themselves online can affect their opportunities, said one official from an organization that helps low-income workers navigate the gig economy. Accurate cost estimating and pricing were also considered to be important. A gig company official said that pricing is a skill that involves including all expenses, while leaving room for profit and accounting for some risk. Not all workers may be suited for gig work even when provided training in these skills, according to officials from one workforce board. Using a DOL Workforce Innovation Fund grant, this board provided a program to help workers become self-employed and engage in gig work. Board officials said about 30 to 50 percent of the participants leave the program because they decide that this type of work is not for them. These officials said they consider this a positive outcome because the training helps participants recognize the challenges of self-employment before taking steps that might have long-term consequences, such as using their life savings to start a business. The 11 local workforce boards we interviewed all served gig workers in some way. In some cases, local workforce boards directly provided services to gig workers, and in other cases they indirectly provided services to gig workers through efforts that more broadly targeted workers in specific sectors or workers engaged in or interested in self-employment (see fig. 1). Direct services ranged from recruitment to career services, including providing job coaching and workshops on finding gig work and developing self-employment skills. Overall, officials from 9 of the 11 boards described these efforts as responses to perceived needs in their local labor markets. Among the selected boards that offered workshops, several covered similar information related to self-employment. The most common topics included (1) marketing strategies, such as the use of social media; (2) financial management, including pricing and tax implications; and (3) the basics of contracting. In addition, one of New York City’s workshops provided information on the legal aspects of participating in gig work, and Gainesville’s provided information on obtaining work through online platforms. Selected boards also provided indirect support services that were not specifically targeted to gig workers but served populations that could include them. For example, Miami’s construction sector initiative did not specifically train workers for gig work, but the skills it supported could be applied to gig work in the construction industry, according to officials. The Seattle board provided self-employment workshops on starting or growing a small business and marketing that were not specifically designed for gig workers but could help them obtain gig work, or involved skills that are transferable to gig work, according to board officials. Furthermore, Seattle has a tool that can help individuals determine how much income is needed to be self-sufficient in their local area, which could also be helpful for individuals engaging in gig work. The selected boards’ activities varied in the degree to which they focused on those enrolled in WIOA programs. Some board activities, such as recruitment events for ride-sharing services in Chicago and Dallas, were open to the community, including WIOA participants.
In Chicago, the workforce board recruited job seekers for a ridesharing company, and some of them then used this work as an income source while they received WIOA services, scheduling it around their WIOA training activities, according to board officials. In Northern Virginia, by contrast, all participants in the board’s self-employment initiative were enrolled in WIOA. Boards varied in their practices for listing gig work at job centers. Officials from five local boards said that their job centers could list opportunities with gig companies, but they were not always sure whether the job centers did so. State and local board officials in two states, however, cited federal and state provisions as the reason they required a traditional employer-employee relationship to list jobs at their job centers. DOL officials said that the department does not set requirements for the types of jobs that states can post in their job centers, other than certain nondiscrimination requirements. They said that states might impose their own limitations, but most states aim to increase the number of businesses listing jobs with their job centers. While officials from all local boards that we interviewed said they provided services to gig workers either directly or indirectly, some nonetheless raised concerns about placing job seekers in gig work, questioning the appropriateness of this type of work under WIOA. Specifically, officials from some local boards said they saw their primary mission under WIOA as helping job seekers with barriers to employment find traditional work rather than task- or project-based work that may lack benefits or job security. Additionally, an official from one local board said that using WIOA training funding for services for gig workers would not be a good use of taxpayer dollars because the ultimate goal for job seekers is permanent employment. In addition, officials from five local boards noted that their staff may not be prepared to share information about gig work opportunities or may be uncomfortable doing so. Because of these concerns, officials from one state and one local board said that serving self-employed workers would require a change in “mindset” on the part of the workforce system, as local boards may be cautious about offering such services. Officials from other boards, some of which were not included in our analysis of board services for gig workers, expressed interest in studying the gig economy. For example, in November 2016, the San Diego board published a report examining opportunities and challenges associated with the gig economy, including skill and training needs and the potential role workforce boards could play. Furthermore, state boards are also interested in issues facing gig workers. In response to this interest, the National Governors Association has convened a group of states interested in the gig economy, including officials from several state boards, according to an association official. The SBA also provides self-employment services for workers, including those who engage in gig work. SBA serves the self-employed through its SCORE mentoring program, Small Business Development Centers, and Women’s Business Center program. A SCORE association official said that the association is developing personal branding webinars that could appeal to gig workers, reflecting a potentially growing need for services.
Officials at two Small Business Development Centers and a Women’s Business Center said that the centers provide workshops, counseling, and referrals at no or low cost and are serving gig workers. The workshops cover topics such as business plan development; marketing; financial management, including taxes; and social media. In addition to governmental programs, information on self-employment skills for gig workers is provided through self-employment workers’ associations and some gig companies. Specifically, officials from an association for self-employed workers said that they provide their members with information on non-occupational skills. They said that they also host experts on topics such as how to manage episodic income, file taxes, market their services, and negotiate with clients. Officials from another group serving self-employed workers said their efforts mostly focus on filing taxes, and the information is provided through webinars and in-person seminars. Concern over the tax challenges of self-employed workers prompted the group to partner with a university-based tax policy center to study the issue. Three of the six online gig companies we interviewed also said that they provide some information on non-occupational skills to their gig workers. For example, an official at one company said that their online platform has a learning center to help gig workers offering services through the platform understand common business issues, such as marketing. This learning center also includes a forum where gig workers can post questions, share advice, and network with each other. Officials at an online platform company that allows professional gig workers to access and perform projects said that they provide information on pricing and tax filing as well as a suite of business tools to help these gig workers track tax payments and number of hours billed. Company officials said that they provide this service to address gaps in information that hindered these workers in the past, such as documenting a steady income for the purposes of obtaining a mortgage. An official at a ridesharing company said that it has partnered with other organizations to provide information to help drivers, for example, by helping them understand their tax filing obligations and save for retirement. Nongovernmental organizations have also played other roles in supporting training for gig workers. For example, in New York City’s initiative, partners such as the Writers Guild of America, East and Brooklyn Workforce Innovations helped recruit writers for the board’s training programs. These programs integrated workshops on financial management, marketing and sales, as well as the legal aspects of gig work for workers in the media and entertainment sector, according to local board officials. Several federal agencies responsible for data on the workforce and the economy have ongoing efforts to collect and study data about gig work as part of their respective missions. These efforts include the following:

Bureau of Labor Statistics (BLS): For the first time since 2005, BLS conducted a survey of contingent workers in May 2017 in an attempt to better understand workers engaging in alternative forms of work. In addition to the questions asked in previous years, the survey added four new questions on whether workers performed short, paid tasks in the previous week that were arranged through an online platform.
While the 2017 survey data are expected to provide a more refined picture of such work arrangements than was previously possible, BLS officials said that those data will not be available at the state and local level because of the small sample size. In addition, BLS has a Career Outlook website on gig work that provides information about the prevalence of gig workers and their occupations, among other information.

Department of Commerce: The Commerce Department also issued a report on online platforms that allow gig workers to access projects or tasks. The report used publicly available data to assess the size and scope of platform transactions and examine the potential effect on consumers and workers.

Census Bureau: Within the Commerce Department, Census Bureau (Census) officials collaborated with researchers from the University of Maryland to study the gig economy by comparing Internal Revenue Service (IRS) self-employment tax data and BLS self-employment household survey data from 1996 through 2012. According to Census officials, they are undertaking this research to better understand why IRS data suggest long-term growth in self-employment while BLS data suggest little or no growth.

Federal Reserve Board: As discussed previously, the Federal Reserve Board studied online and offline informal work that took place over a 6-month period to examine participant motives and attitudes. Federal Reserve officials indicated they will continue to annually monitor the enterprising and informal work activities of the U.S. adult population in the Survey of Household Economics and Decisionmaking (SHED) publication and public data posting.

Despite these efforts to better understand gig work at the national level, selected state and local workforce board officials said that they lacked local labor market data on gig workers and that it would be helpful if DOL shared information about other boards’ efforts to serve these workers. Officials from all of the state boards we interviewed said that they lacked labor market information on gig workers. An official with one local workforce board said that getting good, reliable data about who is looking for gig work is a challenge. If boards do not have these data, it could be difficult for them to determine the prevalence of gig workers in the local labor market and to design services for them. Officials from the local workforce boards that undertook specific efforts to serve gig workers said that they did so because they perceived that gig workers were important to the local labor market (e.g., technology workers in San Francisco and media and entertainment workers in New York City) or because the local area faced high levels of unemployment and under-employment (e.g., Gainesville, Florida). Officials from most local boards we interviewed said that their job centers had not necessarily seen much interest in gig work, which, in their opinion, could be a reflection of the fact that clients are typically seeking traditional employment, are unaware of the gig economy, or may not see job centers as the appropriate place to obtain gig work, among other reasons. DOL officials said that gig work may be part of “in-demand” jobs in multiple sectors and that BLS is considering how to quantify this work; however, to date, data in this area are limited. Given these challenges, officials at all state and local boards we interviewed said that it would be helpful if DOL shared information about other boards’ efforts to serve these workers.
DOL disseminates information on promising practices, but boards may not be readily able to find information related to gig workers. Federal internal control standards state that agency management should analyze information related to achieving agency goals and communicate that information to external parties. To communicate with the public workforce system and develop its capacity to implement innovative approaches, DOL uses a portal, known as WorkforceGPS, which includes information from program evaluations and communities of practice for those interested in specific topics, among other resources. However, finding information in WorkforceGPS that is relevant to gig workers requires multiple searches and does not consistently yield relevant results. When we conducted a search of the portal, we found relevant documents under terms such as “self-employment” and “entrepreneurship,” but did not consistently find those same documents under search terms such as “gig,” “on-demand,” and other related terms. Given that online gig work is an emerging area for researchers and policymakers and there is no universally accepted definition or term used to describe it, some workforce boards may not associate gig work with self-employment and know to search for information under that term unless a specific reference is provided. Linking “gig” and related terms to self-employment may be particularly important because DOL officials indicated that recent and soon-to-be issued evaluations from self-employment initiatives funded through the Workforce Innovation Fund grants may yield information relevant to gig workers. Furthermore, although WorkforceGPS includes several communities of practice, it does not currently include one on gig workers. Helping workforce boards easily find and share promising practices relevant to gig workers would allow them to fully and efficiently help individuals who want to engage in this type of work. Documenting employment and earnings outcomes for gig workers may be challenging for workforce boards. When participants are enrolled in WIOA programs, workforce boards are required to document and report participants’ employment and earnings outcomes. However, unlike Unemployment Insurance (UI) wage record data used to verify outcomes for those placed in traditional employment under WIOA, supplemental wage information, used to verify these outcomes for gig workers, may be challenging for boards to obtain. Officials from seven local boards that provided services for gig workers and three state boards also said that the requirements to document employment and earnings outcomes are a disincentive to serving those workers. In addition, officials from three state boards said that, because of the challenges in obtaining supplemental information, outcomes of self-employed workers, including gig workers, are not included in the performance reports that their states forward to DOL, with officials from one state specifically pointing to cost and data reliability issues. In December 2016, DOL issued additional guidance stating that under WIOA, states may continue to support outcomes using supplemental wage information. However, if states use supplemental wage information to verify employment, they must also use it to verify earnings outcomes, a change from prior DOL policy.
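To make concrete what boards must support with wage data, the sketch below works through the arithmetic of two of the WIOA measures described earlier: the share of program exiters employed in the second quarter after exit, and median earnings in that quarter. It is an illustration only; the participant records and dollar figures are invented, and in practice the earnings inputs would come from state UI wage records or, for self-employed gig workers, from supplemental wage information.

```python
# Minimal, hypothetical sketch of two WIOA outcome measures discussed in this
# report: the share of exiters employed in the second quarter after program
# exit, and their median earnings in that quarter. All records are invented;
# real inputs would come from UI wage records or supplemental documentation.
from statistics import median

# (participant_id, exit_quarter), with calendar quarters numbered sequentially
exiters = [("p1", 10), ("p2", 10), ("p3", 11)]

# (participant_id, quarter, reported_earnings)
wage_records = [
    ("p1", 12, 6500),  # earnings two quarters after a quarter-10 exit
    ("p2", 14, 7200),  # earnings four quarters after exit, none in quarter 12
    ("p3", 13, 5800),  # earnings two quarters after a quarter-11 exit
]

def earnings_in_quarter(pid, quarter):
    """Sum a participant's reported earnings for one calendar quarter."""
    return sum(e for p, q, e in wage_records if p == pid and q == quarter)

# Earnings in the second quarter after each participant's exit
q2_earnings = [earnings_in_quarter(pid, exit_q + 2) for pid, exit_q in exiters]
employed = [e for e in q2_earnings if e > 0]

print(f"Employed in 2nd quarter after exit: {len(employed) / len(exiters):.0%}")
print(f"Median 2nd-quarter earnings: ${median(employed):,.0f}")
```

For a gig worker with no UI wage record, the earnings entry above has no authoritative source unless the board collects supplemental documentation, which is the gap that the December 2016 guidance and the boards’ stated concerns address.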
DOL officials said they did not believe the new supplemental wage information requirements would limit boards in serving self-employed workers. Rather, using supplemental wage information to verify employment and earnings outcomes holds boards accountable while allowing them to receive credit for assisting self-employed workers, according to DOL officials. They also said that workforce boards asked for updated guidance on using supplemental information in serving self-employed workers, including gig workers. In June 2017, DOL issued guidance on the use of supplemental wage information, which stated that worksheets, signed and attested to by program participants, are one acceptable type of documentation for self-employed workers. While gig work is not a new phenomenon, the advent of online platforms has made this type of self-employment more readily accessible, possibly for the first time, for many individuals who are seeking new career opportunities or supplemental income. On the one hand, this newfound access to gig work can help individuals across many industries, skill levels, and motivations seek and earn extra sources of income. On the other hand, some of these individuals may not be aware of its risks, such as financial insecurity and lack of benefits, and responsibilities, such as tax implications. Easier online access to gig work is relatively new, but federal agencies such as DOL and SBA have had ongoing programs designed to assist people with the challenges of self-employment, such as marketing, pricing, and tax implications. In recent years, DOL has funded several grants and programs to gather information specific to improving self-employment programs. However, DOL’s system for receiving and sharing data with workforce boards, WorkforceGPS, does not consistently link relevant resources for these workers to terms that are currently being used to describe the gig economy, such as “gig.” Although there is no universally accepted term or definition for this type of work, WorkforceGPS’s organization of information under topical headers does not capture the connection between gig work and self-employment, making this information less readily available. Therefore, local boards that want or need to help gig workers may have limited knowledge of how or where to find federal resources or may not know that they exist. Further, boards could benefit from other boards’ experiences in this area. Cross-referencing common terms for gig work with existing promising practices related to self-employment or establishing a community of practice could help boards share relevant information (an illustrative sketch of such cross-referencing follows this section). This is especially important now that evaluations from DOL’s Workforce Innovation Fund grants on self-employment have been recently released and could be instructive. In addition, while helping individuals obtain gig work is not the mainstay of the nation’s workforce system, some local boards have nonetheless attempted to serve these job seekers as the gig economy continues to evolve and in instances where such workers are a salient feature of the local labor market. For these particular boards, promising practices on documenting employment outcomes, for example, could lessen the challenges they face in reporting performance outcomes for gig workers served. 
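To illustrate the kind of cross-referencing described above, the following minimal sketch shows how a portal’s search index could map gig-related terms onto resources already tagged under self-employment. It is a hypothetical example only—the search terms are drawn from this report, but the resource titles and the mapping itself are illustrative assumptions, not DOL’s actual WorkforceGPS implementation.

    # Hypothetical sketch (not DOL's actual system): map emerging
    # gig-economy search terms to the self-employment tag a portal
    # already uses, so a search for "gig" also surfaces documents
    # tagged "self-employment".
    SYNONYMS = {
        "self-employment": {"gig", "on-demand", "freelance",
                            "sharing economy", "platform economy",
                            "1099 economy"},
    }

    RESOURCES = {
        # Illustrative titles only; tags stand in for topical headers.
        "Self-Employment Training program evaluation": {"self-employment"},
        "Entrepreneurship toolkit": {"self-employment", "entrepreneurship"},
    }

    def expand(term):
        """Return the set of tags a search term should match."""
        tags = {term}
        for canonical, variants in SYNONYMS.items():
            if term == canonical or term in variants:
                tags.add(canonical)
        return tags

    def search(term):
        """Return every resource whose tags overlap the expanded term."""
        tags = expand(term.lower())
        return [title for title, doc_tags in RESOURCES.items()
                if doc_tags & tags]

    print(search("gig"))        # both resources, via "self-employment"
    print(search("freelance"))  # same result through the synonym map

Under this sketch, a board searching for “gig” would retrieve the same documents as one searching for “self-employment,” which is the effect the recommendation below is intended to achieve.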
The Assistant Secretary, Employment and Training Administration, should take steps to help workforce boards readily find and share information on promising practices related to serving gig workers by, for example, cross-referencing promising practices on self-employment and other relevant practices in WorkforceGPS with terms commonly used to describe the gig economy, by creating a community of practice on this topic, or other mechanisms, as appropriate. (Recommendation 1) We provided a draft of this product to the Department of Labor (DOL) for comment. In its comments, reproduced in appendix III, DOL agreed with our recommendation. The department stated that it will continue to explore the dynamics of the gig economy and make resources available to the workforce system, employers, workers, researchers, policymakers, and others. DOL also provided technical comments, which we incorporated as appropriate. We also provided a draft of this product to the Departments of Commerce and Education and the Small Business Administration, but they did not have comments. In addition, we provided relevant report sections to the authors of the studies included in our report for their technical comments. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the appropriate congressional committees; Secretaries of the Departments of Commerce, Education, and Labor and the Administrator of the Small Business Administration; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or brownbarnesc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. We examined (1) what is known about the characteristics of gig workers and the work they perform, including its benefits and downsides; (2) the non-occupational skills and training stakeholders indicate are needed by gig workers and how they are provided; and (3) the challenges, if any, that selected federal agencies and workforce development boards cite in providing supports for gig workers. To address our research questions, we: 1. reviewed relevant federal laws, regulations, and guidance; 2. conducted a review of relevant literature; 3. reviewed DOL’s technical assistance website and federal internal control standards that address communication with external parties; and 4. conducted interviews with federal agencies, state agencies, local workforce boards, gig workers, gig companies, and researchers and other stakeholders. We conducted this performance audit from February 2016 to September 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We reviewed the Workforce Investment Act of 1998 (WIA) and the Workforce Innovation and Opportunity Act (WIOA) to identify changes in performance accountability requirements between the two laws. 
We also reviewed regulations from the Departments of Labor (DOL) and Education implementing WIOA and DOL Training and Employment Guidance Letters related to performance measures and self-employment, to determine outcome information reporting requirements and the guidance DOL has provided to assist boards in submitting documentation. To address our first and second research questions, we conducted a literature review to identify key government, industry, and academic studies examining the gig workforce, which we characterized for the purposes of this report as single projects or tasks that a self-employed individual performs on demand for pay. Our characterization included both online and offline gig work. We searched relevant databases, such as ProQuest Research Library, Academic OneFile, Scopus, and SocSci Search, to identify reports, dissertations, web references, working papers, and journal and magazine articles from 2010 to the present. In order to identify a wide range of studies, we used search terms including: gig economy, 1099 economy, on-demand economy, sharing economy, and informal work. Our literature searches generated 383 references. In addition to the sources identified through literature searches, we also reviewed studies recommended by federal agencies, experts, and internal and external stakeholders. We excluded from our review a total of 330 studies that were published prior to 2010 or did not pertain to our scope, such as those that primarily (1) examined worker classification issues or worker benefits and protections; (2) did not examine gig workers who were self-employed or included workers providing goods or capital assets where the information on workers providing labor services could not be broken out separately from those providing goods or capital assets; (3) included non-U.S. workers; or (4) focused on too narrow a subset of the gig worker population (i.e., immigrants or workers only in a particular geographic region or occupation). We conducted detailed reviews of the nine studies that met these initial screening criteria. Our reviews involved an assessment of each study’s research methodology, including its data quality, research design, and analytic techniques, as well as a summary of each study’s relevant findings and conclusions. We also assessed the extent to which each study’s data and methods were appropriate to support its findings that we included in our report. In addition, we conducted interviews with the studies’ authors, as necessary, to gain a better understanding of their methodology and findings. Through this process, we identified and reviewed four key studies that met our topical and methodological criteria. To address our third objective, in March 2017 we searched DOL’s technical assistance website portal, WorkforceGPS, using terms related to gig work and reviewed federal internal control standards to assess how information related to gig workers was being communicated to workforce boards. Specifically, we searched the WorkforceGPS portal for terms related to gig work to determine if those searches would result in documents that could help workforce boards serve gig workers (e.g., documents related to self-employment). 
We searched on the following terms: “gig,” “on-demand,” “freelance,” “independent contractor,” “self-employed,” “self-employment,” “contractor,” “unincorporated self-employed,” “project-based,” “nonemployer,” “sharing economy,” “platform economy,” “digital economy,” “1099 economy,” and “entrepreneurship.” We selected these terms because they reflect common terms used within literature and by researchers and other stakeholders we interviewed to describe gig work. We reviewed the results of these searches and documented those that did not return any documents. We also reviewed the communities of practice list within WorkforceGPS. To address all objectives, we conducted interviews with officials at DOL, including officials of the Employment and Training Administration—headquarters and regional staff—and the Bureau of Labor Statistics; the Department of Education; the Department of Commerce, including the Census Bureau; and the Small Business Administration. We also interviewed officials of the Federal Reserve Board about related research efforts.

State and Local Workforce Development Boards

We conducted semi-structured interviews with officials at 11 local workforce development boards in 9 states to identify efforts to serve gig workers (see table 3). Local boards were selected based on a range of criteria, including information collected during initial interviews with DOL and local board officials and industry groups, a private sector report with information on cities with the largest percentages of workers with income from online platforms, and other relevant documents, including our past work on local workforce boards. Of the 11 local workforce boards we interviewed, we visited five—Chicago, Illinois; Gainesville, Florida; San Francisco and Sunnyvale, California; and Vienna, Virginia. Overall, in selecting the boards that we visited in person, we considered factors such as the potential volume of gig workers in the local area; the range of board services provided to gig workers; the opportunity to learn about different forms of gig work (i.e., both online and offline gig work); the role that DOL grants may have played in supporting boards’ efforts to serve gig workers; and opportunities to interview gig workers and gig companies. In these locations, we also interviewed American Job Center directors to examine issues related to gig workers at the service delivery level. In our interviews with local boards, we asked about relevant services for gig workers, non-occupational skills and training, and the benefits and downsides of gig work. Our findings from these interviews cannot be generalized to all state or local workforce boards. In addition, we interviewed another workforce board, the San Diego Workforce Partnership, about a report it prepared on gig workers, Gig Economy: Special Report (San Diego, California: Nov. 2016). That interview was limited to the report’s methodology and recommendations and did not address issues related to the skills and training needs of gig workers or the benefits and downsides of gig work. In addition, in almost all states where we interviewed local workforce boards, we conducted phone interviews with state workforce boards, state workforce agencies, or both. In our interviews with state agencies, we asked about issues associated with reporting performance outcomes for gig workers and sources of labor market information related to gig workers, among other things. 
In four of the locations we visited—Chicago, Illinois; Gainesville, Florida; and San Francisco and Sunnyvale, California—we held discussion groups with gig workers to obtain their perspectives on gig work. The discussion groups included gig workers—a total of 15 workers across all groups—who had obtained work through online platforms, as well as those who provided gig services offline. The services they offered varied from housecleaning, household repair, delivery, and ridesharing, to professional services such as consulting and software development. In each location, we provided participant selection criteria to local board officials who then recruited workers for the groups. Specifically, we requested that the workers be participating in or seeking gig work and be current or recent clients of the workforce board. The discussion groups, which we conducted from July through October 2016, involved structured small-group discussions designed to gain more in-depth information about specific issues that could not easily be obtained from another method, such as a survey or individual interviews. Our discussions included multiple groups with varying characteristics but some similarity on one or two homogeneous characteristics, in this case, experience in the gig economy. Additionally, we conducted interviews with select companies that hire gig workers. These companies included both online platform companies and companies and organizations that hired gig workers offline. We selected online platform companies from a private-sector database of such companies and interviewed the database creator to assess the process used to compile it. We interviewed online platform companies that reflected a range of services that gig workers provide, such as in the areas of transportation, food and goods delivery, and professional services. The offline companies and organizations we interviewed included those in food services, construction, a manufacturing incubator, and non-profits that serve ex-offenders and workers with disabilities. We interviewed gig companies about the non-occupational skills and training needed by gig workers, and the benefits and downsides of gig work. We also conducted interviews with 30 non-federal researchers and other stakeholders. We identified these interviewees through our literature review and recommendations from experts and internal stakeholders. We interviewed these researchers about studies they had conducted on gig workers and about gig work in general, specifically the non-occupational skills and training needed by gig workers, and the benefits and downsides of gig work for workers. The individual researchers and research organizations we interviewed included: Alan B. Krueger, Professor of Economics and Public Affairs, Princeton University; Andrew Reamer, Research Professor, The George Washington University; Annette Bernhardt, Senior Researcher, Center for Labor Research and Education, University of California, Berkeley; Aspen Institute, Future of Work Initiative; Lawrence Katz, Professor of Economics, Harvard University; Marina Gorbis, Executive Director, Institute for the Future; and Penn Schoen Berland. We also interviewed other stakeholders to obtain a range of perspectives on gig work, and to supplement the information obtained through our gig worker discussion groups by obtaining the perspectives of other entities, some of which represent, advocate for, and share information with gig workers. 
The other stakeholders we interviewed included: American Association of Community Colleges; Center for Regional Economic Competitiveness; Economic Policy Institute; Institute for Work and the Economy; Labor Market Information Institute; National Association for the Self-Employed; National Association of State Workforce Agencies; National Association of Workforce Boards; National Employment Law Project; National Governors Association; Renaissance Entrepreneurship Center, Women’s Business Center, Small Business Development Center of San Francisco; The Gig Work Project; The Rideshare Guy blog; The Workers Lab; Walter & Elise Haas Fund; and Washington State University, Washington Small Business Development Center. In addition to the contact named above, Andrew Sherrill (Director), Clarita Mrena (Assistant Director), Meeta Engle (Assistant Director), Amy Anderson, Christopher Morehouse, Derry Henrick, Amy Sweet, Holly Dye, Philip Farah, Alexander Galuten, Kirsten Lauber, Serena Lo, Anna Maria Ortiz, and Mimi Nguyen made significant contributions to this report. In addition, key support was provided by Nora Boretti, Elizabeth Curda, Julianne Cutts, Clifton Douglas, Adam Gomez, Charlie Jeszeck, Yvonne Jones, Michael Kniss, Benjamin Licht, Karen O’Conor, William Shear, Almeta Spencer, Tom Short, and Ariel Vega. Contingent Workforce: Size, Characteristics, Earnings, and Benefits. GAO-15-168R. Washington, D.C.: April 20, 2015. Workforce Investment Act: Local Areas Face Challenges Helping Employers Fill Some Types of Skilled Jobs. GAO-14-19. Washington, D.C.: December 2, 2013. Entrepreneurial Assistance: Opportunities Exist to Improve Programs’ Collaboration, Data-Tracking, and Performance Management. GAO-12-819. Washington, D.C.: August 23, 2012. Workforce Investment Act: Innovative Collaborations between Workforce Boards and Employers Helped Meet Local Needs. GAO-12-97. Washington, D.C.: January 19, 2012.

In 2015, GAO reported that millions of workers do not have standard work arrangements. Some barriers to self-employed gig work have been reduced by online platforms, and while the public workforce system is accessible to all job seekers, it is unclear how the system is helping gig workers obtain the necessary skills and training to be successful. GAO was asked to review the skill and training supports needed by gig workers. GAO examined (1) what is known about the characteristics of gig workers and the work they perform, including its benefits and downsides, (2) the non-occupational skills and training that stakeholders indicate are needed by gig workers and how they are provided, and (3) the challenges that selected federal agencies and workforce development boards cite in providing supports for gig workers. GAO conducted a literature review and interviewed officials at federal agencies and a nongeneralizable sample of 8 state and 11 local workforce boards—locations selected based on the likelihood of a large number of gig workers, among other factors—and gig company officials, gig workers, researchers, and other stakeholders. GAO also reviewed relevant federal laws, regulations, and guidance. Studies GAO reviewed suggest that workers who engage in on-demand, or “gig” work, differ in their characteristics and types of work performed, but each of these studies defined these workers differently. 
There is no universally accepted or official definition of gig workers, but for its report, GAO identified their characteristics as follows: self-employed individuals providing labor services and completing single projects or tasks on demand for pay. Gig work can be obtained or performed either offline or online. According to the three quantitative studies GAO reviewed, up to about 40 percent of workers earning money through online gig work (i.e., applications or websites that connect workers with customers) were 34 years old or younger. They worked in a variety of occupations ranging from providing legal services to moving furniture. According to stakeholders GAO interviewed, benefits of gig work included flexibility in scheduling and autonomy, while downsides included a lack of financial security and benefits, such as health and unemployment insurance. According to stakeholders, gig workers need soft skills and business skills, many of which can be provided by the Department of Labor's (DOL) existing programs. Soft skills include customer service, time management, and self-motivation, and business skills include marketing and financial literacy and management. In addition, gig workers need an understanding of legal matters (e.g., contracts) associated with gig work. The nation's public workforce system, including workforce boards, which DOL oversees, and other partners can provide some of these self-employment skills based on local area needs. Officials at the local workforce boards GAO interviewed said they served gig workers either directly, for example, through recruitment events and by providing career services, or indirectly, such as through general self-employment services. For instance, the Chicago board recruited drivers for a ridesharing company and the San Francisco board helped gig workers in the media and visual arts develop their portfolios and build their networks. Officials from selected state and local workforce boards cited two broad challenges in providing supports for gig workers: a lack of information on promising practices related to gig workers and difficulties in reporting their employment-related outcomes. Officials from all 19 state and local boards GAO interviewed expressed interest in other boards' efforts to serve gig workers. DOL shares promising practices through its searchable online portal, but has not fully linked “gig” and related terms to relevant information on self-employment. Helping boards easily find and share promising practices relevant to gig workers would allow boards that want to help gig workers to do so more fully and efficiently. State and local board officials also explained that it is challenging to verify employment and earnings outcomes for the self-employed as they are required to do under the Workforce Innovation and Opportunity Act (WIOA). Because these outcomes for gig workers may be difficult to verify, a board's performance under WIOA may be negatively affected and result in penalties. Consequently, DOL officials said workforce boards had asked for guidance on collecting outcome information. In June 2017, DOL issued clarifying guidance to boards on how to collect information that can be used to report outcomes required under WIOA. GAO recommends that DOL take steps to help workforce boards find and share information on promising practices related to serving gig workers. DOL agreed with GAO's recommendation. 
The President’s national strategy for homeland security and the Homeland Security Act of 2002 provide for securing our national borders against terrorists. Terrorist and criminal watch lists are important tools for accomplishing this end. Simply stated, watch lists can be viewed as automated databases that are supported by certain analytical capabilities. To understand the current state of watch lists, and the possibilities for improving them, it is useful to view them within the context of such information technology management disciplines as database management and enterprise architecture management. Since the September 11th terrorist attacks, homeland security—including securing our nation’s borders—has become a critical issue. To mobilize and organize our nation to secure the homeland from attack, the administration issued, in July 2002, a federal strategy for homeland security. Subsequently, the Congress passed and the President signed the Homeland Security Act, which established DHS in January 2003. Among other things, the strategy provides for performance of six mission areas, each aligned with a strategic objective, and identifies major initiatives associated with these mission areas. One of the mission areas is border and transportation security. For the border and transportation security mission area, the strategy and the act specify several objectives, including ensuring the integrity of our borders and preventing the entry of unwanted persons into our country. To accomplish this, the strategy provides for, among other things, reform of immigration services, large-scale modernization of border crossings, and consolidation of federal watch lists. It also acknowledges that accomplishing these goals will require overhauling the border security process. This will be no small task, given that the United States shares a 5,525-mile border with Canada and a 1,989-mile border with Mexico and has 95,000 miles of shoreline. Moreover, each year, more than 500 million people legally enter our country, 330 million of them noncitizens. More than 85 percent enter via land borders, often as daily commuters. Our nation’s current border security process for controlling the entry and exit of individuals consists of four primary functions: (1) issuing visas, (2) controlling entries, (3) managing stays, and (4) controlling exits. The federal agencies involved in these functions include the Department of State’s Bureau of Consular Affairs and its Bureau of Intelligence and Research, as well as the Justice Department’s Immigration and Naturalization Service (INS), the Treasury Department’s U.S. Customs Service (Customs), and the Transportation Department’s Transportation Security Administration (TSA). The process begins at the State Department’s overseas consular posts, where consular officers are to adjudicate visa applications for foreign nationals who wish to enter the United States. In doing so, consular officials review visa applications, and sometimes interview applicants, prior to issuing a visa. One objective of this adjudication process is to bar from entry any foreign national who is known or suspected to have engaged in terrorist activity, is likely to engage in such activity, or is a member or supporter of a known terrorist organization. Foreign nationals (and any other persons attempting to enter the United States, such as U.S. citizens) are to be screened for admission into the United States by INS or Customs inspectors. 
Generally, this consists of questioning the person and reviewing entry documents. Since October 2002, males aged 16 or over from certain countries (for example, Iran, Iraq, Syria, and the Sudan) are also required to provide their name and U.S. address and to be photographed and fingerprinted. In addition, airline officials use information provided by TSA to screen individuals attempting to travel by air. As discussed in the next section, requirements for checking a person against a watch list differ somewhat, depending upon whether the person arrives at a land-, air-, or seaport. After foreign nationals are successfully screened and admitted, they are not actively monitored unless they are suspected of illegal activity and come under the scrutiny of a law enforcement agency, such as the Department of Justice’s Federal Bureau of Investigation (FBI). Also, when foreign nationals depart the country, they are not screened unless they are males aged 16 years or over from certain countries referenced above, or are leaving by air. According to TSA, all passengers on departing flights are screened prior to boarding the plane. Figure 1 is a simplified overview of the border entry/exit process. Watch lists are important tools that are used by federal agencies to help secure our nation’s borders. These lists share a common purpose—to provide decisionmakers with information about individuals who are known or suspected terrorists and criminals, so that these individuals can be prevented from entering the country, apprehended while in the country, or apprehended as they attempt to exit the country. As shown in figure 2, which builds on figure 1 by adding watch list icons and associating them with the agencies that maintain the respective lists, watch lists collectively support nine federal agencies in performing the four primary functions in the border security process. Specifically: When a person applies for a visa to enter the United States, State Department consular officials are to check that person against one or more watch lists before granting a visa. When a person attempts to enter the United States by air or sea, INS or Customs officials are required to check that person against watch lists before the person is allowed to enter the country. In addition, when a person attempts to enter the United States by air, INS or Customs officials check him or her against watch lists provided by TSA prior to allowing him or her to board the plane. Persons arriving at land borders may be checked, but there is no requirement to do so. The exception, as previously discussed, is for males aged 16 or over from certain countries, who are required to be checked. Once a watch list identifies a person as a known or suspected terrorist, INS, Customs, or airline officials are to contact the appropriate law enforcement or intelligence organization (for example, the FBI), and a decision will be made regarding the person’s entry and the agency’s monitoring of the person while he or she is in the country. When a person exits the country by plane, airline officials are to check that person against watch lists. In performing these roles, the agencies use information from multiple watch lists. For example, U.S. National Central Bureau for Interpol officials told us that they provide information to the agencies involved in entry control, exit control, and stay management. 
In addition to highlighting the importance of watch lists for border security, the President’s national strategy cites problems with these lists, including limited sharing. According to the July 2002 strategy, in the aftermath of the September 11th attacks it became clear that vital watch list information stored in numerous and disparate federal databases was not available to the right people at the right time. In particular, federal agencies that maintained information about terrorists and other criminals had not consistently shared it. The strategy attributed these sharing limitations to legal, cultural, and technical barriers that resulted in the watch lists being developed in different ways, for different purposes, and in isolation from one another. To address these limitations, the strategy calls for integrating and reducing variations in watch lists and overcoming barriers to sharing the lists. It also calls for developing an enterprise architecture for border security and transportation (see next section for a description of an enterprise architecture). More specifically, the strategy provides for developing a consolidated watch list that would bring together the information on known or suspected terrorists contained in federal agencies’ respective lists. If properly developed, enterprise architectures provide clear and comprehensive pictures of an entity, whether it is an organization (for example, a federal department, agency, or bureau) or a functional or mission area that cuts across more than one organization (for example, grant management, homeland security, or border and transportation security). These architectures are recognized as essential tools for effectively and efficiently engineering business operations and the systems and databases needed to support these operations. More specifically, enterprise architectures are systematically derived and captured blueprints or descriptions—in useful models, diagrams, and narrative—of the mode of operation for a given enterprise. This mode of operation is described in both (1) logical terms, such as interrelated business processes and business rules, information needs and flows, data models, work locations, and users, and (2) technical terms, such as hardware, software, data, communications, and security attributes and performance standards. They provide these perspectives both for the enterprise’s current, or “as is,” environment and for its target, or “to be,” environment, as well as a transition plan for moving from the “as is” to the “to be” environment. Using enterprise architectures is a basic tenet of effective IT management, embodied in federal guidance and commercial best practices. When developed and used properly, these architectures define both business operations and the technology that supports these operations in a way that optimizes interdependencies and interrelationships. They provide a common frame of reference to guide and constrain decisions about the content of information asset investments in a way that can ensure that the right information is available to those who need it, when they need it. As discussed in the previous section, enterprise architectures facilitate delivery of the right information to the right people at the right time. To this end, these architectures include data models, or logical representations of data types and their relationships, which are used to engineer physical data “stores,” or repositories. 
When engineered properly, these data stores are structured in a way that effectively and efficiently supports both shared and unique enterprise applications, functions, and operations. The structure of these data stores, whether they are paper records or automated databases, can take many forms, employing varying degrees of centralization and standardization. Associated with the structures being employed are opportunities and limitations to effective and efficient information exchange and use. Generally, these structures can be viewed along a continuum. At one extreme, databases can be nonstandard, both in terms of metadata and the technologies that manage the data, and they can be decentralized, meaning that they were built in isolation from one another to support isolated or separate, “stovepiped” applications, functions, and operations. In this case, integrating the databases to permit information exchange requires the development of unique, and potentially complex and costly, point-to-point interfaces (hardware and software) that translate the data or bridge incompatibilities in the technology. Further, because each database may need a direct link to every other database, the number of relationships, and thus interfaces, that have to be built and maintained can grow roughly with the square of the number of databases (a simple sketch following this discussion illustrates the difference). Structuring databases in this way can quickly evolve into an overly complex, unnecessarily inefficient, and potentially ineffective way to support mission operations. (See fig. 3 for a simplified diagram conceptually depicting this approach to structuring databases.) At the other extreme, databases can be structured to recognize that various enterprise applications, functions, and operations have a need for the same data or sets of data, even though they may need to use them in different ways to support different mission applications, functions, and operations. If engineered properly, these database structures allow for greater use of standards, in terms of both data definitions and technology, and are more centralized, although the option exists to create subsidiary databases—known as data warehouses and data marts—to permit more uniquely configured and decentralized data sources to support specific and unique mission needs. Further, since the core data in these subsidiary databases are received from a corporate database (or databases), the need for interfaces to translate data or connect incompatible technologies is greatly reduced. Structuring databases in this way can minimize complexity and maximize efficiency and mission effectiveness. (See fig. 4 for a simplified diagram conceptually depicting this approach to structuring databases.) Terrorist watch lists are developed, maintained, or used by federal, state, and local government entities, as well as by private-sector entities, to secure our nation’s borders. Twelve such lists are currently maintained by federal agencies. These lists contain various types of data, from biographical data—such as a person’s name and date of birth—to biometric data—such as fingerprints. Nine federal agencies, which prior to the establishment of DHS spanned five different cabinet-level departments, currently maintain 12 terrorist and criminal watch lists. These lists are also used by at least 50 federal, state, and local agencies. The above-mentioned departments are the Departments of State, Treasury, Transportation, Justice, and Defense. Table 1 shows the departments, the associated nine agencies that maintain watch lists, and the 12 watch lists. 
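The following minimal sketch makes the interface-count contrast between the two database structures concrete. The numbers are hypothetical and purely illustrative—this is not a model of any actual watch list system—but the arithmetic shows why point-to-point integration (fig. 3) scales so much worse than a consolidated, corporate-database approach (fig. 4):

    # Illustrative sketch: worst-case number of integration interfaces
    # for n databases connected point to point versus through one
    # shared corporate store.

    def point_to_point(n):
        """Each pair of databases needs its own interface: n(n-1)/2 links."""
        return n * (n - 1) // 2

    def hub_and_spoke(n):
        """Each database needs only one link to the corporate database."""
        return n

    for n in (3, 9, 12):
        print(f"{n:>2} databases: {point_to_point(n):>2} point-to-point "
              f"interfaces vs. {hub_and_spoke(n):>2} hub links")
    # 12 databases: 66 point-to-point interfaces vs. 12 hub links

The roughly 17 interfaces that, as discussed later in this report, the agencies actually maintain fall well short of the 66 worst-case links for 12 lists, yet they already illustrate the cost of sustaining pairwise connections.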
The 12 watch lists support the federal agencies involved in the border security process. Figure 5, which builds on figure 2, provides a graphical representation identifying the name of each of the lists and relating them to the agencies that maintain the lists and are involved in performing the four border security functions: issuing visas, controlling entries, managing stays, and controlling exits. The 12 watch lists do not all contain the same types of data, although some types are included in all of the lists. At the same time, some types of data are included in only a few of the lists. More specifically, all of the lists include the name and date of birth; 11 include other biographical information (for example, passport number and any known aliases); 9 include criminal history (for example, warrants and arrests); 8 include biometric data (for example, fingerprints); 3 include immigration data (for example, visa type, travel dates, departure country, destination country, country visited, arrival dates, departure dates, and purpose of travel); and 2 include financial data (for example, large currency transactions). Figure 6 shows the data types that are included in each watch list. Effective sharing of information from watch lists and of other types of data among multiple agencies can be facilitated by agencies’ development and use of well-coordinated and aligned policies and procedures that define the rules governing this sharing. One effective way to implement such policies and procedures is to prepare and execute written watch list exchange agreements or memorandums of understanding. These agreements would specify answers to such questions as what data are to be shared with whom, and how and when they are to be shared. Not all of the nine agencies have policies and procedures governing the sharing of watch lists. In particular, two of the agencies reported that they did not have any policies and procedures on watch list sharing. In addition, of the seven that reported having such policies and procedures, one did not require any written agreements. Further, the policies and procedures of the seven have varied. For example, one agency’s policies included guidance on sharing with other federal agencies as well as with state and local governments, but another’s addressed sharing only with other federal agencies. In addition, each agency had different policies and procedures on memorandums of understanding, ranging from one agency’s not specifying any requirements to others’ specifying in detail that such agreements should include how, when, and where data would be shared with other parties. The variation in policies and procedures governing the sharing of information from watch lists can be attributed to the fact that each agency has developed its own policies and procedures in response to its own specific needs. In addition, the agencies reported that they received no direction from the Office of Homeland Security identifying the needs of the government as a whole in this area. As a result, federal agencies do not have a consistent and uniform approach to sharing watch list information. The President’s homeland security strategy and recent legislation call for increased sharing of watch lists, not only among federal agencies, but also among federal, state, and local government entities and between government and private-sector organizations. Currently, sharing of watch list data is occurring, but the extent to which it occurs varies, depending on the entities involved. 
Further, these sharing activities are not supported by systems with common architectures. This is because agencies have developed their respective watch lists, and have managed their use, in isolation from each other, and in recognition of each agency’s unique legal, cultural, and technological environments. The result is inconsistent and limited sharing. According to the President’s homeland security strategy, watch list data sharing has to occur horizontally among federal agencies as well as vertically among federal, state, and local governments in order for the country to effectively combat terrorism. In addition, recent federal homeland security legislation, including the Homeland Security Act, USA PATRIOT Act of 2001, and the Enhanced Border Security and Visa Entry Reform Act of 2002, requires, among other things, increased sharing of homeland security information both among federal agencies and across all levels of government. The degree to which watch list data are being shared is not consistent with the President’s strategy and recent legislative direction on increased data sharing. Specifically, while federal agencies report that they are generally sharing watch list data with each other, they also report that sharing with organizations outside of the federal government is limited. That is, five of the nine agencies reported that they shared data from their lists with state and local agencies, and three reported that they shared data with private industry. Figure 7 visually summarizes the extent to which federal agencies share watch list data with each level of government (federal, state, and local) and with the private sector. As noted above, federal agencies are sharing either all or some of their watch list data with each other. However, this sharing is the result of each agency’s having developed and implemented its own interfaces with other federal agencies’ watch lists. The consequence is the kind of overly complex, unnecessarily inefficient, and potentially ineffective network that is associated with unstructured and nonstandard database environments. In particular, this environment consists of nine agencies—with 12 watch lists—that collectively maintain at least 17 interfaces; one agency’s watch list alone has at least 4 interfaces. A simplified representation of the number of watch list interfaces and the complexity of the watch list environment is provided in figure 8. A key reason for the varying extent of watch list sharing is the cultural differences among the government agencies and private-sector organizations involved in securing U.S. borders. According to the President’s strategy, cultural differences often prevent agencies from exchanging or integrating information. We also recently reported that differences in agencies’ cultures have been and remain one of the principal impediments to integrating and sharing information from watch lists and other sources. Historically, legal requirements have also been impediments to sharing, but recent legislation has begun addressing this barrier. Specifically, the President’s strategy and our past work have reported on legal requirements, such as security, privacy, and other civil liberty protections, that restrict effective information sharing. To address this problem, Congress has recently passed legislation that has significantly changed the legal framework for information sharing, which, when fully implemented, should diminish the effect of existing legal barriers. 
In particular, Congress has enacted legislation providing for agencies to have increased access to other agencies’ information and directing more data sharing among agencies. For example, section 701 of the USA PATRIOT Act broadened the goals of regional law enforcement’s information sharing to cover terrorist activities. The Enhanced Border Security and Visa Entry Reform Act expanded law enforcement and intelligence information sharing about aliens seeking to enter or stay in the United States. Most recently, the Homeland Security Act provides the newly created DHS with wide access to information held by federal agencies relating to “threats of terrorism” against the United States. Section 891 expresses the “sense of Congress” that “Federal, state, and local entities should share homeland security information to the maximum extent practicable.” Further, section 892 of the Act requires the President to prescribe and implement procedures for the sharing of “homeland security information” among federal agencies and with state and local agencies, and section 895 requires the sharing of grand jury information. The President’s homeland security strategy stresses the importance of information sharing and identifies, among other things, the lack of a common systems architecture—and the resultant incompatible watch list systems and data—as an impediment to systems’ interoperating effectively and efficiently. To address this impediment, the strategy proposes developing a “system of systems” that would allow greater information sharing across federal agencies as well as among federal agencies, state and local governments, private industry, and citizens. In order for systems to work more effectively and efficiently, each system’s key components have to meet certain criteria. In particular, their operating systems and applications have to conform to certain standards that are in the public domain, their databases have to be built according to explicitly defined and documented data schemas and data models, and their networks have to be connected. More specifically, critical system components would have to adhere to common standards, such as open systems standards, to ensure that different systems interoperate. One source for open system standards is the International Organization for Standardization. Also, these systems’ data would have to have common—or at least mutually understood—data definitions so that data could, at a minimum, be received and processed, and potentially aggregated and analyzed. Such data definitions are usually captured in a data dictionary. Further, these systems would have to be connected to each other via a telecommunications network or networks. When system components and data do not meet such standards, additional measures have to be employed, such as acquiring or building and maintaining unique system interfaces (hardware and software) or using manual workarounds. These measures introduce additional costs and reduce efficiency and effectiveness. The 12 automated watch list systems do not meet all of these criteria (see table 2). For example, they use three different types of operating systems, each of which stores data and files differently. Overcoming these differences requires the use of software utilities to bridge the differences between systems. Without such utilities, for example, a Windows-based system cannot read data from a diskette formatted by a UNIX-based system. Also, nine of the systems do not have software applications that comply with open system standards. 
In these cases, agencies may have had to invest time and resources in designing, developing, and maintaining unique interfaces so that the systems can exchange data. Further, five of the systems’ databases do not have a data dictionary, and of the remaining seven systems that do have data dictionaries, at least one is not sharing its dictionary with other agencies. Without both the existence and sharing of these data dictionaries, meaningful understanding of data received from another agency could require an added investment of time and resources to interpret and understand what the received data mean. Moreover, aggregation and analysis of the data received with the data from other watch lists may require still further investment of time and resources to restructure and reformat the data in a common way. Last, seven of the systems are not connected to a network outside of their agencies or departments. Our experience has shown that without network connectivity, watch list data sharing among agencies can occur only through manual intervention. According to several of these agencies, the manual workarounds are labor-intensive and time-consuming, and they limit the timeliness of the data provided. For example, data from the TIPOFF system are shared directly with the National Automated Immigration Lookout System through a regular update on diskette. Those data are then transferred from the National Automated Immigration Lookout System to the Interagency Border Inspection System. The President’s strategy attributes these differences to the agencies’ building their own systems to meet agency-specific mission needs, goals, and policies, without knowledge of the information needs and policies of the government as a whole. As noted and depicted in figure 8, this approach has resulted in an overly complex, unnecessarily inefficient, and potentially ineffective federal watch list sharing environment. As addressed in the preceding sections of this report, federal watch lists share a common purpose and support the border security mission. Nevertheless, the federal government has developed, maintains, and—along with state and local governments and private entities—uses 12 separate watch lists, some of which contain the same types of data. However, this proliferation of systems, combined with the varying policies and procedures that govern the sharing of each, as well as the architectural differences among the automated lists, creates strong arguments for list consolidation. The advantages of doing so include faster access, reduced duplication, and increased consistency, which can reduce costs and improve data reliability. Most of the agencies that have developed and maintain watch lists did not identify consolidation opportunities. Of the nine federal agencies that operate and maintain watch lists, seven reported that the current state and configuration of federal watch lists meet their mission needs, and that they are satisfied with the level of watch list sharing. However, two agencies supported efforts to consolidate these lists. The State Department’s Bureau of Consular Affairs and the Justice Department’s U.S. Marshals Service agreed that some degree of watch list consolidation would be beneficial and would improve information sharing. 
Both cited as advantages of consolidation the saving of staff time and financial resources by limiting the number of labor-intensive and time-consuming data transfers, and one also cited the reduction in duplication of data that could be realized by decreasing the number of agencies that maintain lists. The President’s strategy also recognizes that watch list consolidation opportunities exist and need to be exploited. More specifically, the strategy states that the events of September 11th raised concerns regarding the effectiveness of having multiple watch lists and the lack of integration and sharing among them. To address these problems, the strategy calls for integrating the numerous and disparate systems that support watch lists as a way to reduce the variations in watch lists and remove barriers to sharing them. To implement the strategy, Office of Homeland Security officials have stated in public settings that they were developing an enterprise architecture for border and transportation security, which is one of the six key mission areas of the newly created DHS. They also reported the following initial projects under this architecture effort: (1) developing a consolidated watch list that brings together information on known or suspected terrorists in the federal agencies’ watch lists, and (2) establishing common metadata or data definitions for electronic watch lists and other information that is relevant to homeland security (an illustrative sketch of such common definitions follows these conclusions). However, the Office of Homeland Security did not respond to our inquiries about this effort, and thus we could not determine the substance, status, and schedule of any watch list consolidation activities. Since then, the DHS Chief Information Officer told us that DHS has assumed responsibility for these efforts. Our nation’s success in achieving its homeland security mission depends in large part on its ability to get the right information to the right people at the right time. Terrorist and criminal watch lists make up one category of such information. To date, the federal watch list environment has been characterized by a proliferation of systems, among which information sharing is occurring in some cases but not in others. This is inconsistent with the most recent congressional and presidential direction. Our experience has shown that even when sharing is occurring, costly and overly complex measures have had to be taken to facilitate it. Cultural and technological barriers stand in the way of a more integrated, normalized set of watch lists, and agencies’ legal authorities and individuals’ civil liberties are also relevant considerations. To improve on the current situation, central leadership—spanning not only the many federal agencies engaged in maintaining and using watch lists, but also state and local government and private-sector list users—is crucial to introducing an appropriate level of watch list standardization and consolidation while still enforcing relevant laws and allowing agencies to (1) operate appropriately within their unique mission environments and (2) fulfill their unique mission needs. Currently, the degree to which such leadership is occurring, and the substance and status of consolidation and standardization efforts under way, are unclear. In our view, it is imperative that Congress be kept fully informed of the nature and progress of such efforts. 
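As a purely illustrative sketch of what the “common metadata or data definitions” initiative mentioned above might produce—field names, types, and values here are hypothetical assumptions, not drawn from any actual watch list or DHS design—a shared data dictionary could standardize a record structure that each agency’s system maps its own data into:

    # Hypothetical sketch of a shared watch list record definition.
    # Each field would be documented in a governmentwide data
    # dictionary so that exchanged records carry an agreed meaning.
    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class WatchListRecord:
        # Biographical data, the only types present in all 12 lists (fig. 6)
        full_name: str
        date_of_birth: date
        aliases: list = field(default_factory=list)
        passport_number: Optional[str] = None
        # Data present in only some lists, so optional in the shared schema
        criminal_history: list = field(default_factory=list)  # e.g., warrants
        biometric_ids: list = field(default_factory=list)     # e.g., fingerprint IDs
        source_list: str = ""  # which agency list contributed the record

    # Example of one agency publishing a record to the shared structure
    record = WatchListRecord(
        full_name="DOE, JOHN",
        date_of_birth=date(1970, 1, 1),
        aliases=["J. DOE"],
        source_list="EXAMPLE-AGENCY-LIST",
    )
    print(record)

With such a shared definition in place, the point-to-point translation interfaces described earlier could largely give way to a single mapping per agency, from its internal format into the common record.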
To promote better integration and sharing of watch lists, we recommend that DHS’s Secretary, in collaboration with the heads of the departments and agencies that have and use watch lists, lead an effort to consolidate and standardize the federal government’s watch list structures and policies. To determine and implement the appropriate level of watch list consolidation and standardization, we further recommend that this collaborative effort include 1. updating the watch list information provided in this report, as needed, and using this information to develop an architectural understanding of our nation’s current or “as is” watch list environment; 2. defining the requirements of our nation’s target or “to be” watch list architectural environment, including requirements that address any agency-unique needs that can be justified, such as national security issues and civil liberty protections; 3. basing the target architecture on achievement of the mission goals and objectives contained in the President’s homeland security strategy and on congressional direction, as well as on opportunities to leverage state and local government and private-sector information sources; 4. developing a near-term strategy for implementing the target architecture that provides for the integration of existing watch lists, as well as a longer-term strategy that provides for migrating to a more consolidated and standardized set of watch lists; 5. ensuring that these strategies provide for defining and adopting more standard policies and procedures for watch list sharing and addressing any legal issues affecting, and cultural barriers to, greater watch list sharing; and 6. developing and implementing the strategies within the context of the ongoing enterprise architecture efforts of each of the collaborating departments and agencies. In addition, we recommend that the Secretary report to Congress by September 30, 2003, and every 6 months thereafter, on the status and progress of these efforts, as well as on any legislative action needed to accomplish them. In commenting on a draft of this report, three of the six departments provided either written (Justice and State) or oral (DHS) comments. The remaining three departments (Defense, Transportation, and Treasury) said that they had reviewed the draft but had no comments. The Office of Homeland Security was also provided with a draft but said that it would not comment. The departments that provided comments generally agreed with our findings and recommendations. They also (1) provided technical comments, which we have incorporated as appropriate in the report, and (2) offered department-unique comments, which are summarized and evaluated below. In his oral comments, DHS’s Chief Information Officer stated that the department now has responsibility for watch list consolidation. Additionally, the Chief Information Officer generally described DHS’s plans for watch list consolidation and agreed that our recommendations were consistent with the steps he described. In light of DHS’s assumption of responsibility for watch list consolidation, we have modified our recommendations to direct them to the DHS Secretary. In its written comments, Justice stated that, in addition to cultural differences, there are other reasons why agencies do not share watch list information, such as national security and civil liberty requirements, and that these requirements complicate the consolidation of watch list information. 
Justice also stated that, while it agrees that there is a need to establish a common watch list architecture to facilitate sharing, this need should not impede short-term efforts to improve sharing. We agree with Justice’s first point, which is why our recommendations provide for ensuring that all relevant requirements, which would include pertinent national security and civil liberty protections, are taken into consideration in developing our nation’s watch list architectural environment. To make this more explicit, we have modified our recommendations to specifically recognize national security and civil liberty requirements. We also agree with Justice’s second point, and thus our recommendations also provide for pursuing short-term, cost-effective initiatives to improve watch list sharing while the architecture is being developed. (Justice’s comments are reprinted in app. II.) In its written comments, State said that our report makes a number of valuable points concerning the benefits of watch list consolidation, enterprise architecture, and information sharing. However, State also said that our report (1) attributed watch list differences solely to varying agency cultures, (2) seemed to advocate a “one size fits all” approach, and (3) often made the assumption that software and systems architecture differences necessarily obstruct information sharing. With respect to State’s first point, our report states clearly that watch list differences are attributable not only to varying cultural environments, but also to each agency’s unique mission needs and its legal and technical environments. Concerning State’s second point, our report does not advocate a “one size fits all” solution. Rather, our recommendation explicitly calls for DHS to lead a governmentwide effort to, among other things, determine the appropriate degree of watch list consolidation and standardization needed and to consider in this effort the differences in agencies’ missions and needs. Regarding State’s last point, our report does not state or assume that differences in software and system architecture categorically obstruct or preclude information sharing. Instead, we state that those differences requiring additional measures—such as building and maintaining unique system interfaces or using manual workarounds—introduce additional costs and reduce efficiency and effectiveness. (State’s comments are reprinted in app. III.) As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days from the date on the report. At that time, we will send copies of the report to other congressional committees. We will also send copies to the Directors of the Offices of Homeland Security and Management and Budget, and the Secretaries of the Departments of Defense, Homeland Security, Justice, State, Transportation, and the Treasury. Copies will also be made available at our Web site at www.gao.gov. Should you or your offices have questions on matters discussed in this report, please contact me at (202) 512-3439. I can also be reached by E-mail at hiter@gao.gov. An additional GAO contact and staff acknowledgments are listed in appendix V.
Our objectives were to identify (1) federal databases and systems that contain watch lists, the agencies that maintain and use these watch lists in protecting our nation’s borders, and the kinds of data these watch lists contain; (2) whether federal agencies’ sharing of watch list data is governed by policies and procedures; (3) whether watch lists are (a) being exchanged among federal agencies and between federal agencies and state, local, and private organizations and (b) supported by common system architectures (system hardware, software, and data characteristics); and (4) whether opportunities exist for consolidating watch lists. The scope of our work was based on the federal government’s agency structure before the formation of the Department of Homeland Security. We focused on the agencies that use or maintain watch lists in performing border security functions. We identified these departments and agencies through discussions with federal government officials knowledgeable about the U.S. border security mission area. The specific departments and agencies included in our scope were the U.S. National Central Bureau for Interpol; the Bureau of Intelligence and Research; the Air Force Office of Special Investigations; and the Transportation Security Administration. To address our objectives, we surveyed each of the agencies cited above, using a data collection instrument. To develop this instrument, we reviewed, among other things, past GAO and other reports on watch lists and on the border security process, along with relevant guidance on such topics as systems interoperability, enterprise architecture management, database management, and information sharing. We used this research to develop a series of questions designed to obtain and aggregate information necessary to answer our objectives. We then incorporated these questions into the questionnaire (see app. IV for a copy of the questionnaire). We pretested the questionnaire at two federal agencies, made adjustments based on the pretest, and then transmitted it to the agencies cited above on July 29, 2002. Responses from agencies were received from August 2002 through October 2002. We did not independently verify agency responses. However, we did contact agency officials when necessary to clarify their responses. Next, we compiled the agencies’ responses to determine the number of watch lists being used, confirm the universe of agencies that have lists, and determine the number of organizations that use the lists and the kinds of data the lists contain. We also analyzed the agencies’ policies and procedures governing watch list sharing. In addition, we reviewed the survey responses to determine the degree of sharing among federal, state, local, and private-sector entities, and we compared the extent of sharing with the sharing goals contained in the President’s homeland security strategy and the Homeland Security Act of 2002. Moreover, we aggregated the agencies’ descriptions of their watch list systems architectures and analyzed them to identify similarities and differences. We also analyzed the architectural components of the watch list systems and compared them with the standards required for systems to interoperate and share data efficiently and effectively. Finally, we analyzed the agencies’ responses on watch list consolidation, to identify whether there were opportunities for consolidating watch lists and, if so, what the benefits were of doing so.
Additionally, we reviewed the President’s homeland security strategy, homeland security legislation and agency budget requests, and other public documents to identify federal government efforts related to maintaining and sharing watch lists. We also attended conferences and other public events at which Office of Homeland Security officials spoke on homeland security enterprise architecture and watch list standardization and consolidation efforts. We attempted to meet with Office of Homeland Security officials, but they declined to meet with us. As a result, we submitted written questions to the Office of Homeland Security, but received no response. We conducted our work at the headquarters of the nine federal agencies identified above, in and around the Washington, D.C., metropolitan area, from July 2002 through March 2003, in accordance with generally accepted government auditing standards.
[Appendix IV reproduced the survey questionnaire transmitted to the nine agencies. The two-column form does not convert cleanly to text, so only its substance is summarized here. The questionnaire explained that GAO, an investigative agency of Congress, was studying federal watch lists and instructed agencies to answer its detailed parts for each watch list they developed, maintained, or used. For each list, it requested: the list’s name and purpose; the agency’s definition of a known or suspected domestic or international terrorist or criminal; whether the list is limited to terrorists or also covers others, such as criminals; whether it is maintained electronically, on paper, or both; the number of names on the list as of August 1, 2002; how names are added to and removed from the list, including the criteria used and the controls that ensure these procedures are consistently applied; how often the list is updated; the classification level of the data under Executive Order 12958, which governs how information related to national defense and foreign relations is protected against unauthorized disclosure; whether the list allows individuals with false identities or false documents to be detected; which data elements the list contains and which of these are shared with federal, state, and local agencies and with private-sector firms and associations; any watch lists received from other agencies, the mechanisms and frequency of receipt, whether data sharing agreements are in place, and whether all requested data are provided; the hardware, software, and network characteristics of the system on which the list resides, including whether the system is stand-alone or networked, whether it complies with open system standards, which fields can be searched, and whether it supports “fuzzy” searches that return near matches when an exact spelling is not known; the agency’s use of document type definitions (DTDs) or schemas (files that describe the structure of a document and define how markup tags should be interpreted) for requesting, responding to, and automatically updating watch list information; any metadata template for describing a terrorist and any data dictionary describing the data elements used, and whether these are shared with other agencies; and the security and data reliability controls in place, such as vulnerability assessments, intrusion detection, audit trails of access, and investigation of suspicious access.]
Gary Mountjoy, (202) 512-6367. In addition to the individual named above, Elizabeth Bernard, Neil Doherty, Joanne Fiorino, Will Holloway, Tonia Johnson, Anh Le, Kevin Tarmann, and Angela Watson made key contributions to this report.
| Terrorist and criminal watch list systems--sometimes referred to as watchout, lookout, target, or tip-off systems--are important tools in controlling and protecting our nation's borders. The events of September 11, 2001, and other incidents since then, have highlighted the need to share these watch lists. In light of the importance of border security, GAO was asked to identify federal databases and systems that contain watch lists, the agencies that maintain and use them in protecting our nation's borders, the kind of data they contain, whether federal agencies are sharing information from these lists with each other and with state and local governments and private organizations, the structural characteristics of those lists that are automated, and whether opportunities exist to consolidate these watch lists. Generally, the federal government's approach to using watch lists in performing its border security mission is decentralized and nonstandard, largely because these lists were developed in response to individual agencies' unique missions, including their respective legal, cultural, and systems environments. Specifically, nine federal agencies--which prior to the creation of the Department of Homeland Security (DHS) spanned the Departments of Defense, Justice, State, Transportation, and the Treasury--develop and maintain 12 watch lists. These lists include overlapping but not identical sets of data, and different policies and procedures govern whether and how these data are shared with others. As a general rule, this sharing is more likely to occur among federal agencies than between federal agencies and either state and local government agencies or private entities. Further, the extent to which such sharing is accomplished electronically is constrained by fundamental differences in the watch lists' systems architecture (that is, the hardware, software, network, and data characteristics of the systems). Two agencies identified opportunities to standardize and consolidate these lists, which GAO believes would improve information sharing. The President's homeland security strategy further recognizes the need to address the proliferation of these lists. While the Office of Homeland Security was reportedly pursuing consolidation as part of an effort to develop a border and transportation security blueprint, referred to as an enterprise architecture, the DHS Chief Information Officer told us that the department had recently taken responsibility for the blueprint. However, we were not provided enough information to evaluate these efforts.
In recent years, reservists have regularly been called on to augment the capabilities of the active-duty forces. The Army is increasingly relying on its reserve forces to provide assistance with military conflicts and peacekeeping missions. As of April 2003, approximately 148,000 reservists from the Army National Guard and the U.S. Army Reserve were mobilized to active duty positions. In addition, other reservists are serving throughout the world in peacekeeping missions. The involvement of reservists in military operations of all sizes, from small humanitarian missions to major theater wars, will likely continue under the military’s current war-fighting strategy and its peacetime support operations. The Army has designated some Army National Guard and U.S. Army Reserve units and individuals as early-deploying reservists to ensure that forces are available to respond rapidly to an unexpected event or for any other need. Usually, those designated as early-deploying reservists would be the first troops mobilized if two major ground wars were underway concurrently. The units and individual reservists designated as early-deploying reservists change as the missions or war plans change. The Army estimates that of its 560,000 reservists, approximately 90,000 either have been individually categorized as early-deploying reservists or are assigned to Army National Guard and U.S. Army Reserve units that have been designated as early-deploying units. The Army must comply with the following six statutory requirements that are designed to help ensure the medical and dental readiness of its early-deploying reservists. All reservists, including early-deployers, are required to have a physical examination every 5 years and to complete an annual certificate of physical condition. All early-deploying reservists are also required to have a biennial physical examination if over age 40, an annual medical screening, an annual dental screening, and dental treatment. Army regulations state that the 5- and 2-year physical examinations are designed to provide the information needed to identify health risks, suggest lifestyle modifications, and initiate treatment of illnesses. While the two examinations are similar, the biennial examination for early-deploying reservists over age 40 contains additional age-specific screenings such as a prostate examination, a prostate-specific antigen test, and a fasting lipid profile that includes testing for total cholesterol, low-density lipoproteins, and high-density lipoproteins. The Army pays for these examinations. The examinations are also used to assign early-deploying reservists a physical profile rating, ranging from P1 to P4, in six assessment areas: (a) Physical capacity, (b) Upper extremities, (c) Lower extremities, (d) Hearing-ears, (e) Vision-eyes, and (f) Psychiatric. (See app. I for the Army’s Physical Profile Rating Guide.) According to the Army, P1 represents a non-duty-limiting condition, meaning that the individual is fit for duty and possesses no physical or psychiatric impairments. P2 means a condition may exist; however, it is not duty-limiting. P3 or P4 means that the individual has a duty-limiting condition in one of the six assessment areas. P4 means the individual functions below the P3 level. A rating of either P3 or P4 puts the reservist in a nondeployable status or may result in the changing of the reservist’s job classification.
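The deployability rule embedded in these ratings can be stated precisely. The sketch below (our own illustration in Python, not Army software; the area names and data layout are hypothetical) encodes it: a reservist is deployable only if none of the six assessment areas is rated P3 or P4.

# Minimal sketch of the physical profile rule described above: P1 and P2
# are not duty-limiting; a P3 or P4 in any of the six assessment areas
# places the reservist in nondeployable status.
AREAS = ("physical_capacity", "upper_extremities", "lower_extremities",
         "hearing_ears", "vision_eyes", "psychiatric")

def is_deployable(profile: dict[str, int]) -> bool:
    """profile maps each assessment area to its rating, 1 (P1) through 4 (P4)."""
    return all(profile[area] <= 2 for area in AREAS)

# Example: a single P3 hearing rating is enough to make the reservist
# nondeployable, regardless of the other five areas.
reservist = dict.fromkeys(AREAS, 1) | {"hearing_ears": 3}
assert not is_deployable(reservist)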
Army regulations that implement the statutory certification requirement provide that all reservists—including early-deploying reservists—certify their physical condition annually on a two-page certification form. Army early-deploying reservists must report doctor or dentist visits since their last examination, describe current medical or dental problems, and disclose any medications they are currently taking. In addition, the Army is required to conduct an annual medical screening for all early-deploying reservists. According to Army regulations, the Army is to meet the annual medical screening requirement by reviewing the medical certificate required of each early-deploying reservist. Further, Army early-deploying reservists are required to undergo, at the Army’s expense, an annual dental examination. The Army is also required to provide and pay for the dental treatment needed to bring an early-deploying reservist’s dental status up to deployment standards—either dental class 1 or 2. Reservists in dental classes 3 and 4 are not deployable. Class 3 reservists could have dental emergencies in the next 12 months, and reservists in class 4 have not had the required annual dental examination. The Army has not consistently carried out the requirements that early-deploying reservists undergo physical examinations at 5- or 2-year intervals and the required annual dental examination. In addition, the Army has not required early-deploying reservists to complete the annual medical certificate of their health condition, which provides the basis for the required annual medical screening. Accordingly, the Army does not have sufficient health information on early-deploying reservists. Furthermore, the Army does not have the ability to maintain information from medical and dental records and annual medical certificates at the aggregate or individual level, and therefore does not know the overall health status of its early-deploying reservists. We found that the Army has not consistently met the statutory requirements to provide early-deploying reservists physical examinations at 5- or 2-year intervals. At the seven Army early-deploying reserve units we visited, about 66 percent of the medical records were available for our review. Based on our review of these records, 13 percent of the reservists did not have a current 5-year physical examination on file. Further, our review of the available records found that approximately 68 percent of early-deploying reservists over age 40 did not have a record of a current biennial examination. Army early-deploying reservists are required by statute to complete an annual medical certificate of their health status, and regulations require the Army to review the form to satisfy the annual screening requirement. In performing our review of the records on hand, we found that none of the units we visited required that their reservists complete the annual medical certificate, and consequently, none of them were available for review. Furthermore, Army officials stated that reservists at most other units have not filled out the certification form and that enforcement of this requirement was poor. The Army is also statutorily required to provide early-deploying reservists with an annual dental examination to establish whether reservists meet the dental standards for deployment. At the seven early-deploying units we visited, we found that about 49 percent of the reservists whose records were available for review did not have a record of a current dental examination.
The Army’s two automated information systems for monitoring reservists’ health do not maintain important medical and dental information for early-deploying reservists—including information on the early-deploying reservists’ overall health status, information from the annual medical certificate form, dental classifications, and the date of dental examinations. In one system, the Regional Level Application Software, the records provide information on the dates of the 5-year physical examination and the physical profile ratings. In the other system, the Medical Occupational Database System, the records provide information on HIV status, immunizations, and DNA specimens. Neither system allows the Army to review medical and dental information for entire units at an aggregate level. The Army is aware of the information shortcomings of these systems and acknowledges that having sufficient, accurate, and current information on the health status of reservists is critical for monitoring combat readiness. According to Army officials, in 2003 the Army plans to expand the Medical Occupational Database System to provide access to current, accurate, and relevant medical and dental information at the aggregate and individual level for all of its reservists—including early-deploying reservists. According to Army officials, this information will be readily available to the U.S. Army Reserve Command. Once available, the Army can use this information to determine which early-deploying reservists meet the Army’s health care standards and are ready for deployment. Medical experts recommend physical and dental examinations as an effective means of assessing health. For some people, the frequency and content of physical examinations vary according to the specific demands of their job. Because Army early-deploying reservists need to be healthy to fulfill their professional responsibilities, periodic examinations are useful for assessing whether they can perform their assigned duties. Furthermore, the estimated annual cost to conduct periodic examinations—about $140—is relatively modest compared to the thousands of dollars the Army spends for salaries and training of early-deploying reservists—an investment that may be lost if reservists cannot perform their assigned duties. Such information is also needed by VA to adjudicate disability claims and to provide health benefits. Physical and dental examinations are geared towards assessing and improving the overall health of the general population. The U.S. Preventive Services Task Force and many other medical organizations no longer recommend annual physical examinations for adults—preferring instead a more selective approach to detecting and preventing health problems. In 1996, the task force reported that while visits with primary care clinicians are important, performing the same interventions annually on all patients is not the most clinically effective approach to disease prevention. Consistent with its finding, the task force recommended that the frequency and content of periodic health examinations should be based on the unique health risks of individual patients. Today, many health associations and organizations are recommending periodic health examinations that incorporate age-specific screenings, such as cholesterol screenings for men (beginning at age 35) and women (beginning at age 45) every 5 years, and clinical breast examinations every 3 to 5 years for women between the ages of 19 and 39.
Further, oral health care experts emphasize the importance of regular 6- to 12-month dental examinations. Both the private and public sectors have established fixed schedules of physical examinations for certain occupations to help ensure that workers are healthy enough to meet the specific demands of their jobs. For example, the Federal Aviation Administration requires commercial pilots to undergo a physical examination once every 6 months. U.S. National Park Service personnel who perform physically demanding duties undergo a physical examination every other year if they are under age 40 and annually if they are over age 40. Additionally, guidelines published by the National Fire Protection Association recommend that firefighters have an annual physical examination regardless of age. In the case of Army early-deploying reservists, the goal of the physical and dental examinations is to help ensure that the reservists are fit enough to be deployed rapidly and perform their assigned jobs. Furthermore, the Army recognizes that some jobs are more demanding than others and require more frequent examinations. For example, the Army requires that aviators undergo a physical examination once a year, while marine divers and parachutists have physical examinations once every 3 years. While governing statutes and regulations require physical examinations at specific intervals, the Army has raised concerns about the appropriate frequency for them. In a 1999 report to the Congress, the Offices of the Assistant Secretaries of Defense for Health Affairs and Reserve Affairs stated that while there were no data to support the benefits of conducting periodic physical examinations, DOD was reluctant to recommend a change to the statutory requirements. The report stated that additional research needs to be undertaken to identify and develop a more cost-effective, focused health assessment tool for use in conducting physical exams for reservists—in order to ensure the medical readiness of reserve forces. However, as of February 2003, DOD had not conducted this research. For its early-deploying reservists, the Army conducts and pays for physical and dental examinations and selected dental treatments at military treatment facilities or pays civilian physicians and dentists to provide these services. The Army could not provide us with information on the cost to provide these services at military hospitals or clinics primarily because it does not have a cost accounting system that records or generates cost data for each patient. However, the Army was able to provide us with information on the amount it pays civilian providers for these examinations under the Federal Strategic Health Care Alliance program (FEDS_HEAL)—an alliance of private physicians and dentists and other physicians and dentists who work for VA and HHS’s Division of Federal Occupational Health. FEDS_HEAL is a program that allows Army early-deploying reservists to obtain required physical and dental examinations and dental treatment from local providers. Using FEDS_HEAL contract cost information, we estimate the average cost of the examinations to be about $140 per early-deploying reservist per year.
We developed the estimate over one 5-year period by calculating the annual cost for those early-deploying reservists requiring a physical examination once every 5 years, calculating the cost for those requiring a physical examination once every 2 years, and calculating the cost for those requiring an initial dental examination and subsequent yearly dental examinations. The FEDS_HEAL cost for each physical examination for those under 40 is about $291, and for those over 40 is about $370. The Army estimates the cost of annual dental examinations under the program to be about $80 for new patients and $40 for returning patients. The Army estimates that it would cost from $400 to $900 per reservist to bring those who need treatment from dental class 3 to dental class 2. For the Army, there is likely value in conducting periodic examinations because the average cost to provide physical and dental examinations per early-deploying reservist—about $140 annually over a 5-year period—is relatively low compared to the potential benefits associated with such examinations. These examinations could help protect the Army’s investment in its early-deploying reservists by increasing the likelihood that more reservists will be deployable. This likelihood is increased when the Army uses examinations to identify early-deploying reservists who do not meet the Army’s health standards and are thus not fit for duty. The Army can then intervene by treating, reassigning, or dismissing these reservists with duty-limiting conditions—before their mobilization and before the Army needs to rely on the reservists’ skills or occupations. Furthermore, by identifying duty-limiting conditions or the risks for developing them, periodic examinations give early-deploying reservists the opportunity to seek medical care for their conditions—prior to mobilization. Periodic examinations may provide another benefit to the Army. If the Army does not know the health condition of its early-deploying reservists, and if it expects some of them to be unfit and incapable of performing their duties, the Army may be required to maintain a larger number of reservists than it would otherwise need in order to fulfill its military and humanitarian missions. While data are not available to estimate these benefits, the benefit associated with reducing the number of reservists the Army needs to maintain for any given objective could be large enough to more than offset the cost of the examinations and treatments. The proportion of reservists whom the Army maintains but who cannot be deployed because of their health may be significant. For instance, according to a 1998 U.S. Army Medical Command study, a “significant number” of Army reservists could not be deployed for medical reasons during mobilization for the Persian Gulf War (1990-1991). Further, according to a study by the Tri-Service Center for Oral Health Studies at the Uniformed Services University of the Health Sciences, an estimated 25 percent of Army reservists who were mobilized in response to the events of September 11, 2001, were in dental class 3 and were thus undeployable. In fact, our analysis of the available current dental examinations at the seven early-deploying units showed a similar percentage of reservists—22 percent—who were in dental class 3. With each undeployable reservist, the Army loses, at least temporarily, a significant investment that is large compared to the cost of examining and treating these reservists.
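The arithmetic behind the roughly $140 average can be made explicit. The sketch below amortizes the FEDS_HEAL costs cited above over one 5-year period; the age mix used to blend the under-40 and over-40 rates is our own illustrative assumption, since the report does not state the weighting used.

# Amortized annual exam cost per early-deploying reservist, using the
# FEDS_HEAL figures cited above. The age mix is an illustrative assumption.
PHYS_UNDER_40 = 291.0                        # physical exam, once every 5 years
PHYS_OVER_40 = 370.0                         # physical exam, once every 2 years
DENTAL_NEW, DENTAL_RETURNING = 80.0, 40.0    # annual dental exam fees

def annual_cost(over_40: bool, years: int = 5) -> float:
    """Average yearly examination cost over one 5-year period."""
    physical = PHYS_OVER_40 / 2 if over_40 else PHYS_UNDER_40 / years
    dental = (DENTAL_NEW + DENTAL_RETURNING * (years - 1)) / years
    return physical + dental

under, over = annual_cost(False), annual_cost(True)    # about $106 and $233
SHARE_OVER_40 = 0.27                                   # assumed, for illustration
blended = (1 - SHARE_OVER_40) * under + SHARE_OVER_40 * over
print(f"about ${blended:.0f} per reservist per year")  # about $140

Even at the higher over-40 rate of roughly $233 per year, the examination cost remains small relative to the salary, training, and mobilization investment at risk when a reservist proves undeployable.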
The annual salary for an Army early-deploying reservist in fiscal year 2001 ranged from $2,200 to $19,000. The Army spends additional amounts to train and equip each reservist and, in some cases, provides allowances for subsistence and housing. Additionally, for each reservist it mobilizes, the Army spends about $800. If it does not examine all of its early-deploying reservists, the Army risks losing its investment because it will train, support, and mobilize reservists who might not be deployed because of their health. Both VBA and VHA need health assessment data obtained by the Army to adjudicate disability claims and provide medical care. In general, a reservist who is disabled while on active duty, or on inactive duty for training, is eligible for service-connected disability compensation, and can file a claim at one of VBA’s 57 regional offices. To provide such disability compensation, VBA needs to determine that each claimed disability exists, and that each was caused or aggravated by the veteran’s military service. The evidence needed to prove service connection includes records of service to identify when the veteran served and records of medical treatment provided while the veteran was in military service. More timely and accurate health information collection by the Army and the other military services can help VBA provide disabled reservists with more timely and accurate decisions on their claims for disability compensation. Complete and accurate health data can also help VHA provide medical care to reservists who become eligible for veterans benefits. Army reservists have been increasingly called upon to serve in a variety of operations, including peacekeeping missions and the current war on terrorism. Given this responsibility, periodic health examinations are important to help ensure that Army early-deploying reservists are fit for deployment and can be deployed rapidly to meet humanitarian and wartime needs. However, the Army has not fully complied with statutory requirements to assess and monitor the medical and dental status of early-deploying reservists. Consequently, the Army does not know how many of them can perform their assigned duties and are ready for deployment. The Army will realize benefits by fully complying with the statutory requirements. The information gained from periodic physical and dental examinations, coupled with age-specific screenings and information provided by early-deploying reservists on an annual basis in their medical certificates, will assist the Army in identifying potential duty-limiting medical and dental problems within its reserve forces. This information will help ensure that early-deploying reservists are ready for their deployment duties. Given the importance of maintaining a ready force, the benefits associated with the relatively low annual cost of about $140 per reservist to conduct these examinations outweigh the thousands of dollars spent in salary and training costs that are lost when an early-deploying reservist is not fit for duty. The Army’s planned expansion, in 2003, of an automated health care information system is critical for capturing the key medical and dental information needed to monitor the health status of early-deploying reservists.
Once collected, the Army will have additional information to conduct the research suggested by DOD’s Offices of Health Affairs and Reserve Affairs to determine the most effective approach, which could include the frequency of physical examinations, for determining whether early-deploying reservists are healthy, can perform their assigned duties, and can be rapidly deployed. While our work focused on the Army’s efforts to assess the health status of its early-deploying reservists, it also has implications for veterans. Implementing our recommendations that DOD comply with the statutory requirements, which DOD has agreed to, will also be of benefit to VA. VA’s ability to perform its missions to provide medical care to veterans and compensate them for their service-connected disabilities could be hampered if the Army’s medical surveillance system contains inadequate or incomplete information. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or other members of the subcommittee may have. For further information regarding this testimony, please contact Marjorie E. Kanof at (202) 512-7101. Michael T. Blair, Jr., Aditi S. Archer, Richard J. Wade, and Gregory D. Whitney also contributed to this statement.
Appendix I: Army Physical Profile Rating Guide. The guide defines each assessment area and the criteria for ratings P1 through P3; for every area, P4 means a functional level below P3.
Upper extremities (strength, range of motion, and general efficiency of upper arm, shoulder girdle, and upper back, including cervical and thoracic vertebrae). P1: No loss of digits or limitation of motion; no demonstrable abnormality; able to do hand-to-hand fighting. P2: Slightly limited mobility of joints, muscular weakness, or other musculoskeletal defects that do not prevent hand-to-hand fighting and do not disqualify for prolonged effort. P3: Defects or impairments that require significant restriction of use.
Lower extremities (strength, range of movement, and efficiency of feet, legs, lower back, and pelvic girdle). P1: No loss of digits or limitation of motion; no demonstrable abnormality; able to perform long marches, stand over long periods, and run. P2: Slightly limited mobility of joints, muscular weakness, or other musculoskeletal defects that do not prevent moderate marching, climbing, timed walking, or prolonged effort. P3: Defects or impairments that require significant restriction of use.
Hearing-ears (auditory sensitivity and organic disease of the ears). P1: Audiometer average level for each ear not more than 25 dB at 500, 1000, or 2000 Hz, with no individual level greater than 30 dB; not over 45 dB at 4000 Hz. P3: Audiometer average level for each ear at 500, 1000, or 2000 Hz not more than 30 dB, with no individual level greater than 35 dB at these frequencies, and level not more than 55 dB at 4000 Hz; or audiometer level 30 dB at 500 Hz, 25 dB at 1000 and 2000 Hz, and 35 dB at 4000 Hz in the better ear (the poorer ear may be deaf); or speech reception threshold in the best ear not greater than 30 dB HL, measured with or without a hearing aid; or chronic ear disease.
Vision-eyes (visual acuity and organic disease of the eyes and lids). P1: Uncorrected visual acuity 20/200, correctable to 20/20 in each eye. P2: Distant visual acuity correctable to not worse than 20/40 and 20/70, or 20/30 and 20/100, or 20/20 and 20/400. P3: Uncorrected distant visual acuity of any degree that is correctable to not less than 20/40 in the better eye.
Psychiatric (type, severity, and duration of the psychiatric symptoms or disorder existing at the time the profile is determined; amount of external precipitating stress; predisposition as determined by the basic personality makeup, intelligence, performance, and history of past psychiatric disorder; and impairment of functional capacity). P1: No psychiatric pathology; may have history of transient personality disorder. P2: May have history of recovery from an acute psychotic reaction due to external or toxic causes unrelated to alcohol or drug addiction. P3: Satisfactory remission from an acute psychotic or neurotic episode that permits utilization under specific conditions (assignment when outpatient psychiatric treatment is available or certain duties can be avoided).
Related GAO Products
Defense Health Care: Army Needs to Assess the Health Status of All Early-Deploying Reservists. GAO-03-437. Washington, D.C.: April 15, 2003.
Military Personnel: Preliminary Observations Related to Income, Benefits, and Employer Support for Reservists During Mobilizations. GAO-03-549T. Washington, D.C.: March 19, 2003.
Defense Health Care: Most Reservists Have Civilian Health Coverage but More Assistance Is Needed When TRICARE Is Used. GAO-02-829. Washington, D.C.: September 6, 2002.
Reserve Forces: DOD Actions Needed to Better Manage Relations between Reservists and Their Employers. GAO-02-608. Washington, D.C.: June 13, 2002.
Veterans’ Benefits: Despite Recent Improvements, Meeting Claims Processing Goals Will Be Challenging. GAO-02-645T. Washington, D.C.: April 26, 2002.
VA and Defense Health Care: Military Medical Surveillance Policies in Place, But Implementation Challenges Remain. GAO-02-478T. Washington, D.C.: February 27, 2002.
Reserve Forces: Cost, Funding, and Use of Army Reserve Components in Peacekeeping Operations. GAO/NSIAD-98-190R. Washington, D.C.: May 15, 1998.
Defense Health Program: Future Costs Are Likely to Be Greater than Estimated. GAO/NSIAD-97-83BR. Washington, D.C.: February 21, 1997.
Wartime Medical Care: DOD Is Addressing Capability Shortfalls, but Challenges Remain. GAO/NSIAD-96-224. Washington, D.C.: September 25, 1996.
Reserve Forces: DOD Policies Do Not Ensure That Personnel Meet Medical and Physical Fitness Standards. GAO/NSIAD-94-36. Washington, D.C.: March 23, 1994.
Operation Desert Storm: Problems With Air Force Medical Readiness. Washington, D.C.: December 30, 1993.
Defense Health Care: Physical Exams and Dental Care Following the Persian Gulf War. GAO/HRD-93-5. Washington, D.C.: October 15, 1992.
| During the 1990-91 Persian Gulf War, health problems prevented the deployment of a significant number of Army reservists. As required by the National Defense Authorization Act for Fiscal Year 2002, GAO reported on the Army's efforts to assess the health status of its early-deploying reservists (Defense Health Care: Army Needs to Assess the Health Status of All Early-Deploying Reservists (GAO-03-437, Apr. 15, 2003)).
GAO was asked to testify on its findings on the Army's health status assessment efforts and the implications of those assessments for the Department of Veterans Affairs (VA). Specifically, GAO was asked to determine if the Army is collecting and maintaining information on reservists' health and review the value and advisability of providing examinations. For its report, GAO reviewed medical records at seven Army early-deploying reserve units to determine the number of required examinations that have been conducted and obtained expert opinion on the value of periodic examinations. The Army has not consistently carried out the statutory requirements for monitoring the health and dental status of its early-deploying reservists. As a result, the Army does not have sufficient information to know how many reservists can perform their assigned duties and are ready for deployment. At reserve units GAO visited, approximately 66 percent of the medical records were available for review. At those locations, GAO found that about 13 percent of the 5-year physical examinations had not been performed, about 49 percent of early-deploying reservists lacked current dental examinations, and none of the annual medical certificates required of reservists were completed by them and reviewed by the units. Medical experts recommend periodic physical and dental examinations as an effective means of assessing health. Army early-deploying reservists need to be healthy to meet the specific demands of their occupations; examinations and other health screenings can be used to identify those who cannot perform their assigned duties. Without adequate examinations, the Army may train, support, and mobilize reservists who are unfit for duty. DOD concurred with GAO's recommendations to comply with statutory requirements to conduct medical and dental examinations and provide dental treatment. VA's ability to perform its missions to provide medical care to veterans and compensate them for their service-connected disabilities could be hampered if the Army's medical surveillance system contains inadequate or incomplete information.
An EMP is a burst of high-power electromagnetic radiation resulting from the detonation of nuclear and non-nuclear devices that are designed to intentionally disrupt or destroy electronic equipment. EMP events may be further categorized into a number of different types, based on their specific source of initiation. The threat focused on primarily by the EMP Commission is the high-altitude EMP (HEMP). A HEMP event is caused by the detonation of a nuclear device at a high altitude, about 40 to 400 kilometers above the Earth. A HEMP attack is not intended to cause direct physical impacts at the Earth’s surface, such as injury or damage directly from heat or blast; instead, the burst interacts with the atmosphere to create an intense electromagnetic energy field that can overload computer circuitry and could cause significant damage to critical electrical infrastructure. In addition to manmade EMPs, naturally occurring solar weather events can also cause related electromagnetic impacts that can adversely affect components of the commercial electric grid. This type of event is commonly referred to as a geomagnetic disturbance (GMD). In 1989, a GMD caused wide-scale impacts on the Hydro-Quebec power system in Canada, causing the electric grid to collapse within 92 seconds and leaving six million customers without power for 9 hours. As noted in Presidential Policy Directive 21 (PPD-21), energy sector infrastructure is uniquely critical due to the enabling functions it provides to other critical infrastructure sectors. Given this interdependency, an EMP or major GMD event that disrupts the electric grid could also result in potential cascading impacts on fuel distribution, transportation systems, food and water supplies, and communications and equipment for emergency services, as well as other communication systems that utilize the civilian infrastructure. PPD-21 also recognizes that DHS has numerous responsibilities to protect critical infrastructure, including such things as analyzing threats to, vulnerabilities of, and potential consequences from all hazards on critical infrastructure. Within DHS, the National Protection and Programs Directorate (NPPD) is responsible for working with public and industry infrastructure partners and leads the coordinated national effort to mitigate risk to the nation’s infrastructure through the development and implementation of the infrastructure protection program. NPPD has two principal offices with responsibilities to facilitate protection of critical infrastructure that could be at risk from EMP and GMD events—the Office of Infrastructure Protection (IP) and the Office of Cyber Security and Communications (CS&C). In addition, DHS’s Federal Emergency Management Agency (FEMA) and Science and Technology Directorate (S&T) have roles related to addressing potential impacts to the electric grid, which could include EMP and GMD threats. DOE also has a significant role as the sector-specific agency for the energy sector, which includes critical infrastructure and key resources related to electricity. For example, DOE is responsible for developing an Energy Sector Specific Plan—in collaboration with other stakeholders, including DHS—that applies the NIPP risk management model to critical infrastructure and key resources within the sector.
Within DOE, the Office of Electricity Delivery and Energy Reliability leads national efforts to increase the security and reliability of the energy infrastructure and facilitate recovery from disruptions to the energy supply. DOE national laboratories also provide research support and technical expertise to federal and industry stakeholders regarding EMP and GMD impacts. Other principal federal agencies working to address the threat of EMP and GMD include the Department of Defense (DOD) and the Federal Energy Regulatory Commission (FERC), as well as the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA). Electrical infrastructure is primarily operated by private industry, which owns approximately 85 percent of the nation’s critical electrical infrastructure. Industry entities are represented, in part, through membership in industry associations such as the American Public Power Association and the Edison Electric Institute. The North American Electric Reliability Corporation (NERC) also serves as the delegated authority to regulate the protection and improvement of the reliability and security of the electrical infrastructure. As of July 2015, DHS reported taking several actions that could help address electromagnetic threats to the electric grid, but these efforts were conducted independently of the 2008 EMP Commission recommendations. Our preliminary analysis of DHS’s actions indicates that they generally fell under four categories of effort: (1) developing reports, (2) identifying mitigation efforts, (3) developing strategies and plans, and (4) conducting training exercises. Since 2008, DHS has produced three reports that specifically address electromagnetic threats to the electric grid. Below is a summary of each report. Electromagnetic Pulse Impacts on Extra High Voltage Power Transformers. This 2010 report analyzed the potential impact of an EMP on extra high voltage transformers—focusing primarily on transformer equipment designs and identifying specific mitigation efforts such as blocking devices that minimize the impact of geomagnetically induced currents (GIC) on the electric grid. The report concluded that the similarity of EMP effects, regardless of source, indicates that geomagnetic storms provide a useful basis for transformer impact analysis and that selective installation of blocking devices would minimize the impacts of GIC on transformers, among other findings. Impacts of Severe Space Weather on the Electric Grid. This 2011 report assessed the impacts of space weather on the electric grid, seeking to understand how previous solar storms have affected some power grids, and what cost-effective mitigation efforts are available to protect the electric grid, among other topics. Some of the key findings and recommendations include the need for a rigorous risk assessment to determine how plausible a worst-case scenario may be and additional research to better understand how transformers may be impacted by electromagnetic threats. This report also recommended installation of blocking devices to minimize the impacts of GIC. Sector Resilience Report: Electric Power Delivery. This 2014 report summarizes an analysis of key electric power dependencies and interdependencies, such as communications, transportation, and other lifeline infrastructure systems.
The report included an assessment of, and best practices for, improving infrastructure resilience, such as modeling to identify potential vulnerabilities, conducting cost-benefit analyses of alternative, technology-based options, and installing protective measures and hardening at-risk equipment, among others. DHS identified two specific efforts implemented since 2008 that could help to mitigate electromagnetic impacts to the electric grid: (1) the Recovery Transformer Project (RecX) and (2) the Cyber Emergency Response Team. RecX. In 2012, S&T partnered with industry to develop a prototype transformer that could significantly reduce, from several months to less than one week, the time needed to transport, install, and energize a transformer to aid recovery from power outages associated with transformer failures. S&T, along with industry partners, demonstrated the RecX prototype for 2.5 years, ending in September 2014. DHS reported that RecX proved to be successful in an operational environment and has the capacity to reduce the impact of power outages. Cyber Emergency Response Team. CS&C operates the Industrial Control Systems-Cyber Emergency Response Team to assist critical infrastructure owners in the 16 sectors, including the energy sector, in improving the overall cybersecurity posture of their control systems. Industrial control systems are among the types of critical electrical infrastructure that could be affected in the event of an EMP attack. DHS has taken actions to support the development of two key strategies and plans that could help to address electromagnetic threats: (1) the Power Outage Incident Annex and (2) the National Space Weather Strategy. Power Outage Incident Annex. In 2014, FEMA began developing a Power Outage Incident Annex (incident annex) to provide incident-specific information, which supplements the National Response Framework. According to FEMA officials, the incident annex will describe the process and organizational constructs that the federal government will use to respond to and recover from loss of power resulting from deliberate acts of terrorism or natural disasters. Among other tasks, the incident annex is designed to identify key federal government capabilities and resources, prioritize core capabilities, and outline response and recovery resource requirements. FEMA officials reported that the incident annex is scheduled to be completed by October 2015. National Space Weather Strategy. In collaboration with the White House Office of Science and Technology Policy and NOAA, DHS has been working since 2014 to help develop a National Space Weather Strategy. As a co-chair of the Space Weather Operations, Research and Mitigation Task Force, DHS is in the process of developing a strategy to achieve several goals, including efforts to establish benchmarks for space weather events, improve protection and mitigation efforts, and improve assessment, modeling, and prediction of impacts on critical infrastructure, among other goals. According to officials at S&T, a draft of the National Space Weather Strategy is currently being updated to incorporate stakeholder comments and is scheduled to be completed in September 2015. DHS has also conducted two training exercises that could help address the potential impact of power outages caused by electromagnetic events: GridEx II and Eagle Horizon. GridEx II.
In November 2013, DHS, along with the Federal Bureau of Investigation, DOE, and other relevant government agencies, participated in an industry-wide exercise assessing the readiness of the electricity industry to respond to a physical or cyber attack on the bulk power system. The key goals of GridEx II were to review existing command, control, and communication plans and tools; incorporate lessons learned from a previous exercise; and identify potential improvements in cyber and physical security plans and programs. Upon completing the exercise, participants identified key lessons learned, including the need for enhanced information sharing and for clarification of roles and responsibilities during a physical or cyber attack. Eagle Horizon. Since 2004, FEMA has conducted a mandatory, annual continuity exercise for all federal executive branch departments and agencies to ensure the preservation and continuing performance of essential functions. Key objectives of the training exercise include assessing the implementation of continuity plans, demonstrating communication capabilities, and examining broader national continuity capabilities with state, local, and private sector partners. For our ongoing review, DHS did not identify its actions as specifically responsive to the EMP Commission's recommendations; nonetheless, some of the actions DHS has taken since 2008 could help to mitigate some electromagnetic impacts to the electric grid. For example, the three identified reports provide some insights on how the electric grid may be affected by electromagnetic threats. Additionally, the RecX project provided a functional prototype that may facilitate industry efforts to further develop more mobile transformers and assist with recovery efforts in the event of an electromagnetic attack on the electric grid. Similarly, DHS planning efforts to develop the Power Outage Incident Annex and the National Space Weather Strategy are also steps that could help to mitigate the negative effects of an electromagnetic threat to the electric grid by improving critical planning and response efforts. While DHS has taken several positive steps to address electromagnetic threats to the electric grid since the EMP Commission issued its recommendations in 2008, our preliminary analysis indicates that these actions may fall short of the expectations for DHS regarding its overall responsibilities to oversee and coordinate national efforts to protect critical electrical infrastructure, consistent with PPD-21 and the NIPP. For example, DHS's efforts to clearly identify agency roles and responsibilities have to date been limited. Specifically, DHS has had difficulty identifying the relevant DHS components, officials, or ongoing internal DHS activities with an EMP nexus. For example, DHS officials were unable to determine internally which component would serve as the lead—S&T or NPPD—in addressing EMP threats. In addition, NPPD has not yet identified its specific roles and activities in addressing electromagnetic threats even though the DHS Office of Policy has identified it as the proposed risk analysis "owner" relative to space weather threats. We recognize that DHS does not have a statutory obligation to address the specific recommendations of the EMP Commission and that many of these recommendations were also directed to DOE.
Nevertheless, we believe that implementation of them could help mitigate electromagnetic impacts to the electric grid, such as helping to assure the protection of high-value transmission assets. Moreover, PPD-21 articulates DHS's roles and responsibilities to safeguard the nation's critical infrastructure, which are consistent with such recommendations. For example, PPD-21 states that DHS, in carrying out its responsibilities under the Homeland Security Act of 2002, as amended, is to, among other things, evaluate national capabilities, opportunities, and challenges in protecting critical infrastructure; analyze threats to, vulnerabilities of, and potential consequences from all hazards on critical infrastructure; identify security and resilience functions that are necessary for effective stakeholder engagement with all critical infrastructure sectors; integrate and coordinate federal cross-sector security and resilience activities; and identify and analyze key interdependencies among critical infrastructure sectors. Moreover, PPD-21 calls for DHS, in updating the NIPP, to specifically consider sector dependencies on energy and communications systems and to identify pre-event and mitigation measures or alternate capabilities during disruptions to those systems. To date, our preliminary analysis suggests that DHS has not fully addressed some key responsibilities related to effectively preparing for and responding to electromagnetic threats to the electric grid, in conjunction with DOE as the sector-specific agency for the energy sector, which is responsible for critical electrical infrastructure. Specifically, DHS did not identify any efforts it conducted to support the identification of key electrical infrastructure assets or to assess cross-sector dependencies on these assets, for which DHS would be expected to play a key role. According to officials within NPPD and the DHS Office of Policy, factors such as competing priorities and a focus on all hazards may contribute to the limited efforts taken by DHS to specifically address electromagnetic threats. We will continue to assess the extent to which DHS's efforts align with the EMP Commission recommendations, as well as the extent to which DHS's current and planned actions align with its own risk management framework, as identified in the NIPP, as we complete our work. We will report our final results later this year. Our preliminary analysis indicates that since the EMP Commission issued its recommendations in 2008, DHS has coordinated with federal and industry stakeholders to address some, but not all, risks to the electric grid. Specifically, DHS has not fully coordinated with stakeholders in certain areas, such as identifying critical assets or collecting information necessary to assess electromagnetic risks. Our preliminary work has identified eight projects in which DHS coordinated with other federal agencies or industry to help protect the electric grid. These projects encompass a range of different protective efforts, including the development of plans to address long-term power outages, participation in exercises, and research and development activities that address the resiliency of electrical infrastructure. (See Appendix II for a list of projects we identified.) Four of the eight projects we identified were initiated within the past 2 years, and three specifically address the risks associated with an EMP or GMD event.
The three EMP- or GMD-related projects include (1) participation in a White House Task Force to support development of an interagency space weather action plan; (2) collaboration with NASA to develop precise, localized forecasts that can help utilities better respond to solar weather events; and (3) development of EMP protection guidelines for critical equipment, facilities, and communications/data centers. In addition to the specific projects identified above, DHS also coordinates with sector stakeholders through the Energy Sector Government Coordinating Council (EGCC)—which it co-chairs with DOE—and the Electricity Subsector Coordinating Council (ESCC) through the Critical Infrastructure Partnership Advisory Council. While federal officials generally indicated that EMP and GMD issues have been discussed via these groups in recent years, they noted that the EMP threat has not been an area of particular focus. Although DHS participation in the identified projects is a positive step to help mitigate some potential impacts of electromagnetic threats, our preliminary work suggests that DHS has not fully coordinated with stakeholders in other areas to help facilitate EMP and GMD protective efforts. Specifically, our preliminary analysis indicates that DHS has not fully coordinated with stakeholders to address electromagnetic threats to the electric grid in the following areas: Providing threat information. DHS has not identified any efforts to specifically provide EMP-related threat information to industry stakeholders. Industry officials we spoke with generally stated that they do not have sufficient threat information to determine the extent to which specific actions should be taken to mitigate the effects of an EMP event. Whereas industry officials reported having a greater understanding of the potential likelihood of a major GMD caused by solar weather, they noted that applicable EMP threat briefings by DOD or DHS could help them better justify to their management or stockholders the level of investment required to take protective actions. According to the Quadrennial Energy Review, incomplete or ambiguous threat information may lead to inconsistency in physical security among grid owners, inefficient spending on security measures, or deployment of security measures against the wrong threat. This concern generally aligns with our previous work related to cyber threats, in which we reported that federal partners' efforts to share information did not consistently meet industry's expectations, in part because of restrictions on the threat information that can be shared with industry partners. DHS generally concurred with our prior recommendations directed at strengthening its partnership and information-sharing efforts (GAO-14-464T) and has since taken steps to enhance its information-sharing activities, including granting security clearances and establishing a secure mechanism to share cyber threat information. We will continue to assess DHS's actions regarding providing threat information on EMP as part of our ongoing work. Identifying key infrastructure assets. Our preliminary analysis indicates that DHS and DOE have not taken action to identify the most critical substations and transformers on the electric grid. According to the NIPP risk management framework, such information is important to better understand system dependencies and cascading impacts, as well as to help determine priorities for collecting additional information on specific asset vulnerabilities or potential mitigation actions.
As part of our ongoing work, we will continue to assess actions by DHS and other federal agencies regarding the identification of key infrastructure assets. Collecting risk information. DHS has not fully leveraged existing programs or used collaboration opportunities with federal partners to collect additional vulnerability and consequence information related to potential impacts to the electric grid. For example, DHS-IP has not fully leveraged the Infrastructure Survey Tool and the Regional Resiliency Assessment Program (RRAP) to help collect additional information on infrastructure vulnerabilities and impacts related to electromagnetic threats. As we have concluded previously, coordination with other federal partners may also help ensure an integrated approach to vulnerability assessment activities. For example, DHS has also not fully leveraged other agency efforts, such as DOD's Defense Critical Infrastructure Protection program, which could provide useful information about potential consequences of electric grid failure. According to the NIPP, to assess risk effectively, critical infrastructure partners—including owners and operators, sector councils, and government agencies—need timely, reliable, and actionable information regarding threats, vulnerabilities, and consequences. As part of our ongoing work, we will continue to assess actions by DHS and other federal agencies regarding the collection of applicable risk information. Engaging with industry to identify research priorities and funding mechanisms. Enhanced collaboration among federal and industry partners is critical to help identify and address key research gaps and priorities and to leverage available funding mechanisms. Our preliminary analysis identified two areas—assessing transformer impacts and development of mitigation tools—where DHS has not fully pursued opportunities to collaborate with federal and industry stakeholders on research, testing, and identifying funding sources that could help facilitate efforts to address electromagnetic threats to the electric grid. With respect to transformer impacts, industry and government officials identified the need for additional modeling and assessment as the most critical research gap. For example, the 2012 NERC GMD Task Force found that modeling of the effects of GIC flows on transformers during a GMD event is not sufficiently developed. Stakeholders also noted that additional action is needed to evaluate and test equipment that could help mitigate electromagnetic impacts to key infrastructure assets. Specifically, stakeholders identified that there are limited sites available for large-scale testing, and opportunities may exist to further leverage DOE research laboratories and other federal resources, including potential funding mechanisms. In our ongoing review, we will continue to evaluate federal and industry actions to determine where specific coordination efforts could be improved, and we will report the final results later this year. Chairman Johnson, Ranking Member Carper, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. The EMP Commission recommendations relevant to DHS and the electric grid include the following:
1. Support research to understand infrastructure interdependencies and interactions, along with the effects of various EMP attack scenarios. In particular, the Commission recommended that such research include a strong component of interdependency modeling.
Funding could be directed through a number of avenues, including the Department of Homeland Security (DHS) and the National Science Foundation.
2. Expand activities to address the vulnerability of Supervisory Control and Data Acquisition (SCADA) systems to other forms of electronic assault, such as EMP.
3. It is vital that DHS, as early as practicable, make clear its authority and responsibility to respond to an EMP attack and delineate the responsibilities and functioning interfaces with all other governmental institutions with individual jurisdictions over the broad and diverse electric power system. This is necessary for private industry and individuals to act to carry out the necessary protections assigned to them and to sort out liability and funding responsibility.
4. DHS particularly needs to interact with the Federal Energy Regulatory Commission (FERC), the North American Electric Reliability Corporation (NERC), state regulatory bodies, other governmental institutions at all levels, and industry in defining liability and funding relative to private and government facilities, such as independent power plants, to contribute their capability in a time of national need, yet not interfere with market creation and operation to the maximum extent practical.
5. DHS must establish the methods and systems that allow it to know, on a continuous basis, the state of the infrastructure, its topology, and key elements. Testing standards and measurable improvement metrics should be defined as early as possible and kept up to date.
6. Working closely with industry and private institutions, DHS should provide for the necessary capability to control the system in order to minimize self-destruction in the event of an EMP attack and to recover as rapidly and effectively as possible.
For questions about this statement, please contact Chris Currie at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Dawn Hoff (Assistant Director), Chuck Bausell, Kendall Childers, Josh Diosomito, Ryan Lambert, Tom Lombardi, and John Rastler. Additional support was provided by Andrew Curry, Katherine Davis, Linda Miller, and Steven Putansu. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The United States $1 Coin Act of 1997 authorized the new dollar coin to replace the Susan B. Anthony dollar coin, which began production in 1979. Even though the Anthony coin was never widely circulated, it became clear by 1997 that the government's supply of Anthony coins would soon be exhausted. In addition to giving the Mint authority to develop a new dollar coin, the act also specified that the coin be golden in color and have a distinctive edge and tactile and visual features to make it easier to distinguish from the quarter-dollar coin. To ensure that the new dollar coin would be recognized by vending machines and other coin-operated equipment designed for the Anthony dollar coin, the new dollar coin is the same size as, and has an electromagnetic signature similar to, the Anthony dollar coin. The $1 Coin Act authorized the Secretary of the Treasury, in consultation with Congress, to select the design of the new coin. In May 1998, the Secretary established a Dollar Coin Advisory Committee to consider alternatives and recommend a design concept for the obverse (heads) side of the coin. The final design selected was an artist's rendition of Sacagawea, a Shoshone interpreter who assisted the Lewis and Clark expedition of 1804-06 to the Pacific Ocean. The act also required the Secretary to create a marketing program to promote the use of the new dollar coin by commercial enterprises; mass transit authorities; and federal, state, and local government agencies. The Mint marketing program had three major components: research to identify market opportunities, a national public awareness and education program that included a national advertising campaign, and a business marketing program designed to increase commercial use of the new dollar coin in targeted sectors. According to the Mint, the first shipments of the new dollars were sent to the Federal Reserve on January 18, 2000, and the Federal Reserve sent shipments to financial institutions beginning January 26, 2000. The Mint also shipped new dollar coins directly to Wal-Mart stores to support a large, nationwide promotion of the coin that began on January 30, 2000. While authorizing the production of a new dollar coin, the $1 Coin Act also provided that the dollar note should not be removed from circulation on the basis of provisions in the act. In authorizing the circulation of both the dollar note and dollar coin, the act did not establish a goal for the number of new dollar coins or a target level of dollar coin circulation relative to the dollar note. The act also required the Secretary to conduct a study on the progress of the new dollar coin marketing program and submit a report to Congress on the results of the study no later than March 31, 2001. The Mint submitted its report to Congress on March 30, 2001. In reports accompanying the 2002 Treasury and General Government Appropriations Bill, the Senate and House Committees on Appropriations expressed concern that the Mint's 2001 report to Congress did not adequately describe the nature and extent to which the new dollar coin was being used in commerce. The House report directed the Mint to submit a new report by March 31, 2002. In addition, the Senate report accompanying the 2002 Treasury and General Government Appropriations Bill expressed concern that it had not received information on the contracts and agreements secured between the Mint and nongovernment entities and public relations firms mentioned in the Mint's March 30, 2001, report.
The Mint submitted its second report on March 29, 2002. A Senate committee report and the Conference Report accompanying the 2002 Treasury and General Government Appropriations Bill further directed the Mint to submit a marketing plan to the Appropriations Committees and stipulated that the plan must be approved by the committees before the Mint could draw additional funds from the Mint Public Enterprise Fund to promote the new dollar coin. The Mint submitted its plan, the "Golden Dollar Coin Marketing Plan for Congress," on April 24, 2002. In March 2002, coins of all denominations made up 5 percent, or $32.1 billion, of the $642 billion in currency and coins in circulation. The demand for coins from businesses and the general public fluctuates, and the Mint and the Federal Reserve monitor several factors, such as economic growth, coin collection activity, and Reserve Bank coin inventories, to determine the number of coins that will be produced and shipped to the Federal Reserve. The Mint receives orders for coins from Federal Reserve Banks on a monthly basis and normally ships coins directly to Reserve Bank offices. The Federal Reserve provides coins to over 11,000 of the 20,000 U.S. depository institutions, such as banks, savings and loans, and credit unions. Smaller banks that do not order their cash and coins directly from the Federal Reserve obtain cash services through many of the larger banks. In addition to Federal Reserve offices, Reserve Banks use over 100 coin terminals, generally operated by armored carriers, to store and distribute coins. Besides functioning as Federal Reserve coin terminal operators, the armored carriers wrap and deliver coins for a fee to banks and retail customers to meet public demand. Reserve Banks normally fill coin orders from banks by first paying out previously circulated coin until this inventory is depleted and then by using new coin inventories to meet demand. To support the introduction and promotion of the new dollar coin, the Federal Reserve departed from its normal policy, holding all previously circulated Anthony dollar coins received by Reserve Banks and filling orders only with new dollar coins. However, in January 2002, Reserve Banks returned to their normal practice of filling orders with previously circulated coins. Nevertheless, Reserve Banks will continue filling requests for new dollar coins until their inventories of new dollar coins are depleted. On the basis of the public demand for the dollar coin, the Federal Reserve estimated that, at the end of April 2002, it had over a 1-year supply of dollar coins. Because the older Anthony and new dollar coins have a similar electromagnetic signature, and neither the Reserve Banks nor armored carriers have equipment to separate them, the supply of circulated coins consists largely of commingled Anthony and new dollar coins. The Federal Reserve estimated that, as of April 2002, 70 percent of the dollar coin inventory was commingled Anthony and new dollar coins and about 30 percent was new dollar coins. In its response to a March 2002 Treasury Office of Inspector General report, the Mint said it would temporarily suspend production of the new dollar coin on March 31, 2002, and reevaluate the need for producing coins for general circulation in the first quarter of fiscal year 2003.
In our May 1990 report on proposals to introduce a new dollar coin in the United States, we noted that the government did not successfully manage the introduction of the Anthony dollar coin because the dollar note was not simultaneously eliminated, the coin too closely resembled the quarter, and the coin was not effectively promoted. We identified several key ingredients for a successful conversion, including a reasonable transition period, a well-designed dollar coin, a public awareness campaign, support from the administration and Congress, and withdrawal of the dollar note from circulation. We estimated in April 2000 that replacing the dollar note with a coin would save the government an average of $500 million a year, because coins last much longer than notes and cost the government less to distribute. The new dollar coin is profitable on a per-unit basis: while it costs the Mint about $0.12 to produce each coin, the government receives $1.00 of spending power per coin, leaving a margin of $0.88 per coin. The Mint spent at least $67.1 million to promote the new dollar coin from 1998 to 2001, including $62.3 million for four contracts involved in creating the marketing program and advertisements. Of the remaining $4.8 million, the Mint spent $0.4 million to conduct public relations events and programs publicizing the new dollar coin's launch, which distributed 1,251,000 coins; $4.4 million for 23 promotion partnerships with banking, entertainment, retail, grocery, and restaurant chains, which distributed an estimated 132 million dollar coins; and $36,000 to conduct promotional events with transit systems, which distributed 36,000 coins. Most of the $62.3 million in contracts for creating the marketing program and advertisements was used for a $40.5 million national advertising campaign featuring George Washington that was designed to build public awareness, generate acceptance, and encourage the new dollar coin's use. The Mint also worked with contractors to stimulate the new dollar coin's use in state and local government operations and used its own staff for marketing activities in federal government facilities. However, the Mint did not track the costs of using Mint staff for these efforts. Though initial public awareness generated by the advertising was strong, the new dollar coin, like the Anthony dollar coin, has failed to achieve widespread use. Federal Reserve data show a net payout of 558 million new dollar coins in 2000, the year the dollar coin was introduced. But, in 2001, demand and public interest in collecting the new dollar coin dropped, and the net payout decreased by 65 percent to 194 million coins and remained at lower levels in the first half of 2002. In May 2002, the Federal Reserve estimated an annualized figure of $120 million in new dollar coin net payout for 2002. The Mint has estimated that people use the dollar coin in 4 percent of dollar transactions, but Mint data from July 2001 show the figure to be about 1 percent. To create and execute the new dollar coin marketing program, the Mint contracted with outside firms for the three major components of the program: research, business marketing, and a public awareness campaign. The research component, designed to help identify target markets for the new dollar coin before its January 2000 launch, was conducted under a $1.5 million contract with Marketbridge, a marketing services company.
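Before turning to the research component, a brief arithmetic sketch of the cost figures above may be useful. Every input is a number reported in this document; the derived per-coin costs are illustrative calculations, not Mint-reported figures:

\text{seigniorage margin per coin} = \$1.00 - \$0.12 = \$0.88

\text{advertising share of contracts} = \frac{\$40.5\text{ million}}{\$62.3\text{ million}} \approx 65\%\text{, i.e., about two-thirds}

\text{launch events} = \frac{\$413{,}500}{1{,}251{,}000\text{ coins}} \approx \$0.33\text{ per coin distributed}

\text{promotion partnerships} = \frac{\$4.4\text{ million}}{132\text{ million coins}} \approx \$0.03\text{ per coin, or about 30 coins per marketing dollar}

\text{transit events} = \frac{\$36{,}000}{36{,}000\text{ coins}} = \$1.00\text{ per coin distributed}

By these measures, the partnerships were roughly 10 times as cost-effective per coin distributed as the launch events, and each transit coin cost as much to place as its face value.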
To provide the Mint with the necessary market research data, Marketbridge first analyzed existing Anthony dollar coin use in various industry sectors. Marketbridge also analyzed each industry for potential new dollar coin use by looking at several factors, such as the size of the industry, the average transaction size, and the current coin equipment capability in that industry. Using this market analysis, Marketbridge determined that certain industry sectors, such as food and drink vending, postal machines, transit systems, and car washes, had the highest potential for new dollar coin use. Table 1 provides information on the Mint's marketing program contractors. The Mint contracted with Double Eagle in April 1999 to perform business marketing activities that concentrated on outreach to businesses in certain industry sectors to increase the commercial use of the coin. Double Eagle focused its marketing efforts on businesses with a high potential for using the coin. To persuade these businesses, such as the food and drink vending, transit, postal, car wash, and retail industries, to use the new dollar coin, Double Eagle conducted various business marketing activities, including personal sales visits and telephone calls to decision-makers, and attended conventions and meetings. The Double Eagle contract totaled $8 million. In June 2000, the Mint determined that it was not satisfied with Double Eagle's progress and terminated the contract. In October 2000, the Mint contracted with Fleishman Hillard for $4 million to take over the responsibilities for business marketing. The Mint had also secured the services of the Fleishman Hillard communications firm in May 1999 to create and implement the public awareness and education campaign. Fleishman Hillard first conducted public opinion polls and focus groups before the new dollar coin's launch in January 2000 to assess consumer attitudes and to create and test the advertising campaign. In tests of potential advertising campaigns, focus group participants generally preferred the name "Golden Dollar" to the Sacagawea or Millennium dollar coin. To make efficient use of advertising dollars and to reach those more likely to use coin-operated technology, such as vending machines and public transit, the Mint established the primary target audience as 18- to 49-year-old adults living in urban and suburban areas. The $40.5 million paid advertising campaign developed to communicate with this target audience included 11 weeks of nationwide television ads, as well as print, radio, transit, and Internet ads. The paid advertising campaign, which began in March 2000, accounted for approximately two-thirds of the contracted new dollar coin marketing program expenditures between 1998 and 2002. The media plan for the advertising campaign featuring an image of George Washington was designed to build positive awareness, generate acceptance, and encourage the coin's use. The television ads reached an estimated 92 percent of the target audience an average of 15 times. The ad featured the image of George Washington from the dollar bill; however, the Mint reported that, according to Treasury officials, it could not point directly to the advantages of the dollar coin over the dollar bill in its television advertising campaign. One television ad proposal, for example, had a scene showing a dollar bill being rejected by a vending machine. According to Mint officials, that part of the ad was not approved and was never aired because some Treasury officials thought that it negatively portrayed the dollar bill.
Current Mint officials said that a former Mint Director participated in the meeting in which the ad was discussed and that they do not know which Treasury officials were at the meeting. Current Mint officials also said the policy to avoid direct comparisons of the dollar coin to the dollar bill was not a formal written policy. According to a current Treasury official, the $1 Coin Act authorizing the new dollar coin called for the dollar coin and the dollar note to cocirculate, and Treasury interprets that to mean that it should not favor the coin or the note. The Treasury official said that the Mint and the Bureau of Engraving and Printing are sister agencies that can create public awareness campaigns for new coins and notes without directly comparing the advantages and disadvantages of each. As part of the marketing program, the Mint and Fleishman Hillard also developed a public relations campaign to support the new dollar coin's launch, which included a float in the Macy's Thanksgiving Day Parade in November 1999. The new dollar coin was also featured in promotions with Coinstar, a company that operates supermarket-based coin-counting machines; the Wheel of Fortune game show; and General Mills's Cheerios. These promotions resulted in the distribution of 1,251,000 coins and cost $413,500, according to Mint data. The Mint also formed a retail partnership with Wal-Mart to distribute the dollar coin as change at its 2,900 Wal-Mart and Sam's Club stores throughout the United States beginning in January 2000. In addition to the Wal-Mart agreement, between 2000 and 2001, the Mint created a number of promotion partnerships in many of the targeted industry sectors with potential dollar coin circulation. As table 2 indicates, the Mint formed 23 promotion partnerships to stimulate use of the new dollar coin. Most of the estimated 132 million dollar coins distributed during the promotions went to customers in the retail, banking, entertainment, restaurant, and grocery industries. In general, the Mint said it tried to achieve a ratio of 10 new dollar coins distributed for every dollar in marketing costs; the Mint reported that the promotions, on average, distributed 30 dollar coins for every dollar in marketing cost. However, the actual number of new dollar coins distributed may have been more or less than the number shown, because the Mint did not track the actual number of coins distributed by each promotion partner. The Mint also marketed to state, local, and federal governments to increase the use of the new dollar coin. For example, the Mint and Fleishman Hillard conducted promotional events to increase the use of the coin in the transit systems in New York, Chicago, Philadelphia, and San Diego. The promotional events included a giveaway of free new dollar coins to transit riders for fare card purchases and radio and newspaper coverage of the promotions. The transit promotions resulted in the distribution of about 36,000 new dollar coins to transit riders. According to the Mint, the transit promotions cost $36,000 in media and promotional items. In addition, as part of the Mint marketing effort targeting state and local governments, the Mint also worked with bridge and road authorities to increase the use of the new dollar coin in tollbooths and encouraged cities to convert parking meters to accept the coin. The Mint also conducted marketing events using its own staff to stimulate use in the retail operations of federal government facilities, such as cafeterias.
For example, the Mint conducted a "new dollar coin day" event at the Pentagon during which about 56,000 new dollar coins were distributed, but the Mint did not track the associated costs of using Mint staff. The total cost of the new dollar coin marketing contracts, the 23 partnerships, and the launch and transit promotions was $67.1 million, excluding costs associated with using Mint staff. The Mint's new dollar coin marketing program raised public awareness of the new coin but did not produce long-term increases in circulation. Regular surveys conducted by Fleishman Hillard to monitor the impact of the new dollar coin marketing program indicated that the advertising campaign and other marketing activities considerably increased public awareness. According to the surveys, about 27 percent of the public was aware of a new dollar coin in July 1999, shortly after the final dollar coin design was announced. By July 2000, after the national advertising campaign, awareness had increased to 91 percent. A December 2001 poll, the latest available public opinion poll on new dollar coin awareness, showed that public awareness of the dollar coin remained relatively high, at about 83 percent. As shown in figure 1, the demand for dollar coins, as measured by net payout to banks from the Federal Reserve, peaked during the year that the new dollar coin was introduced and has since decreased significantly. Net payout of the Anthony dollar coin from the Federal Reserve was $72 million in 1999, the year before the new dollar coin's release. With the introduction of the new dollar coin in 2000, however, net payout and demand for dollar coins increased sharply to $558 million. But, in 2001, demand and public interest in collecting the new dollar coin dropped, and net payout decreased by 65 percent to $194 million and remained at lower levels in the first half of 2002. In May 2002, the Federal Reserve estimated an annualized figure of $120 million in new dollar coin net payout for 2002. As of January 2002, the Mint said that it had produced 1.4 billion new dollar coins and had about 300 million in inventory. According to the Mint, from January 2000 to December 2001, it released approximately 1.1 billion new dollar coins into circulation, which generated approximately $968 million in seigniorage after subtracting costs. According to the Federal Reserve, it received approximately 980 million and paid out 964 million new dollar coins during this period. The Federal Reserve held about 248 million dollar coins in inventory as of December 2001. As indicated in table 3, the number of dollar coins shipped to the Federal Reserve peaked with the coin's introduction in 2000 and dropped significantly during the following 2 fiscal years. The Mint faces a number of barriers in its efforts to increase public use of the new dollar coin, the most substantial of which is the widespread use of the dollar bill in everyday transactions and public resistance to starting to use the dollar coin. Encouraging people to switch to the dollar coin is especially difficult because retailers will not stock the dollar coin until they see the public using it; the public is unlikely to use the coin until they see retailers stocking it; and banks and armored carriers are reluctant to invest in new equipment to handle the coin until there is wide demand for it. This interdependency of demand, which economists call the "network effect," will be difficult to overcome.
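Returning briefly to the circulation figures above, two consistency checks follow directly from numbers reported in this document:

\text{decline in net payout} = \frac{\$558\text{ million} - \$194\text{ million}}{\$558\text{ million}} \approx 65\%

\text{seigniorage} \approx 1.1\text{ billion coins} \times \$0.88\text{ per coin} \approx \$968\text{ million}

The second identity shows that the reported $968 million in seigniorage is simply the per-coin margin of $0.88 applied to the roughly 1.1 billion coins released.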
Other countries, such as Australia, Canada, Japan, and many European countries, have successfully introduced coins of similar denomination, but only by phasing out the note of the same value. Other barriers that hinder wider circulation of the new dollar coin by the public include potentially negative public perceptions of a dollar coin after two failed introductions, insufficient public understanding of the dollar coin's savings to the government and other advantages of its use, and the weight and bulk of the coin. For commercial users, additional barriers limit the coin's use, among them commingling with the Anthony dollar coin, the coin's unavailability at some banks, packaging concerns, and higher delivery fees. Problems unique to individual promotion partners also created barriers to the new dollar coin's use. Our previous work and the early experience with the new dollar coin have shown that the most substantial barrier is public resistance to switching to the dollar coin rather than the dollar bill in everyday transactions. To overcome this resistance, the Mint will have to persuade businesses, consumers, and suppliers to change at the same time. Increasing the coin's use is especially difficult because of the network effects previously discussed, which will be difficult, if not impossible, to overcome while the dollar bill remains in circulation. Economists have noted that this phenomenon is not limited to dollar bills and coins. For example, researchers noted in a February 1998 Federal Reserve paper that network effects may help explain why the public, despite apparent advantages, was switching so slowly from paper-based forms of payment to electronic forms of payment. Network effects may also help explain the country's slow adoption of high-definition television (HDTV). Until demand reaches a certain level, television stations are reluctant to make the investments in the new equipment necessary to transmit HDTV; consumers, in turn, are reluctant to purchase HDTV sets until more stations are transmitting HDTV signals. Similarly, until a sufficient number of new dollar coins are in circulation, retailers and other businesses that handle a lot of coins may not be willing to spend the time and money needed to carry them. We have reported public resistance to new dollar coins in previous studies. For example, in May 1990, we evaluated the acceptability of a dollar coin to replace the dollar note by reviewing survey data and interviewing the public and industry associations. In this study, we found public resistance to a dollar coin in the United States. Nearly all of the general public and private-sector respondents indicated that the dollar note would have to be eliminated for a dollar coin to circulate successfully. These respondents uniformly believed that if a dollar note and dollar coin were both available at the same time, the public would choose to use the note. For our May 1990 report, we also contacted officials in other industrialized countries and found that most of the countries that had introduced high-denomination coins faced public resistance to the change. Officials in these countries said that a high-denomination coin could not be introduced successfully unless the note of similar value was withdrawn. For example, officials in the United Kingdom said that as long as the equivalent note circulates, the public would resist new coins. Similarly, French officials said the public accepted their new coin only when the note was demonetized.
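To make the network-effect reasoning described above concrete, consider a deliberately stylized two-sided adoption model; this sketch is ours, offered for illustration only, and does not appear in the underlying report. Let $r_t$ indicate whether retailers stock the coin in period $t$ and $u_t$ whether consumers use it, with adoption thresholds $\bar{r}, \bar{u} \in (0,1)$:

r_{t+1} = \mathbf{1}\{u_t > \bar{u}\}, \qquad u_{t+1} = \mathbf{1}\{r_t > \bar{r}\}

Starting from $(r_0, u_0) = (0, 0)$, the system remains at no adoption indefinitely, while starting above both thresholds it converges to full adoption; both outcomes are self-fulfilling. Withdrawing the note, as the countries above did, effectively forces consumer use above $\bar{u}$ and shifts the system to the adoption equilibrium, which is consistent with the foreign experience GAO describes.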
Mint, Bureau of Engraving and Printing, and Treasury officials said, in our 1990 report, that the experience of many European countries in successfully replacing a note with a coin of similar value might not be a valid indicator of the prospects the United States would have in mandating a dollar coin. These officials said that because of basic differences in those countries, such as a parliamentary form of government that made it easier to impose unpopular changes on the public, a central banking system with more control over banks, and a smaller scale of coin and currency, it would be much harder for the United States to successfully replace the dollar note with a dollar coin. More recently, four of the European countries we reviewed in our 1990 report joined eight other European Union countries on January 1, 2002, in introducing 56 billion new euro coins into circulation, including 1-euro and 2-euro coins. (For more information on the euro coins, see table 4.) In a March 1993 report on the dollar coin, we described Canada's experience in introducing a dollar coin in June 1987. Canada stopped issuing the equivalent dollar note in June 1989. We reported that the public resisted the coin initially, but 3 years after the note was withdrawn, according to public opinion survey data, only 18 percent disapproved of the coin. Similarly, businesses and associations we surveyed in the grocery, transit, and vending industries said that the majority of public resistance lasted from 3 months to 2 years. Officials in Canada said that the decision to withdraw the dollar note from circulation was based on the experiences of other countries, including the United Kingdom and Australia, as well as on the failed introduction of the Anthony dollar coin in the United States. More recently, we analyzed the use of coins and notes in the countries that make up the G-7 (see table 4) and found that the United States is unique in attempting to cocirculate a high-denomination coin and note of the same value. Consumers in Germany, France, and Italy have the choice of 1-euro and 2-euro coins, but there is no note of equal value to compete with the coins; the lowest-value euro note is the 5-euro note. Japan, the United Kingdom, and Canada have succeeded in introducing high-denomination coins by withdrawing the note of similar value. Another barrier to wider circulation is the potential negative public perception of the dollar coin because the government has tried and failed to successfully introduce both the Anthony and the new dollar coin. A March 2002 Treasury Inspector General report recommending that the Mint temporarily suspend production of the coin resulted in additional negative media stories. The Mint said that some of these reports incorrectly concluded that the Mint had ceased to produce all new dollar coins. Another obstacle is that the Mint, in its advertising, did not fully explain to the public the dollar coin's savings to the government. A December 2001 survey, the latest available, showed that the public would more strongly favor the dollar coin when the savings were explained. When asked if they would be in favor of replacing the dollar bill with the new dollar coin, 68 percent of the respondents who opposed such a plan said they would favor the replacement if doing so would save the government and taxpayers $500 million a year.
Another barrier, an informal Treasury restriction prohibiting the Mint from directly comparing the advantages of the dollar coin with the dollar bill in consumer advertisements, hindered the Mint in explaining to consumers why they should switch to the dollar coin. One television ad proposal, for example, showed a person at a vending machine reacting to a dollar bill being rejected. According to Mint officials, that part of the ad was not approved and was never aired because some Treasury officials thought that it negatively portrayed the dollar bill. Current Mint officials said that they did not participate in the meeting in which the ad was discussed and that, although the policy to avoid direct comparisons to the dollar bill is not a formal written policy, they believe the policy is still in effect. According to a current Treasury official, the $1 Coin Act authorizing the new dollar coin called for the dollar coin and the dollar note to cocirculate, and Treasury interprets that to mean that it should not favor the coin or the note. A Treasury official said that the Mint and the Bureau of Engraving and Printing are sister agencies that can create public awareness campaigns for new coins and notes without directly comparing the advantages and disadvantages of each. The Mint faces another barrier in convincing the public that the durability and other benefits of the new dollar coin outweigh the ease of carrying the dollar bill. As we reported in 1990, focus groups recognized the durability of a dollar coin but cited negative aspects of the coin, such as the bulk of transporting it. We further noted that consumer associations said the coin would be bulky and would add weight to wallets and pockets. In the last available public survey conducted by the Mint, in July 2001, 1-1/2 years after the new dollar coin was introduced, respondents said they were much more likely to use the dollar bill. They also said they were more likely to keep or save the dollar coin, show it to friends or family, or give it as a gift than to spend it on everyday items. In addition to public resistance, the Mint also faces barriers in distributing the new dollar coin. Promotion partners and other commercial users reported that supplies of new dollar coins were commingled with Anthony dollar coins. This commingling of the Anthony and new dollar coins, which occurred more frequently in 2001 and 2002, adversely affected some promotions that prominently featured the new dollar coin. For example, in 2001, a national restaurant chain changed all of its menus to feature a menu item called the "Golden Dollar" pancake, but, in some cities, the restaurant chain had difficulty obtaining supplies of the coin to support the promotion. In some cases, the banks had a supply of dollar coins, but half of the coins were new golden dollar coins and half were silver-colored Anthony dollar coins. Commingling occurs when Anthony and new dollar coins are used in commerce and later are processed by Federal Reserve Banks and armored carriers. Machines used in the coin distribution system are not able to separate the two coins because they have a similar electromagnetic signature. Businesses also reported difficulty in obtaining a reliable supply of new dollar coins. For example, in its assessment of a large new dollar coin promotion with a national grocery chain, Marketbridge noted that the coin was not always available from armored carriers.
Some of the distribution problems occurred because some armored carriers lack adequate equipment. According to the Mint, to handle high volumes of new dollar coins, Brinks, a large armored carrier, would have to invest $40,000 in coin-rolling machines at many of its 154 branch office locations around the country. Other armored carriers, according to the Mint, would also likely need to upgrade equipment to handle high volumes if the dollar coin became popular. Although the Wal-Mart promotion served to distribute over 90 million new dollar coins, there were also early reports of availability problems related to the promotion. In their discussions with the Mint in late 1999, banks asked the Mint to delay the launch of the new dollar coin until March 2000 because expected year 2000 computer problems would require the banks' attention in January and February 2000. The Mint agreed to delay the launch of the new dollar coin until March. However, in December 1999, the Mint announced the partnership with Wal-Mart and began to distribute the coins to Wal-Mart in January 2000. The publicity surrounding this launch created public demand for the coin at banks throughout the country. Bank customers who requested the coin could not always find it, and soon the banks had a significant backlog of orders for the coin with the Federal Reserve. The Mint and the Federal Reserve, responding to delays in the new dollar coin's distribution to banks such as community banks, credit unions, and savings and loans, set up a temporary Direct Shipment Program beginning March 1, 2000. The program gave banks the ability to place orders on the Internet for up to 2,000 new dollar coins in rolls and have them shipped directly from the Mint. However, according to the Mint, only a small percentage of the banks that received a letter on the direct ship program had ordered coins a month after the program began. Bank officials said that these initial shortage problems were limited to the first few months after the new dollar coin's launch in 2000. According to the Mint, some businesses were also reluctant to order dollar coins because they were charged higher delivery fees by the armored carriers. The armored carriers generally charge retailers and other businesses more for delivery of dollar coins because the coins weigh more than paper dollars. For example, some carriers charged $2 per $1,000 box to deliver rolled dollar coins, compared with $0.25 for the equivalent value in dollar bills. A Marketbridge report also noted that some businesses wanted a greater choice of coin packaging options and quantities. While large-volume coin-operated businesses, such as car washes, might want coins in large bags, and other businesses might want a full box of 1,000 coins, smaller businesses attempted to obtain coins wrapped in rolls of 25 dollar coins but could not always find them. To make rolls of dollar coins more available and reduce the cost to businesses of obtaining coins, the Mint, from August to December 2000, contracted with outside companies to have 282,240,000 dollar coins wrapped in rolls at a cost of $927,982. These Mint-wrapped rolls were to be provided to businesses by armored carriers and financial institutions without those businesses being charged for wrapping. Though the coin-wrapping contract increased the supply of new dollar coins in rolls, the Mint found that some businesses were still subject to other armored carrier fees, such as fees for moving and storing the coin.
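A short arithmetic sketch of the fee and wrapping figures above, using only numbers cited in this report:

\text{delivery fee ratio} = \frac{\$2.00}{\$0.25} = 8\text{ times the cost of delivering the same value in bills}

\text{wrapping cost} = \frac{\$927{,}982}{282{,}240{,}000\text{ coins}} \approx \$0.0033\text{ per coin, or about }\$0.08\text{ per 25-coin roll}

In other words, the Mint's wrapping subsidy amounted to roughly a third of a cent per coin, while armored carrier delivery of rolled coins cost businesses eight times as much as delivery of the equivalent value in dollar bills.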
Some Mint officials said that the wrapping of dollar coins in rolls by Mint contractors might have created more problems because the armored carriers were not forced to develop a rolling capability. Without proof of demand for the new dollar coin, armored carriers were reluctant to invest in new equipment to roll dollar coins, even when demand for the coin was high in the first half of 2000. Many of these distribution barriers were identified in a Marketbridge promotion progress report in August 2001. As table 5 indicates, of the distribution problems identified by Marketbridge, commingling and difficulty in finding coins were by far the most common. To evaluate the extent that these barriers affected promotion partners, we sent surveys to 10 large promotion partners that had agreements with the Mint to promote the new dollar coin, seeking information on the extent that barriers such as commingling hindered the success of their new dollar coin promotions. Seven promotion partners completed the survey. When asked the extent that commingling hindered the success of their promotions while they were in progress, 2 of the partners said to a very great extent, 1 said to a great extent, and 4 said to no extent. When asked if commingling hindered the use of the new dollar coin in their business after the promotion, when dollar coin use became less frequent, only 1 said to a very great extent, 1 said to a great extent, 3 said to no extent, and 2 did not know or said not applicable because the promotion was still in progress at the time of the survey. In contrast to public reports, only 2 promotion partners said that they had difficulty obtaining new dollar coins for their promotions. In general, when asked the extent that coin-wrapping fees and shipping costs hindered their promotions, most of the survey respondents said these problems had little or no effect. Our promotion partner survey also indicated that while the promotions distributed new dollar coins, it is unlikely that they resulted in a long-term increase in the coin's use. We asked the 7 firms how frequently customers used the new dollar coin to make purchases during the promotion. One of the promotion partners said very frequently, 1 said frequently, 3 said sometimes, and 2 said very infrequently. The survey indicated even lower levels of use after the promotion. When asked if customers were using the coin to make purchases after the promotion, 1 said sometimes, 3 said infrequently, 2 said very infrequently, and 1 did not know or said not applicable because the promotion was still in progress at the time of the survey. No promotion partner said customers were using the coin to make purchases frequently or very frequently after the promotion. A majority of promotion partners agreed that the dollar bill would need to be eliminated for the public and businesses to accept and regularly use the new dollar coin. In addition, in its promotion program assessments, Marketbridge found indications that use of the new dollar coin was not sustained in these businesses during and after the promotions. For example, in its assessment of a national grocery chain promotion, Marketbridge noted that the average number of dollar coins distributed decreased over time from 3,600 per month at the beginning of the promotion, to 1,400 per month 60 days into the promotion, to 600 per month toward the end of the promotion.
About 8 months after the promotion began, the average number of dollar coins distributed had dropped to 340 per month. The Mint promoted the use of the new dollar coin in the government sector, but the results of these efforts were mixed. For example, the Mint contacted local transit authorities to increase awareness of the new dollar coin and increase the number of transit systems using it. As part of this transit marketing effort, the Mint, working with Fleishman Hillard in selected cities, created promotional events that included a giveaway of free new dollar coins to transit riders, radio promotions, media coverage, and attendance by local officials. For example, the Mint distributed about 12,000 new dollar coins to transit riders in New York; 12,000 coins in Chicago; 6,000 coins in Philadelphia; and 6,000 coins in San Diego. The Mint said that many of the largest transit systems retrofitted or purchased new equipment and have the capability to use the dollar coin. In April 2002, the U.S. Federal Transit Administration reported that 19 of the 20 largest transit systems accepted the dollar coin in either their bus or rail systems. The Federal Transit Administration found that the Washington Metropolitan Area Transit Authority, the fifth largest transit system, accepted dollar coins in its buses but not in its subway system. The Federal Transit Administration also found that the Bay Area Rapid Transit system, the twelfth largest transit system, did not accept the dollar coin in either its bus or rail operations. Mint officials said that they were not able to make progress in increasing the use of the dollar coin in these two transit systems. The Mint also worked with state and local bridge and road authorities to increase the use of the new dollar coin in tollbooths and encouraged cities to convert parking meters to accept the coin. (For more information on the use of the dollar coin in state and local government transit systems and tollbooths, see app. II.) According to Mint officials, the Mint used its own staff to conduct marketing events to stimulate the new dollar coin's use in retail operations, such as cafeterias within federal facilities. For example, the Mint conducted a new dollar coin event at the Pentagon. According to the Mint, this promotion distributed about 56,000 new dollar coins. Mint officials also met with officials on military bases to discuss the dollar coin's use, but these meetings did not result in formal promotional programs or increase new dollar coin circulation. As part of its federal government marketing efforts, the Mint also sought to increase the number of postal vending machines using the new dollar coin but had limited success. In December 1998, before the coin's launch, a Mint contractor study noted that the U.S. Postal Service had approximately 11,000 stamp vending machines that distributed dollar coins. However, in April 2002, over 2 years after the introduction of the new dollar coin, the Postal Service still had only 12,000 of its 34,000 vending machines able to distribute the dollar coin as change. The Postal Service said that it was able to upgrade only an additional 1,000 vending machines between 1998 and 2002 because it lacked the funds to upgrade or replace the older machines. Despite the lack of progress, the Mint said that the Postal Service is still the largest distributor of dollar coins. In general, the Mint did not track the costs of using its own staff for marketing efforts to federal government agencies.
In general, the Mint’s April 24, 2002, marketing plan for fiscal years 2002 and 2003 describes a program that is much smaller in scope than the marketing campaign used to launch the new dollar coin in 2000. The Mint plan provides a listing of most of the barriers to increasing new dollar coin use. In addition, the plan notes the importance of conducting research and gives a description of planned research regarding consumer resistance, distribution barriers, and sustaining use of the coin by businesses. Although the plan estimates that the dollar coin is used in 4 percent of dollar transactions, the plan does not lay out a specific market share or net payout goal for fiscal year 2003. As is consistent with previous studies, the Mint plan also notes that successfully achieving widespread use of the new dollar coin will be difficult if it cocirculates with the dollar bill. However, the plan does not discuss specifically how to address interdependent demand or network effects. The plan also notes that the recent negative media coverage of the new dollar coin will be a significant challenge for the Mint’s marketing communications and public relations programs, but the Mint does not explain in detail how it will counter this challenge. Although the plan notes potential government savings, it does not provide a strategy for explaining dollar coin government savings to the public or for directly comparing the advantages of the dollar coin with those of the dollar bill. A key element of the new marketing plan is a description of the barriers that hinder the distribution and circulation of the new dollar coin. Although the Mint’s plan identifies the key barriers in the distribution channel, such as the unavailability of coins, commingling, the lack of availability of new dollar coins in rolls, and additional fees charged by armored carriers, the marketing plan does not specifically outline how the Mint will address those barriers. Instead, the Mint calls for research on barriers in the first phase of the new plan that would be conducted in collaboration with the Federal Reserve Bank System, banks, armored carriers, and commercial users. Although the Mint plan notes that cocirculation with the dollar bill is a barrier, the Mint does not provide much detail on the nature and extent of the barrier or how it will attempt to overcome public resistance. In addition, the Mint does not fully describe previous attempts in other countries to cocirculate a high-denomination note and coin. The plan does not include any information on network effects or indicate how an understanding of the network effects in currency and coins and other payment systems could improve future marketing strategy. The Mint plan includes some future programs to market to consumers that are designed to increase public demand for the coin. However, the plan does not describe how these programs will help the Mint overcome specific barriers and increase new dollar coin circulation. For example, included in the plan section on increasing sustained circulation is a description of a licensing agreement with The Source International (TSI). The intent of the agreement is to build brand awareness for the Mint and the new dollar coin among National Association for Stock Car Auto Racing (NASCAR) fans. Under the TSI agreement, in the 2002 Cadillac Grand Prix in July and in one race each year from 2003 to 2008, TSI will have one car with an image of the new dollar coin on the hood and a Web site address for the Mint on the rear spoiler. 
The agreement also calls for TSI to sell die-cast replica models of the new dollar coin racing car. The Mint also said the agreement would require no outlay of funds and that the Mint would receive royalty payments from each new dollar coin model car sold. The Mint said that TSI would also attempt to have new dollar coins dispensed as change to spectators for cash purchases during each race. While the TSI agreement could increase new dollar coin brand recognition and awareness at one race each year and through model car sales, the plan does not describe how the agreement would contribute to an increase in the coin's widespread use and circulation. The Mint also has an existing product licensing program that encourages the placement of products related to coin collection into the retail market. In addition, the Mint plans to work with the Department of the Army's Corps of Engineers on new dollar coin promotion activities, such as placing a new dollar coin image on brochures associated with the Lewis and Clark Bicentennial activities that will occur along the expedition's route from 2003 to 2006. Although these programs could increase awareness of the new dollar coin among coin collectors or those visiting Corps of Engineers facilities, the Mint plan does not provide much detail on how these marketing programs would increase the public's use of the coin in everyday transactions. Unlike the earlier new dollar coin marketing program, the new plan does not envision a large national advertising campaign directed at the public. The plan calls for research on public resistance to the new dollar coin before a full marketing program is implemented to stimulate consumer use of the coin. The Mint plan requests $0.5 million to $1.0 million in fiscal year 2002, followed by $10 million to $15 million in fiscal year 2003, for a program to maintain the new dollar coin's presence in the marketplace. The plan calls for a continuation of ongoing promotions and, following research, the identification of key target markets before marketing activities are implemented. Mint officials said that transit and vending, in addition to governments, are likely markets to target. The Mint plan does include plans for public relations and media outreach programs to overcome negative consumer perception. However, the plan does not provide any specifics on how the Mint will overcome recent negative media coverage or the public's impression that the coin may have been discontinued. The plan does not address the advantages of including a description of dollar coin savings to the government in its marketing communications or discuss any restrictions on directly comparing the advantages of the dollar coin with those of the dollar bill. Mint officials said that the official policy is for cocirculation of both the dollar coin and note and that the Mint did not describe dollar coin savings to the government in its marketing because the savings could occur only if the dollar bill were withdrawn from circulation. Treasury said that it interprets cocirculation to mean that marketing programs for the coin or note should not directly compare the advantages of the dollar coin with those of the dollar note. A key element of the plan is an assessment by the Mint of the progress made in new dollar coin circulation. The Mint plan first attempts to establish the existing level of dollar coin circulation by comparing the number of new dollar coins in circulation with the number of dollar bills in circulation.
The Mint marketing plan estimates that people used the dollar coin in about 4 percent of dollar transactions. To arrive at this number, the Mint took data from a public opinion poll and estimated that about one-third of the 850 million coins distributed to the public, or about 300 million coins, were actually used and in circulation. The Mint's estimate of 300 million coins in "circulation" was then calculated to be about 4 percent of the 7.5 billion dollar bills in circulation. Other Mint surveys indicate that the new dollar coin's share of all dollar transactions may be lower. For example, the Mint conducted a public opinion survey in July 2001 to test the impact of the new dollar coin marketing program. Among other survey questions, respondents were asked the number of dollar bills and dollar coins they had received and spent in the last few days. Respondents said that in July 2001 they had received 25 dollar bills and spent 24. In contrast, respondents said they had received 0.2 new dollar coins and spent 0.3 new dollar coins, on average. This equates to a new dollar coin share of about 1 percent of all dollar transactions (the illustrative calculation below reproduces both estimates). The Mint plan does not set goals in market share, net payout, or number of dollar coins for fiscal year 2003. The increase in the number of dollar coins shipped to the Federal Reserve from 1999 to 2000 did not have a measurable effect on the number of dollar notes in circulation. The Federal Reserve said that the number of dollar notes in circulation increased from 1999 to 2000 at the same time that dollar coin shipments were increasing and that changes in demand for dollar notes are normally due to fluctuations in economic activity. The Federal Reserve said that it was unlikely that the slight decrease in the number of dollar notes in circulation, from 7.65 billion in 2000 to 7.64 billion in 2001, could be attributed to the new dollar coin, and that the decline was more likely due to a drop in economic activity in 2001. The Mint plan includes some actions to promote the use of the new dollar coin in certain targeted markets. For example, the Mint plan indicates it will try to increase circulation of the new dollar coin in federal agencies and on military bases, but the plan does not explain the lack of success to date in increasing the use of the new dollar coin in federal agencies or provide specific objectives or programs for how the Mint will increase circulation. The plan also states that the Mint intends to honor existing agreements with several minor league baseball teams. The Mint lists the teams and notes that the agreements with minor league teams have distributed over 1 million coins. Despite the noted potential of baseball promotions to distribute dollar coins, the plan does not include any baseball team marketing activities beyond honoring several existing agreements. The Mint plan also states that it will explore opportunities in parking meters, toll roads, and transit systems, but it provides no data on how the Mint chose these as potential markets, how much each of these markets might yield in increased new dollar coin circulation, or at what cost. Although the Mint, through its contractors, previously evaluated and identified the markets with high potential for new dollar coin use, the plan does not fully incorporate this information into its analysis.
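Both circulation estimates discussed above can be reconstructed from figures cited in this report. The short script below (shown in Python) is illustrative only; the assumption that about one-third of distributed coins were actually used is the Mint's, and the survey figures are averages from the July 2001 poll.

    # Two rough measures of new dollar coin use, reconstructed for illustration.

    # Mint measure: coins assumed to circulate, as a share of dollar bills.
    coins_circulating = 850_000_000 / 3        # Mint assumed about one-third were used
    bills_in_circulation = 7_500_000_000
    print(f"Mint measure: {coins_circulating / bills_in_circulation:.0%}")   # about 4%

    # Survey measure: coins as a share of all dollar transactions (July 2001 poll).
    coins_handled = 0.2 + 0.3    # average coins received plus coins spent
    bills_handled = 25 + 24      # average bills received plus bills spent
    print(f"Survey measure: {coins_handled / (coins_handled + bills_handled):.0%}")   # about 1%

The gap between the two measures reflects their different bases: the first counts coins presumed to be circulating, while the second counts coins actually changing hands in transactions.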
Another key element of the new marketing plan is a description of the barriers that hinder the distribution of the new dollar coin. The Mint has identified the key barriers in the distribution channel, which moves the dollar coin through the Federal Reserve Bank System, banks, and armored carriers to commercial users, such as retailers. It states that unavailability of the coins at banks, commingling, and the lack of availability of new dollar coins in the right mix of bags and rolls are obstacles to stimulating commercial use of the new dollar coin. In addition, the Mint identifies other distribution barriers, such as the additional fees charged by armored carriers for coins compared with fees charged for bills, and other perceived barriers, such as the lack of room in the cash drawer. To address distribution barriers, the plan calls for research, to be conducted in collaboration with the Federal Reserve Bank System, banks, armored carriers, and commercial users. In addition, the plan calls for a collaborative study with the Federal Reserve on the feasibility of using machines to separate Anthony and new dollar coins that are commingled. The plan states that the research on distribution barriers is intended to help the Mint validate its understanding of the barriers and identify ways to overcome them. However, the plan does not indicate what effect the removal of barriers would have on circulation. For example, the plan does not indicate whether resolving the commingling problem would lead to an increase in use of the dollar coin. Table 6 shows a summary of the Mint's plans to address the key barriers. As required by the $1 Coin Act of 1997, which authorized the new dollar coin, the Mint provided a report on the progress of the marketing of the new dollar coin on March 30, 2001. However, the Mint's 2001 report did not provide details on the nature and extent of the new dollar coin's use in commerce; provide full information on contracts and agreements that the Mint had engaged in to market the new dollar coin, including the costs of the marketing campaign; or give a detailed description of the barriers that the Mint encountered. The 2002 report, which was directed by the House report accompanying the Treasury's 2002 appropriations act, gave more information on demand for the coin, contracts and promotional agreements with nongovernment entities, and marketing costs, and a fuller description of the distribution and other problems encountered. However, in the 2002 report, the Mint did not provide a comprehensive analysis of the nature and extent of coin use, describe the outcomes and progress made in increasing Federal Reserve net payout in all of the industry sectors in which marketing efforts were targeted, or establish measurable future goals for these sectors. The Mint's March 2001 report provided an overview of the public awareness and business marketing programs during 2000, but the report did not fully describe the nature and extent of the new dollar coin's use in commerce, all of the promotional efforts used by the Mint, or the barriers encountered. According to the 2001 report, in the first year of the coin's production, the Mint produced over 1.2 billion new dollar coins and released into circulation over 700 million of these coins. However, the 2001 report contains no information showing how many of these coins were actually used in commerce.
The 2001 report noted that the Mint regularly collects information to quantify marketing program progress, but the report did not provide an analysis of this information. The Mint report also said that the growing list of promotional agreements is evidence that the public is using the new dollar coin and that people are collecting the coin, but the 2001 report did not quantify any sustained increases in circulation generated by the promotions. The 2001 report cited only limited survey data that might provide some insight into public acceptance and use. The report cited survey data showing that, rather than using the coin, 66 percent of the public said that they were saving the coins. The report briefly noted that, in late 2000, 29 percent of the surveyed adults said that they would prefer to receive the coin in change instead of the dollar bill. The survey data on preference for receiving change are somewhat useful, but they give no indication of what portion of these people would use the coin once they received it as change. The 2001 report also noted that a May 2000 survey showed that 57 percent of the public would likely use the new dollar coin for everyday transactions as the coin becomes commonly circulated. This is one of a number of indicators collected in the survey; however, the information was almost a year old at the time of the report, and it is limited because it shows only the likelihood of use if the coin were, at some future point, to become "commonly circulated." The same Mint survey had information that indicated low public use of the coin, but these data were not included in the 2001 report. For example, the survey showed that in May 2000, 33 percent of those surveyed had received the coin and, of these, only 21 percent were at least somewhat likely to spend the new dollar coin on everyday items. The 2001 report provided an overview of the public awareness program, but it did not detail the use of contractors for advertising and public relations or the costs of these contracted activities. The report noted that the Mint had targeted eight industries and had entered into partnership agreements with commercial entities to promote the new dollar coin. However, although the report provided examples of these promotion agreements, it did not provide a comprehensive list of the agreements, the target number of new dollar coins to distribute under each promotion, or the cost to the Mint of each promotion. The 2001 report provided a generally positive picture of the new dollar coin and made only a brief mention of the barriers to increasing its use. For example, the Mint noted the challenges presented by the failure of the Anthony dollar coin and also noted that some banks and financial institutions were reluctant to order the new dollar coin. The Senate committee report and the conference report for the Treasury's fiscal year 2002 appropriations directed the Mint to submit to the Appropriations Committees a marketing plan concerning its promotional efforts relating to the new dollar coin and stipulated that the plan must be approved by the committees before the Mint could draw additional funds from the Mint Public Enterprise Fund to promote the dollar coin. The Mint's March 2002 report provided an overview of the public awareness and business marketing programs, but there was still limited information on the nature and extent of the coin's use in commerce.
The Mint did provide an estimate of the percentage of dollar transactions in which the new dollar coin is used compared with the dollar bill. As previously noted, the Mint has stated that, on the basis of survey estimates and the number of new dollar coins in circulation, people use the dollar coin in 4 percent of dollar transactions. This figure is a very rough estimate, and it does not provide the full evaluation of dollar coin use needed to make future decisions on marketing programs. However, the Mint, through contractors, regularly collected public survey information that was not used in the 2002 report, and these survey data may have been more useful. For example, one of these surveys was conducted in May 2000 and in May and July 2001 to help the Mint assess a number of key questions, such as consumers' likelihood of saving and using the dollar coin as well as the number of new dollar coins used relative to the dollar bill. Respondents to the July 2001 survey indicated that they used the dollar coin in about 1 percent of dollar transactions. The 2002 report gave a much more comprehensive description of the use of nongovernmental contractors for advertising and public relations and the costs of these contracted activities. Unlike the 2001 report, the 2002 report provided an appendix containing a list of the promotion agreements with commercial entities in which the Mint engaged. In addition, the Mint provided, for each promotion agreement, the goal for the number of new dollar coins to distribute and the cost to the Mint to implement the promotion. As previously mentioned, the Mint did not consistently monitor the actual number of coins distributed; therefore, we were not able to substantiate the exact number of coins actually distributed. Unlike the 2001 report, in which the Mint provided a generally positive picture of the new dollar coin, the 2002 report described in more detail the barriers to increasing use of the coin. For example, the Mint described commingling of the Anthony and new dollar coins as a significant barrier. However, the plan did not indicate what effect resolving the commingling issue would have on circulation. The 2002 report also gave additional information on other reported barriers, such as the availability of rolls of new dollar coins, extra fees for delivery and handling, and public resistance to switching to the new dollar coin from the dollar bill. However, the 2002 report did not provide information on how the Mint will overcome these barriers or what effect removing them could have on overall circulation. Although the Mint's $67.1 million marketing and advertising program to promote the new dollar coin to the public significantly raised awareness of the coin, the coin is not widely used by consumers in everyday transactions. In addition, the Mint does not have data showing that increased marketing and promotion efforts would have a long-term positive effect on dollar coin use as long as the coin cocirculates with the dollar note. While the Mint said it could assist armored carriers in purchasing equipment to roll dollar coins or pay to ship new dollar coins that are not commingled with Anthony coins directly to businesses, this would not necessarily mean that the public would demand or use the coin. As a result, continuing to spend funds for these programs may not result in increased use of the new dollar coin.
However, recognizing Congress's desire for cocirculation of the dollar coin and the dollar note, it appears reasonable for the Mint to conduct planned research to further assess distribution barriers and determine the appropriate steps and costs that are necessary to resolve these barriers. Because the Mint does not know whether additional marketing is likely to increase use, and because past efforts have had limited effects, we recommend that, aside from honoring existing promotion agreements and conducting planned research on public acceptance and distribution barriers, the Director of the Mint suspend further expenditures for marketing and promoting the new dollar coin until research is completed and the Mint can demonstrate that such efforts are likely to increase long-term coin circulation and/or are necessary to achieve Congress's desire for cocirculation. We further recommend that the Mint revise the marketing plan it submitted to Congress to reflect such an approach and work with Congress to reach an agreement on an appropriate amount of funds to use for these activities. We provided copies of the draft of this report for comment to the Secretary of the Treasury; the Director of the Mint; and the Chairman, Board of Governors of the Federal Reserve System. On August 30, 2002, we received written comments from the Director of the Mint, which are reprinted in appendix III, and on August 23, 2002, we received written comments from the Director of the Division of Reserve Bank Operations, Federal Reserve Board, which are reprinted in appendix IV. The Secretary did not provide comments. The Director of the Mint said that the Mint generally concurred with the findings and recommendation in our report. The Mint Director also offered some additional comments on the barriers that we identified and how the Mint plans to address them. The Mint Director said the Mint agrees that there is no available evidence that the elimination of distribution barriers would have a long-term positive effect on dollar coin use. She said the Mint would examine several possible research approaches to study the removal of distribution barriers but would not invest substantial funds until it could support the expenditures. The Mint Director also said the Mint will conduct research to identify and further assess the barriers to the new dollar coin's use in daily commerce. The Director said the Mint would incorporate the results of that research and an understanding of network effects into a revised new dollar coin marketing plan, as we recommended. The Mint Director also commented on the different approaches our report discussed that were used to calculate the level of new dollar coin circulation. The Director noted that the Mint's 4 percent figure is based on the number of dollar coins issued as a percentage of the number of dollars issued, and that the other measure cited in our report is based on the number of new dollar coins used in financial transactions as a percentage of the number of dollars used in financial transactions. We believe that the latter estimate of 1 percent may be a more representative measure of the coin's actual use by the public because it is based on a nationally representative public poll conducted for the Mint that asked questions specifically about the number of dollar coins and notes used in transactions in the past few days.
Nevertheless, while we recognize that there are various measures with which to gauge the public’s use of the new dollar coin, neither approach cited in the report indicated widespread use. In addition, the Director said that a major deficiency of the Anthony dollar coin was that people avoided using it because they were unable to distinguish it from a quarter-dollar coin. In our past reports, we cited this as one barrier, but also reported that the continued circulation of the dollar bill was the most substantial barrier to the Anthony dollar coin’s use. The Director of the Division of Reserve Bank Operations, Federal Reserve Board, said that the Federal Reserve concurred with our recommendations. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Banking, Housing, and Urban Affairs; the House Committee on Financial Services; and the Subcommittee on Treasury, Postal Service, and General Government, House Committee on Appropriations; the Secretary of the Treasury; the Chairman of the Board of Governors of the Federal Reserve System; and other interested parties. We will also make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Major contributors to this report were Brad Dubbs, John S. Baldwin, Emily Dolan, Bess Eisenstadt, Susan Michal-Smith, Walter Vance, and Greg Wilmoth. If you or your staff have any questions, please contact me on (202) 512-2834 or at ungarb@gao.gov. In studying the United States Mint’s marketing of the new dollar coin, our objectives were to (1) describe the Mint’s new dollar coin marketing program costs, the contracts and promotional programs in which the Mint engaged, and the revenues that were generated; (2) assess the barriers the Mint faces in increasing the public’s use of the new dollar coin; (3) describe the Mint’s future plans to promote the new dollar coin and the extent that these plans address the barriers; and (4) assess the extent that the Mint’s 2001 and 2002 reports to Congress on the marketing of the new dollar coin fully and accurately described the marketing programs, the results obtained, and the problems encountered. To obtain information regarding marketing program contracts, promotional programs, costs, and revenues, we interviewed officials from the Mint and the Federal Reserve System and managers from the marketing program contractors. We also collected and reviewed marketing program contracts, progress reports, plans, promotion agreements, press releases, and other related documents from the Mint and contractors. In addition, we requested information from the U.S. Postal Service on dollar coin use in vending machines and reviewed a Department of Transportation report on dollar coin use in transit systems and toll roads, but we did not independently verify the data from those agencies. Although we reviewed signed promotion agreements to determine the number of promotions the Mint conducted, the Mint did not provide documentation that would enable us to verify the actual number of coins distributed during each of the promotions. Our review did not include a financial audit of the marketing program. Also, we did not conduct an audit of paid advertising expenditures or audit the media to determine if all of the ads ran as planned. We also did not conduct a review of the contract award process or a review of how internal controls were applied during contract management. 
To evaluate the barriers to increasing dollar coin circulation, we reviewed our previous reports and the laws and congressional hearings related to the new dollar coin; interviewed officials from the Mint, Mint contractors, and the Federal Reserve; examined Mint and contractor marketing documents, reports, and surveys; obtained information on high-denomination coins and notes from industrialized countries; and interviewed businesses and trade associations. We also sent questionnaires regarding barriers to increased circulation to 10 promotion partners that distributed new dollar coins. We focused on these large promotion partners because these firms represent all of the targeted industry sectors and make up 99 percent of the total promotion partner distribution goal. Seven of the 10 promotion partners responded to our survey. We did not survey the other promotion partners because of resource limitations and because most of the other promotions were with minor league baseball teams in smaller cities that had coin distribution goals of under 300,000 dollar coins. In interviewing businesses and associations, we contacted those we believed were most affected by the introduction of a new dollar coin, including bank trade associations, a trade association representing coin-operated businesses, and armored carriers. We also reviewed articles on the Susan B. Anthony dollar coin and interviewed the authors of these articles. Further, we obtained data on the highest-value coins and lowest-value notes used by the G-7 countries as of June 27, 2002. To obtain information on the Mint's plans to overcome the barriers to increased dollar coin circulation, we interviewed officials from the Mint, Federal Reserve, Mint contractors, and trade associations and reviewed Mint documents and the 2002 new dollar coin marketing plan. We also reviewed our previous reports and studies on the dollar coin. We then identified specific actions in the Mint plan and analyzed the extent that these actions address the barriers identified in our review. To determine the extent that the 2001 and 2002 Mint reports to Congress fully and accurately described the marketing of the new dollar coin, results obtained, and problems encountered, we interviewed Mint officials and reviewed the reports. We also reviewed marketing program contracts, progress reports, plans, promotion agreements, and other related documents from the Mint, contractors, and the Federal Reserve. We then compared the information on marketing programs, costs, barriers, and use in the 2001 and 2002 reports with the information obtained in our review. We used some of the data on awareness and use from regular Mint surveys conducted during the marketing of the new dollar coin; however, we did not conduct a comprehensive review of the methodology used in these surveys. Our review did not include an audit of the contracts or promotion partner agreements noted in the Mint reports. We did our work from September 2001 to September 2002 in Washington, D.C., in accordance with generally accepted government auditing standards. From market research, the Mint and its contractors determined that within the state and local government sectors, transit system authorities and toll roads had good potential for dollar coin use. As shown in table 7, in April 2002, 19 out of the 20 largest transit systems accepted new dollar coins in either bus or rail modes. The Mint also targeted toll roads for some of its marketing efforts.
Table 8 shows some data on dollar coin use in 20 large toll road operators as of December 2001.

If the public used the dollar coin rather than the dollar note, the government could potentially save up to $500 million annually. The Mint spent $67.1 million to promote the new dollar coin from 1998 to 2001, including expenditures for a marketing and advertising program; public relations and publicity programs; 23 partnerships with banking, entertainment, retail, grocery, and restaurant chains; and promotional events with transit agencies. Most of the $67.1 million was used for a national advertising campaign to build public awareness, generate acceptance, and encourage use of the new dollar coin. The Mint also worked with contractors to stimulate the new dollar coin's use in state and local government operations and used its own staff for marketing activities in federal government facilities, but it did not track the costs for the use of Mint staff. According to the Mint, between January 2000 and December 2001, the new dollar coin had generated $1.1 billion in revenue and $968 million in seigniorage. The Mint faces several barriers in its efforts to increase use of the new dollar coin. The most substantial barrier is the current widespread use of the dollar bill in everyday transactions and public resistance to begin using the new dollar coin. Other barriers that hinder wider circulation include (1) negative perceptions the public may have of the coin after two failed introductions, (2) lack of public information about the savings to the government from using the new coin, (3) lack of public awareness about the comparative advantages of the dollar coin over the dollar bill, and (4) the idea that the ease of carrying the bill is more beneficial than the durability of the dollar coin. In general, the Mint's marketing plan describes a program that is much smaller in scope than the marketing campaign used to launch the new dollar coin in 2000.
The Mint plans to address some, but not all, of the barriers to increasing use and recognizes that successfully achieving widespread use of the new dollar coin will be difficult if the dollar bill cocirculates with the new dollar coin. The Mint's 2001 report to Congress did not fully and accurately describe the costs of the marketing campaign, the results obtained, and problems encountered. The 2002 report gave more details on marketing costs and a fuller description of the problems encountered.
GAO has issued several reports on the establishment of AFRICOM and its components. In 2008, we testified that DOD had made progress in transferring activities, staffing the command, and establishing an interim headquarters for AFRICOM but had not yet fully estimated the additional costs of establishing and operating the command. We also reported in 2008 that DOD had not reached an agreement with the Department of State (State) and potential host nations on the structure and location of the command's presence in Africa, and that such uncertainty hindered DOD's ability to estimate future funding requirements and raised questions about whether DOD's concept for developing enduring relationships on the continent could be achieved. In 2009, we reported that the total future cost of establishing AFRICOM would be significant but remained unclear because decisions on the locations of AFRICOM's permanent headquarters and its supporting offices in Africa had not been made. We also stated that it would be difficult to assess the merits of infrastructure investments in Germany for AFRICOM's interim headquarters without knowing how long AFRICOM would use these facilities or how they would be used after a permanent location was established. To determine the long-term fiscal investment for AFRICOM's infrastructure, we recommended that the Secretary of Defense, in consultation with the Secretary of State, as appropriate, conduct an assessment of possible locations for AFRICOM's permanent headquarters and any supporting offices in Africa that would be based on transparent criteria, methodology, and assumptions; include the full cost and time frames to construct and support proposed locations; evaluate how each location would contribute to AFRICOM's mission consistent with the criteria of the study; and consider geopolitical and operational risks and barriers in implementing each alternative. We further recommended that DOD limit expenditures on temporary AFRICOM infrastructure until decisions were made on the long-term locations for the command. DOD partially agreed with the recommendations in our 2009 report, stating that in some cases, actions were already underway that would address the issues identified in our report. In 2007, the President directed the Secretary of Defense to establish a new geographic combatant command, consolidating the responsibility for DOD activities in Africa that had been shared by U.S. Central Command, U.S. Pacific Command, and U.S. European Command. AFRICOM was initially established as a subunified command within the European Command and was thus purposely staffed by European Command personnel. Because of this link to the European Command, DOD located AFRICOM's headquarters at Kelley Barracks in Stuttgart, Germany, where the European Command headquarters was located, with the intent that this location would be temporary until a permanent location was selected. In 2008, AFRICOM became fully operational as a separate, independent geographic command. Since that time, DOD has considered several courses of action for the permanent placement of the headquarters. Initially, DOD's goal was to locate AFRICOM headquarters in Africa, but that goal was later abandoned, in part because of what DOD described as significant projected costs and sensitivities on the part of African countries to having such a presence on the continent.
Consequently, in 2008, DOD conducted an analysis of other locations in Europe and the United States, using cost and operational factors as criteria against which to evaluate the permanent placement of AFRICOM headquarters. Although this 2008 analysis contained no recommendation about where AFRICOM's headquarters should be permanently located, it concluded that several locations in Europe and the United States would be operationally feasible as well as less expensive than Stuttgart. Finally, in January 2013, the Secretary of Defense decided to keep AFRICOM's headquarters in Stuttgart, Germany. This decision followed the completion of an analysis, directed by the House Armed Services Committee in 2011 and again in 2012, that was conducted by CAPE. The study, which presented the costs and benefits of maintaining AFRICOM's headquarters in Stuttgart and of relocating it to the United States, stated that the AFRICOM commander had identified certain operational concerns as critical and that even though the operational risks could be mitigated, it was the AFRICOM commander's professional judgment that the command would be less effective in the United States. In announcing the decision to keep AFRICOM's headquarters in Stuttgart, the Secretary of Defense noted that the commander had judged that the headquarters would be more operationally effective from its current location, given shared resources with the U.S. European Command. The initial plan for AFRICOM was to have a central headquarters located on the African continent that would be complemented by several regional offices that would serve as hubs throughout AFRICOM's area of responsibility (see figure 1). According to DOD officials, having a command presence in Africa would provide a better understanding of the regional environment and African needs; help build relationships with African partners, regional economic communities, and associated standby forces; and add a regional dimension to U.S. security assistance. However, after conducting extensive travel throughout Africa to identify appropriate locations and meet with key officials in prospective nations, DOD concluded that it was not feasible to locate AFRICOM's headquarters in Africa, for several reasons. First, State officials who were involved in DOD's early planning teams for AFRICOM voiced concerns over the command's headquarters location and the means by which the AFRICOM commander and the Department of State would exercise their respective authorities. Specifically, DOD and State officials said that State was not comfortable with DOD's concept of regional offices because those offices would not be operating under the Ambassador's Chief of Mission authority. Second, African nations expressed concerns about the United States exerting greater influence on the continent, as well as the potential increase in U.S. military troops in the region. Third, since many of the African countries that were being considered for headquarters and regional office locations did not have existing infrastructure or the resources to support them, DOD officials concluded that locating AFRICOM headquarters in Africa would require extensive investments and military construction in order to provide appropriate levels of force protection and quality of life for assigned personnel. Officials were also concerned that if the headquarters were located in Africa, assigned personnel would not be able to have dependents accompany them because of limited resources and quality-of-life issues.
In 2008, the Office of the Secretary of Defense's Office of Program Analysis and Evaluation (the predecessor to CAPE) conducted an analysis that considered other locations in Europe as well as in the United States for the permanent location of AFRICOM headquarters. It compared economic and operational factors associated with each of the locations and concluded that all of the locations considered were operationally feasible. It also concluded that relocating the headquarters to the United States would result in significant savings for DOD. However, DOD officials decided to defer a decision on the permanent location for AFRICOM headquarters until 2012 in order to provide the combatant command with sufficient time to stabilize. In 2011, the Office of the Under Secretary of Defense for Policy and the Joint Staff conducted a study that considered alternatives to the current geographic combatant command structure that could enable the department to realize a goal of $900 million in cost reductions between fiscal years 2014 and 2017. As part of DOD's overall effort to reduce recurring overhead costs associated with maintaining multiple combatant commands, the study considered merging AFRICOM with either U.S. European Command (also located in Stuttgart, Germany) or U.S. Southern Command (located in Miami, Florida). The study concluded that these two options were neither "strategically prudent" nor "fiscally advantageous," stating that combining combatant commands would likely result in a diluted effort on key mission sets, and that the costs incurred by creating a single merged headquarters would offset the available cost reductions. The study additionally found that altering the existing geographic combatant command structure would result in cost reductions well below the targeted $900 million. Subsequently, DOD determined that it would need to identify other ways to realize its goal of finding savings from combatant commands, and the department changed the timeframe to fiscal years 2014 through 2018. According to Joint Staff officials, DOD would seek to accomplish this goal by reducing funding in the President's budget request for fiscal year 2014 across all the geographic and functional combatant commands by approximately $881 million for fiscal years 2014 through 2018. These officials stated that, to realize these savings, the department would reduce the number of civilian positions at the combatant commands and Joint Staff by approximately 400 through fiscal year 2018, but they provided few specifics. See figure 2 for a timeline of the courses of action DOD considered. In January 2013, the Secretary of Defense decided to keep AFRICOM's headquarters in Stuttgart, Germany. This decision was made following the completion of an analysis directed by the House Armed Services Committee in 2011 and conducted by the CAPE office. The purpose of the CAPE study was to present the strategic and operational impacts, as well as the costs and benefits, associated with moving AFRICOM headquarters from its current location to the United States. DOD considered two options for the basing of AFRICOM headquarters: (1) maintain AFRICOM's current location in Stuttgart, Germany, or (2) relocate AFRICOM headquarters to the United States. However, the CAPE study also included a mitigation plan to address strategic and operational concerns identified by leadership as factors to consider in the event that AFRICOM were relocated to the United States.
The main findings of the DOD study were as follows:

- The annual recurring cost of maintaining a U.S.-based headquarters would be $60 million to $70 million less than the cost of operating the headquarters in Stuttgart.
- The break-even point to recover one-time relocation costs to the United States would be reached between 2 and 6 years after relocation, depending on the costs to establish facilities in the United States.
- Relocating AFRICOM to the continental United States could create up to 4,300 additional jobs, with an annual impact on the local economy ranging from $350 million to $450 million.

The study stated that the AFRICOM commander had identified access to the area of responsibility and to the service component commands as critical operational concerns. The study also presented an option showing how operational concerns could be mitigated by basing some personnel forward in the region. However, it stated that the commander had judged that the command would be less effective if the headquarters were placed in the United States. In January 2013, Secretary of Defense Leon Panetta wrote to congressional leaders notifying them of his decision to retain AFRICOM in Stuttgart. In the letter, the Secretary cited the judgment of the AFRICOM commander about operational effectiveness as a rationale for retaining the command in its current location. DOD's decision to keep AFRICOM headquarters in Stuttgart was made following the issuance of CAPE's 2012 study, although the extent to which DOD officials considered the study when making the decision is unclear. The decision, however, was not supported by a well-documented economic analysis that balances the operational and cost benefits of the options available to DOD. Specifically, the CAPE study does not conform to key principles GAO has derived from a variety of cost estimating, economic analysis, and budgeting guidance documents, in that (1) it is not well documented, and (2) it does not fully explain why the operational benefits of keeping the headquarters in Stuttgart outweigh the benefit of potentially saving millions of dollars per year and bringing thousands of jobs to the United States. According to key principles GAO has derived from cost estimating, economic analysis, and budgeting guidance, a high-quality and reliable cost estimate or economic analysis is, among other things, comprehensive and well documented. Additionally, DOD Instruction 7041.3, Economic Analysis for Decisionmaking, which CAPE officials acknowledged using to inform their analysis, states that an economic analysis is a systematic approach to the problem of choosing the best method of allocating scarce resources to achieve a given objective. The instruction further states that the results of the economic analysis, including all calculations and sources of data, must be documented down to the most basic inputs to provide an auditable and stand-alone document. The instruction also states that the costs and benefits associated with each alternative under consideration should be quantified whenever possible. When this is not possible, the analyst should still attempt to document significant qualitative costs and benefits and, at a minimum, discuss these costs and benefits in narrative format.
CAPE officials agreed that DOD Instruction 7041.3 provides reasonable principles to apply in conducting a cost analysis, but officials stated that, as the independent analytic organization for the department, CAPE reserves the right to conduct analysis as it deems appropriate to inform specific decisions. In April 2013, after the decision had been made to maintain AFRICOM headquarters in Stuttgart, Secretary of Defense Chuck Hagel called on DOD to challenge all past assumptions in order to seek cost savings and efficiencies in “a time of unprecedented shifts in the world order, new global challenges, and deep global fiscal uncertainty,” to explore the full range of options for implementing U.S. national security strategy, and to “put everything on the table.” In particular, the Secretary stated that the size and shape of the military forces should constantly be reassessed. He stated that this reassessment should include determining the most appropriate balance between forward-stationed, rotationally deployed, and home-based forces. CAPE’s 2012 report describes strategic and operational factors that were considered when determining whether to place AFRICOM headquarters in the United States or keep it in its present location, and it includes estimates of annual recurring and one-time costs associated with each option. However, the analysis does not include enough narrative explanation to allow an independent third party to fully evaluate its methodology. Further, in our follow-up discussions, CAPE officials could not provide us with sufficient documentation for us to determine how they had developed their list of strategic and operational benefits or calculated cost savings and other economic benefits. CAPE officials told us that they did not have documentation to show how raw source data had been analyzed and compiled for the report. The CAPE report, entitled “U.S. Africa Command Basing Alternatives,” dated October 2012, consists of 28 pages of briefing slides. It includes a discussion of the study’s assumptions and methodology, along with the one-time and recurring costs of each option. The report presents a table summarizing the strategic and operational factors that were considered when determining whether to retain AFRICOM’s headquarters in Stuttgart or move it to the United States. The table indicates that the most critical factors for a combatant command headquarters are for it to have access to its area of responsibility, partners, and organizations, as well as to have access to service components and forces. Working groups of DOD officials had compiled a list of factors considered important for a combatant command and had selected the factors they considered “critical.” The list included access to the Pentagon, interagency partners, analytic intelligence capabilities, and European partners, including the North Atlantic Treaty Organization (NATO); ability to recruit and retain civilian personnel, embed personnel from other agencies, and leverage U.S.-based non-governmental organizations; and ability to operate independently without the need for agreement from a host country. However, the CAPE report contains limited explanation of how these factors were developed or why access to Africa and proximity to its service component commands were judged to be most critical. 
In follow-up discussions, CAPE officials told us that when they began their study they formed working groups to compile an authoritative list of strategic and operational factors critical to the operation of a combatant command headquarters, and that the groups independently developed similar factors, thereby verifying the comprehensiveness of the list and its relevance. However, CAPE officials provided no documentation of the meetings of these groups, the sources used to develop the factors, or the process used to arrive at a consensus in ranking the factors in terms of their criticality. According to CAPE officials, the reason they did not develop such documentation is that they viewed the study as a straightforward analysis intended to be easily digestible for its policymaker audience. CAPE officials told us that if they had anticipated an outside review of the study and its analysis, they would have documented the study differently. We therefore could not evaluate the methodology used in developing or ranking the operational and strategic factors presented in the CAPE study. Such an explanation is important, however, since operational and strategic factors were judged to outweigh cost savings and other economic benefits. Also, while proximity to Africa and to service component commands were ranked as the most important criteria for determining where to place the headquarters, some of the service components that were created to support the establishment of AFRICOM were originally located in Europe so that they would be close to the command headquarters.

For similar reasons, we were not able to determine the comprehensiveness, accuracy, or credibility of CAPE's cost estimates. The report itself does not provide sufficient explanation of how the costs were calculated or the effect of the various assumptions on the estimated costs for us to assess the estimates. Specifically, the report does not provide the sources of the cost estimates or the methodology used in calculating them. In follow-up discussions, CAPE officials explained that support for their calculations included e-mails and phone calls. Finally, the study presented estimates of the economic benefits that could accrue to a local community if the command were relocated to the continental United States, but it is unclear how these estimates were factored into the Secretary of Defense's decision.

In discussing the costs of the alternatives, the CAPE study presents a summary of one-time costs, including construction and the transfer of personnel and materiel. The study states that relocating AFRICOM to the continental United States may create up to 4,300 jobs (in addition to those of AFRICOM personnel), with a $350 million to $450 million a year impact on the local economy. However, the study does not explain how these possible savings were calculated, and CAPE officials could not explain how this analysis had been factored into the Secretary of Defense's decision. CAPE's analysis estimated the annual cost of providing AFRICOM personnel with overseas housing and cost-of-living pay at $81 million, compared with the $19 million to $25 million these benefits would cost if the personnel were located in the United States. These costs associated with stationing military and civilian personnel overseas account for the bulk of the savings in CAPE's analysis.
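The housing and cost-of-living figures above can be checked against the study's reported annual savings with simple arithmetic. A minimal sketch follows, using only the figures from CAPE's analysis as reported above (all in millions of dollars per year):

    # Rough consistency check: the overseas housing and cost-of-living
    # component alone accounts for most of the study's $60-70 million
    # estimate of annual recurring savings.
    overseas_pay = 81          # $ millions per year, CAPE estimate
    us_low, us_high = 19, 25   # $ millions per year, U.S. equivalent range
    savings_low = overseas_pay - us_high
    savings_high = overseas_pay - us_low
    print(savings_low, savings_high)   # 56 62 -> $56-62 million per year

Under this check, the single component implies roughly $56 million to $62 million per year in savings, most of the $60 million to $70 million total the study reported.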
Although CAPE officials did not provide us with documentation that would allow us to assess the accuracy and completeness of their cost estimates, the estimates are comparable with those developed in OSD's 2008 analysis. Moreover, our analysis confirmed that savings would be likely for both military and civilian personnel if the headquarters were located in the United States. For example, our analysis indicates that, conservatively, DOD could save from $5 million to $15 million per year overall on reduced housing allowances for military personnel, depending on where in the United States they were located. In addition, an AFRICOM document states that the command spent more than $30 million in fiscal year 2011 on overseas housing benefits for civilian personnel, which they would not receive if they were stationed in the United States.

For its 2012 study, DOD tasked CAPE with analyzing two options—keeping AFRICOM's headquarters in Stuttgart or moving it to a generic location in one of the four U.S. time zones. CAPE analysts also considered establishing a forward operating headquarters so as to allay concerns about a diminished forward presence if AFRICOM headquarters were located in the United States. In CAPE's scenario, the forward headquarters would be staffed with about 25 personnel but would be rapidly expandable. The scenario would also place an additional 20 personnel in existing component command headquarters. CAPE officials estimated that the annual recurring costs for the forward-deployed element would be $13 million, with a one-time cost of $8 million. CAPE added these estimates to its overall estimate of how much it would cost to move AFRICOM headquarters to the United States. In CAPE's summary of its findings, however, there is no discussion of how this factored into the commander's conclusion when he stated his preference, or of how the CAPE study had factored into the Secretary of Defense's final decision.

A U.S.-based headquarters with forward locations is the model under which the U.S. Central Command and U.S. Southern Command operate from their respective headquarters in Tampa, Florida, and Miami, Florida. The Central Command, for example, has a forward operating location in Qatar, and the Southern Command has forward locations in Honduras and El Salvador. AFRICOM already has a command element at a forward location—Combined Joint Task Force - Horn of Africa. According to Task Force officials, there are about 1,800 personnel temporarily assigned to this site at Camp Lemonnier, Djibouti. In 2012, the Navy submitted a master plan to Congress listing $1.4 billion in planned improvements to that site.

When we asked AFRICOM staff about the specific operational benefits of having its headquarters located in Stuttgart, they cited the following: (1) it takes less time to travel to Africa from Stuttgart than it would from the United States; (2) it is easier to interact with partners in Africa from Stuttgart because they are in the same or similar time zones; and (3) it is easier to interact with AFRICOM's service components because they all are in Europe, and because the U.S. European Command headquarters is also in Stuttgart. An AFRICOM briefing, however, indicated that the strategic risk of relocating the headquarters to the United States would be "minimal," and also stated that establishing a forward headquarters could mitigate strategic and operational risks.
CAPE officials also stated that maintaining AFRICOM's headquarters in Stuttgart makes it easier for AFRICOM to share resources at the service component level with the U.S. European Command, and that AFRICOM's sharing service components with the European Command makes it unique among the combatant commands. During our site visits, however, European Command officials told us that the two commands do not share personnel, even though two of the components are dual-hatted.

In its analysis, CAPE calculated the likely increase in hours that would be spent in traveling from the headquarters location to Africa if the headquarters were relocated to the United States. CAPE also estimated that if AFRICOM headquarters were relocated to the United States, the number of trips to Africa would likely remain the same. We believe that the number of trips to the United States would decrease. However, CAPE did not analyze travel patterns by individual AFRICOM staff. Our interviews with AFRICOM officials and our review of travel patterns of AFRICOM staff indicate that being closer to Africa may offer few benefits for many personnel. For example, according to AFRICOM officials, 70 percent of AFRICOM staff travel infrequently. As a result, these staff could be relocated to the United States without negative effects. This is because the AFRICOM staff includes many support personnel—accountants, personnel specialists, information technology experts, and planners, among other staff—who do their jobs primarily at the headquarters. (Appendix I shows a detailed breakdown of AFRICOM staff by mission area.)

In addition, our independent analysis found that about 60 percent of AFRICOM headquarters staff's travel in fiscal years 2010 and 2011 was to locations in the United States or within Europe. In fiscal year 2011, for example, AFRICOM spent $4.8 million on travel to the United States and $3.9 million on travel to other locations in Europe, while it spent about $5.2 million on travel to Africa (see figure 3). AFRICOM officials told us that travel to other parts of Europe includes trips to Berlin to obtain visas and passports, as well as to planning meetings with its components and other partners. If AFRICOM headquarters were to be relocated to the United States, the costs associated with travel to U.S. locations would likely be reduced. While some costs for official travel throughout Europe could increase, the travel that involves administrative tasks such as obtaining visas would be eliminated. In fiscal year 2011, this intra-European travel consumed almost one-third of all AFRICOM travel expenditures.

Moreover, the view that AFRICOM could perform its mission from the United States is supportable, in part, because other combatant commands have operated successfully with a U.S.-based headquarters. During our review, we met with U.S. Central Command and U.S. Southern Command officials to understand the extent to which their headquarters location in the United States affects them operationally. Officials expressed various opinions regarding the benefits of stationing personnel forward, and added that they are able to address time-zone and travel challenges. Central Command officials also explained that they manage partner relationships (including with NATO partners), overcome time-zone challenges, and travel to remote locations in their area of responsibility from their headquarters location in Tampa, Florida.
They also stated that although they can quickly relocate personnel to a forward location in Qatar when needed, most of the headquarters staff does not need to be physically located in their area of responsibility in order to carry out their functions. A U.S. Southern Command official told us that they use video teleconferences with the components when they need to communicate with them. He also told us that the command has a forward presence in Honduras and in El Salvador.

Neither the CAPE study nor the letter accompanying it when it was transmitted to Congress in January 2013 provides a complete explanation of why DOD decided that the operational benefits associated with remaining in Stuttgart outweigh the associated costs. Past studies conducted or commissioned by DOD, however, suggest that a more thorough approach to analyzing costs and benefits is possible. For example, unlike the 2012 analysis, DOD's 2008 analysis of potential AFRICOM locations ranked each location according to how it fared against cost and operational factors. While the analysis made no recommendation and stated that Germany was superior to all of the considered U.S. locations based on factors other than cost, it concluded that any of the examined locations would be an operationally feasible choice, and that U.S. locations were routinely and significantly cheaper to maintain than overseas bases.

Moreover, a 1994 study was initiated by the U.S. Southern Command and validated by a committee appointed by the Deputy Secretary of Defense to review and refine the analysis. The committee included the Assistant Secretary of Defense for Strategy and Requirements, the Principal Deputy Comptroller, and the Director for Strategic Plans and Policy, Joint Staff. The committee's final report quantified and prioritized operational benefits to determine where in the United States to place the U.S. Southern Command headquarters when it was required to move from Panama. Although this study did not consider overseas locations and assumed that remaining in Panama was not an option, it nevertheless stands as an example of a more transparent approach to weighing costs and operational concerns. This study examined 126 sites in the United States and then narrowed the possibilities based on criteria that addressed the mission and quality of life for assigned personnel. The names of the locations under consideration were "masked" to ensure that the criteria were applied objectively. As a result, six locations were chosen as most desirable: Tampa, Atlanta, New Orleans, Miami, Puerto Rico, and Washington, D.C. Visits were made to each of the locations, and the final tallying of scores, including consideration of costs, showed that Miami was the preferred choice. The committee expanded the analysis through additional evaluation of Southern Command's mission requirements and quality of life issues. Once its analysis was complete, the committee briefed the Deputy Secretary of Defense on its findings and conclusions based on three criteria: mission effectiveness, quality of life, and cost. In summary, the committee stated that if mission effectiveness was the most important of the three criteria, then Miami was clearly the superior location. If quality of life was the most important, then Washington was the leading candidate. If cost was the most important consideration, then New Orleans was the leading candidate.
The committee’s recommendation was for the Secretary of Defense and the Deputy Secretary of Defense to select the final Southern Command relocation site from among those three candidate cities. Finally, a 2013 RAND study conducted in response to a congressional requirement for DOD to commission an independent assessment of the overseas basing presence of U.S. forces provides several examples of principles that can be used to determine where to geographically place personnel so that they can most effectively be employed. For example, the study states that, because basing personnel in overseas locations is generally more expensive than basing them in the United States, DOD could consider configuring its forward-based forces overseas so that they can provide the initial response to a conflict, while placing in the United States the forces that will provide follow-up support. To inform the assessment of overseas forces, RAND examined how overseas posture translates to benefits, the risks that it poses, the cost of maintaining it, and how these costs would likely change if the U.S. overseas presence were to be modified in different ways—for example, by changing from a permanent to a rotational presence. DOD’s letter describing the January 2013 decision to maintain the command in Stuttgart was based on operational benefits that are not clearly laid out, and it is unclear how cost savings and economic benefits were considered in the decision. DOD’s analysis stated that significant savings and economic benefits would result if the command were relocated to the United States, and our independent analyses confirmed that significant savings are possible. Moreover, the decision does not explain why using a small contingent of personnel stationed forward would not mitigate operational concerns. Our analysis of travel patterns and staff composition raises questions about why the AFRICOM staff needs to be located overseas, because not all staff would benefit from being closer to Africa—especially when other combatant commands operate with their headquarters in the United States. Key principles that GAO has derived for economic analysis and cost estimating, as well as a DOD instruction containing principles for certain types of economic analysis, suggest that the department’s rationale should be detailed and the study underpinning it should be comprehensive and well-documented. Since making the decision to keep AFRICOM’s headquarters in Stuttgart, the Department of Defense has sought to fundamentally rethink how the department does business in an era of increasingly constrained fiscal resources. Until the costs and benefits of maintaining AFRICOM in Germany are specified and weighed against the costs and economic benefits of moving the command, the department may be missing an opportunity to accomplish its missions successfully at a significantly lower cost. To enable the department to meet its Africa-related missions at the least cost, GAO recommends that the Secretary of Defense conduct a more comprehensive and well-documented analysis of options for the permanent placement of the headquarters for AFRICOM, including documentation as to whether the operational benefits of each option outweigh the costs. These options should include placing some AFRICOM headquarters personnel in forward locations, while moving others to the United States. 
In conducting this assessment, the Secretary should follow key principles GAO has derived for such studies, as well as principles found in DOD Instruction 7041.3, to help ensure that the results are comprehensive, well-documented, accurate, and credible. Should DOD determine that maintaining a location in Stuttgart is the best course of action, the Secretary of Defense should provide a detailed description of why the operational or other benefits outweigh the costs and benefits of relocating the command.

In written comments on a draft of this report, DOD stated that the 2012 CAPE study met the requirements of the House Armed Services Committee report accompanying the National Defense Authorization Act for Fiscal Year 2012. DOD stated that the CAPE study was not intended to be a comprehensive analysis to determine the optimal location for AFRICOM's headquarters. Rather, DOD believed that the study provided sufficient detail to support the specific questions posed in the National Defense Authorization Act. While the CAPE office did present the estimated costs of relocating AFRICOM's headquarters, the National Defense Authorization Act directing DOD to conduct this study specifically urged DOD to conduct this basing review "in an open and transparent manner consistent with the processes established for such a major review." As we state in the body of our report, the CAPE study did not provide sufficient detail to support its methodology and cost estimates for a third party to validate the study's findings. Moreover, DOD's own guidance on conducting an economic analysis states that such an analysis should be transparent and serve as a stand-alone document. DOD also stated that Secretary Panetta's decision not to relocate the AFRICOM headquarters to the United States was based largely on the combatant commanders' military judgment, which is not easily quantifiable. We recognize that military judgment is not easily quantifiable. However, we continue to believe that an accurate and reliable analysis should provide a more complete explanation of how operational benefits and costs were weighed, especially in light of the potential cost savings that DOD is deciding to forgo.

DOD partially concurred with our recommendation. DOD stated that to meet the requirements of the Budget Control Act, the Department of Defense will consider a wide range of options. If any of these options require additional analysis of the location of AFRICOM headquarters, DOD said that it will conduct a comprehensive and well-documented analysis. We continue to believe that such an analysis is needed. Because of the current tight fiscal climate and the Secretary of Defense's continual urging that DOD identify additional opportunities for achieving efficiencies and cost savings, DOD should reassess the option of relocating AFRICOM's headquarters to the United States. The department's written comments are reprinted in appendix II.

We are sending copies of this report to the Secretary of Defense and the Secretary of State. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3489 or at pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
In addition to the contact named above, Guy LoFaro, Assistant Director; Nicole Harris; Charles Perdue; Carol Petersen; Beverly Schladt; Mike Shaughnessy; Amie Steele; Grant Sutton; and Cheryl Weissman made major contributions to this report.

A House Armed Services Committee report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2013 mandated GAO to conduct an analysis of options for the permanent placement of AFRICOM headquarters. While GAO's work was ongoing, DOD announced its decision to keep AFRICOM's headquarters at its current location in Stuttgart, Germany. This report addresses the following questions: (1) What courses of action did DOD consider for the permanent placement of AFRICOM headquarters? and (2) To what extent was DOD's decision to keep AFRICOM headquarters in Stuttgart based on a well-documented analysis of the costs and benefits of the options available to DOD? To meet these objectives, GAO analyzed documents provided by and interviewed officials from the Office of the Secretary of Defense; the Joint Staff; and AFRICOM and other combatant commands.

The Department of Defense (DOD) has considered several courses of action for the placement of the headquarters for U.S. Africa Command (AFRICOM) but decided in early 2013 to keep it in Germany. When AFRICOM was created in 2007, DOD temporarily located its headquarters in Stuttgart, Germany, with the intent of selecting a permanent location at a later date. DOD's initial goal was to locate the headquarters in Africa, but this was later abandoned in part because of significant projected costs and sensitivities on the part of African countries. Subsequently, in 2008, DOD conducted an analysis that found that several locations in Europe and the United States would be operationally feasible and less expensive than keeping the headquarters in Stuttgart. A final decision, however, was deferred until 2012, when the Cost Assessment and Program Evaluation office completed its analysis. Subsequent to this analysis, in January 2013, the Secretary of Defense decided to keep AFRICOM's headquarters in Stuttgart. In announcing the decision, the Secretary noted that keeping AFRICOM in Germany would cost more than moving it to the United States, but the commander had judged it would be more operationally effective from its current location, given shared resources with the U.S. European Command.

GAO's review of DOD's decision to keep AFRICOM headquarters in Germany found that it was not supported by a comprehensive and well-documented analysis that balanced the operational and cost benefits of the options available to DOD. The 2012 study that accompanied the decision does not fully meet key principles for an economic analysis. For example, the study is not well-documented and does not fully explain the decisions that were made. Although details supporting DOD's cost estimates were not well-documented, the analysis indicated that moving the headquarters to the United States would accrue savings of $60 million to $70 million per year. The 2012 study also estimated that relocating the headquarters to the United States could create up to 4,300 additional jobs, with an annual impact on the local economy ranging from $350 million to $450 million, but it is not clear how this factored into DOD's decision. Beyond costs and economic benefits, the study lists several factors to be considered when determining where to place a headquarters.
It ranks two of these factors—access to the area of responsibility and to service components—as critical. However, little support exists showing how the factors were weighted relative to each other. Moreover, the study describes how a small, forward-deployed headquarters element such as the ones employed by other U.S.-based combatant commands might mitigate operational concerns, but the study is silent about why this mitigation plan was not deemed a satisfactory option. In discussions with GAO, officials from the Central and Southern Commands stated that they had successfully overcome negative effects of having a headquarters in the United States by maintaining a forward presence in their theaters. In sum, neither the analysis nor the letter announcing the decision to retain AFRICOM headquarters in Stuttgart explains why these operational factors outweighed the cost savings and economic benefits associated with moving the headquarters to the United States. Until the costs and benefits of maintaining AFRICOM in Germany are specified and weighed against the costs and benefits of relocating the command, the department may be missing an opportunity to accomplish its missions successfully at a lower cost.

To meet operational needs at lower costs, GAO recommends that DOD conduct a more comprehensive and well-documented analysis of options for the permanent placement of the headquarters for AFRICOM, including documentation on whether the operational benefits of each option outweigh the costs. DOD partially concurred with GAO's recommendation, stating that the decision was based primarily on military judgment but that it will perform additional analysis of the location of the headquarters if the Secretary deems it necessary. GAO continues to believe such analysis is needed.
Basel II rests on the New Basel Accord, which established a more risk-sensitive regulatory framework that was intended to be sufficiently consistent internationally but that also took into account individual countries' existing regulatory and accounting systems. The U.S. bank regulators have been adapting the New Basel Accord for use by U.S. banks. In the United States, the four federal bank regulators oversee the implementation of Basel II for banks, and SEC oversees the implementation of Basel capital rules for investment firms.

The financial institutions that will be involved in the implementation of Basel II are organized as bank holding companies, thrift holding companies, or consolidated supervised entities (CSE). At a consolidated level, the Federal Reserve supervises bank holding companies that are subject to Basel capital requirements, OTS supervises thrift holding companies that are not subject to Basel capital requirements, and SEC supervises CSEs that voluntarily choose to be subject to consolidated oversight, including Basel capital reporting requirements. Each of these types of holding companies has subsidiaries that are depository institutions that could be required to adopt Basel II. Each of these banking institutions is regulated by a primary federal regulator according to the rules under which it is chartered. FDIC serves as the primary federal regulator of state chartered banks that are not members of the Federal Reserve System (state nonmember banks). It is also the deposit insurer for all banks and thrifts and has backup supervisory authority for all banks it insures. The Federal Reserve serves as the primary federal regulator for state chartered banks that are members of the Federal Reserve System (state member banks). OCC serves as the primary federal regulator for national (i.e., federally chartered) banks. Many of the nation's largest banks are federally chartered. OTS serves as the primary federal regulator for all federally insured thrifts. Under the dual federal and state banking system, state chartered banks are supervised by state regulatory agencies in addition to a primary federal regulator.

In 2004, SEC established a voluntary, alternative net capital rule for broker-dealers whose ultimate holding company consents to groupwide supervision by SEC as a CSE. This alternative net capital rule permits the use of statistical models for regulatory capital purposes. At the holding company level, CSEs are required to compute and report to SEC capital adequacy measures consistent with the standards in the Basel Accord, and SEC expects them to maintain certain capital ratios, though they are not required to do so. According to SEC, all CSEs have implemented Basel II. Primary U.S. broker-dealers affiliated with CSEs are required to comply with a capital requirement that SEC says is not identical to the Basel standards but makes use of statistical models in its computation. Depository institutions within the CSEs are subject to the same requirements as other banks of similar sizes and exposures, including risk-based capital requirements, the leverage ratio, and PCA; however, there is no leverage requirement at the consolidated level for CSEs.

Core banks face a range of competitors, including non-core U.S. banks, other financial institutions, and foreign-based banks.
Core banks that have varying business models—some focus on domestic retail banking activities, some on wholesale activities, and others are engaged in the full range of these activities—are overseen by a number of different bank regulators. Banks of different sizes that are likely to be under different capital regimes are more likely to compete with each other in retail markets, where they offer products such as residential mortgages to the same customers, than in wholesale markets. In certain wholesale markets, core banks often compete with U.S. investment firms. U.S.-based core banks also compete with foreign-based banks in foreign markets and in U.S. markets where foreign-based banks are very active. Since core banks compete with other financial institutions across various product and geographic markets, differences in capital rules or the implementation of those rules may have competitive effects by influencing such things as the amount of capital institutions hold, how banks price loans, and the cost of implementing capital regulations.

Core banking organizations—those that meet the requirements in terms of asset size and foreign exposure for mandatory adoption of the Basel II advanced approaches—have adopted a variety of business models, but all compete with some other core banks. Some of the core banks are active in retail markets, some in wholesale markets, and some in the full range of banking activities. As illustrated in table 1, which is based on publicly available information, five core banking organizations—including one that is foreign-based—have at least 25 percent of their assets in retail markets and one of these, the only thrift that is a core banking organization (Washington Mutual Bank), has more than 60 percent of its assets in retail markets, while a few institutions have almost no activity in these markets. In addition, two core banks that appear less active in retail markets—with about 15 percent of their assets in these markets—may still have a major presence there because of their overall size. In wholesale markets, table 1 shows that some banks are active in making commercial and industrial loans while others hold a larger percentage of their assets as trading assets—assets held to hedge risks or speculate on price changes for the bank or its customers. However, the thrift institution has very little activity in these markets. The three smaller U.S.-based core banks, which are classified as core banks because they have large foreign exposures, engage primarily in custodial activities where they manage the funds of their clients. In this area, they compete with the largest U.S. banks that are also engaged in these activities.

Core banks are in some ways similar to non-core banks. For example, banks of all sizes continue to participate in some activities historically associated with banking, such as taking deposits and making loans. As table 2 shows, bank holding companies of different sizes hold similar proportions of certain loans such as residential mortgages and commercial and industrial loans. According to research conducted by Federal Reserve staff and other experts, banks of different sizes compete with each other for retail products such as residential mortgages. As illustrated in table 2, bank holding companies in all size ranges hold a relatively large percentage of their assets—from 15.5 to 23.1 percent—in residential mortgages.
Customers can obtain mortgages from banks across the United States and generally can obtain pricing information from brokers or directly through the Internet or financial publications. For small thrifts, which make up a portion of the small non-core banking institutions in the United States but are not included in table 2, the proportion of mortgages is much higher. Unlike residential mortgages, only a few banks, including several core banks, are active in the credit card market, but some non-core banks are active in this market as well, and all credit card issuers generally compete for the same customers.

For wholesale products, the competitive landscape is more complex. As table 2 illustrates, in some areas core banks differ substantially from non-core banks and are thus not likely to compete with them in those markets. For example, non-core banks hold a very small percentage of their assets as trading assets, an area where some core banks are very active, and core banks hold a relatively small proportion of their assets in commercial real estate, an area where non-core banks are very active. While table 2 shows that core and non-core banks are both active in the commercial and industrial loan markets, the market for loans from large banks may be quite different from that for smaller banks. According to a bank official and other experts, larger banks do not price commercial and industrial loans individually; instead, these loans generally are part of a package of products and services offered to major corporate clients. Financial market experts told us that often these loans are discounted to establish a relationship with the customer. Because smaller banks do not offer a full range of products and services, they likely are not competing for the same customers as larger banks. In addition, we and others have shown that smaller banks tend to serve the needs of smaller businesses with which they can establish a personal relationship. Because obtaining credit information on small businesses is difficult, community banks often have an advantage with these customers in that they may have better information about small businesses in their local market than do large national or internationally active banks. As a result, the largest banks are unlikely to be competing with community banks in these markets. At the same time, research conducted by Federal Reserve staff has shown that large non-core banks may compete with core banks for corporate customers.

Core banks are much more likely than smaller or regional non-core banks to participate in activities often associated with investment banking. For example, core banks are much more likely to hold trading assets that typically are used to hedge risks or speculate on certain market changes either for the banking organization or its customers (see table 2). In addition, core banks are involved in international activities where they often provide investment banking products and services in the major capital markets around the world. In the United States and abroad, U.S.-based core banks, especially Citigroup and JPMorgan Chase, compete with the four major U.S. investment firms—Goldman Sachs, Merrill Lynch, Morgan Stanley, and Lehman Brothers. The core banks also are involved in custodial and asset management activities domestically and internationally.
In this capacity, core U.S.-based banks compete with foreign-based banks, with investment firms, and with asset management firms that do not own depository institutions and are not subject to regulatory capital requirements.

Basel capital requirements were established, in large part, to limit competitive advantages or disadvantages due to differences in capital requirements across countries; however, the New Basel Accord allows for certain areas of national discretion, and this could create competitive advantages or disadvantages for banks competing in various countries. In addition, because a major part of Basel II involves direct supervision of the risk management processes of individual banks, further opportunities exist for differences across countries to develop as the new rules are implemented. While all but one of the core banks has some foreign exposure, some of the nine U.S.-based core banks have foreign exposures that are large relative to the size of the institution (see fig. 2). As noted above, most of these banks are engaged in asset management and investment banking activities globally. In addition, one of the banks is heavily engaged in retail banking activities in a wide range of countries where each country likely comprises a separate market. To the extent that U.S.-based banking institutions that have subsidiaries in foreign countries face more stringent capital requirements for the parent institution at home, U.S.-based banks could be disadvantaged in foreign markets.

Much of the competition between U.S.- and foreign-based banks takes place in the United States, where foreign-based banks are very active through their subsidiaries, branches, and offices. Foreign-based banks account for about $2.8 trillion of the approximately $15 trillion of U.S. banking assets, and subsidiaries of those banks account for 11 of the 50 largest U.S. bank holding companies. Further, as noted in table 1, three of the core banks in the United States are subsidiaries of foreign-based banks. Two of these operate primarily in wholesale markets, while the third, HSBC, is active in both retail and wholesale banking markets in the United States. In addition, some large U.S. non-core banks that are subsidiaries of foreign-based banks are likely to adopt the advanced approaches in the United States. The extent to which differences in capital requirements will affect competition in the United States between U.S.-based and foreign-based banks will depend, in part, on how the U.S. activities of the foreign-based banks are organized. For capital purposes, although foreign-based banks with U.S. subsidiaries will likely follow the Basel II rules in their home countries, the U.S. subsidiaries are regulated as U.S. banks within the United States and will follow U.S. rules. However, branches of foreign banks are not required to meet the U.S. rules. As a result, some foreign-based banks that have substantial U.S. operations, but conduct their banking activities in the United States through branches, will be following the Basel II rules in their home country rather than in the United States.

Because holding capital is costly for banks, differences in regulatory capital requirements could influence costs, prices, and profitability for banks competing under different capital frameworks. If regulatory capital requirements increase the amount of capital banks hold relative to what they would hold in the absence of regulation, then the requirements would increase banks' costs and reduce their profitability.
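One way to see this cost channel is to treat a loan as funded partly with equity capital and partly with deposits or debt; because equity is the more expensive funding source, a higher capital requirement raises the bank's break-even lending rate. The sketch below is a minimal illustration, and every figure in it, including the capital ratios and funding costs, is an assumption rather than a value drawn from this report.

    # Illustrative only: a higher capital requirement raises the weighted
    # average cost of funding a loan, and thus the break-even loan rate.
    def breakeven_loan_rate(capital_ratio, cost_of_equity, cost_of_debt):
        # Loan funded with `capital_ratio` equity; the rest is debt/deposits.
        return capital_ratio * cost_of_equity + (1 - capital_ratio) * cost_of_debt

    low = breakeven_loan_rate(0.04, 0.12, 0.05)   # 4% capital, assumed costs
    high = breakeven_loan_rate(0.08, 0.12, 0.05)  # 8% capital, assumed costs
    print(f"{low:.2%} vs {high:.2%}")             # 5.28% vs 5.56%

Under these assumed figures, doubling the capital ratio adds roughly 28 basis points to the break-even rate.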
Depending on the structure of markets, these higher costs could be passed on to bank customers in the form of higher prices—interest rates on loans or fees for services—or absorbed as reduced lending and profits. For example, higher capital costs driven by higher capital requirements could result in a competitive disadvantage for banks that compete for similar customers with banks subject to different capital rules. Conversely, lower capital requirements that allow banks to reduce the capital they hold for a particular asset could allow them to price those assets more aggressively, thereby increasing market share or earning higher returns at existing prices.

Bank officials with whom we spoke and some empirical evidence we reviewed suggested that regulatory capital requirements are one of several key factors banks consider in deciding how much capital to hold. Other factors include management views on the amount of capital the firm needs internally and market expectations. These multiple and overlapping motivations for holding capital make it difficult to isolate the impact of regulatory capital on the amount of capital banks hold. Nevertheless, there is some evidence that banks hold more than the minimum required capital—a buffer—in part to reduce the risk of breaching that minimum requirement. For example, one study of United Kingdom banks found that an increase in required capital was followed by an increase in actual capital, although the increase was only about half the size of the increase in required capital. Thus, changes in minimum required capital could cause banks to change the amount of capital they hold to maintain a similar buffer of capital, consistent with the other goals of the bank. The study also found that banks with small buffers reacted more to a given change in individual capital requirements—and banks with larger buffers reacted little, if at all—supporting the view that the capital buffer is a form of insurance against falling below regulatory minimums.

Differences in the implementation costs of capital requirements also could have competitive effects. In principle, higher implementation costs could take the form of a one-time increase in costs or ongoing costs associated with compliance. One-time costs would influence profitability directly, while ongoing costs also could influence the cost of lending for banks in the same way that higher capital costs could influence pricing and profitability. Significant implementation costs are likely to be easier to bear the larger the institution—the costs of implementing regulation are on average higher (as measured by cost per employee) for smaller firms.

The possible effects of differences in regulatory capital requirements on implementation and capital costs also could influence incentives for consolidation by making acquisitions more or less advantageous for banks operating under different capital rules. Such advantages would imply that banks under a given capital regime might be able to use the capital resources of banks under a different regime more effectively, making it profitable for the former banks to acquire the latter. Conversely, if implementation costs for a capital regime imposed on larger banks were high, this might discourage some banks from merging because they would become large enough to be required to adopt a capital regime with high implementation costs.

The new U.S. capital rules address some competitive concerns of banks; however, other concerns remain.
Regulators analyzed some competitive issues raised by banks during the development of the Basel II rules in the United States. In the final rule for the advanced approaches, the regulators addressed concerns about differences between the NPR and the New Basel Accord that could have led to greater implementation costs. For example, in the final rule they harmonized the definition of wholesale loan default with the accord, thus responding to banks' concerns that differences in the definition of wholesale loan default between the NPR and the accord could have led to increased costs of operating in multiple countries. However, core banks remain concerned that the leverage requirement will affect their ability to compete with both foreign- and some U.S.-based competitors. The coordination between U.S. and foreign regulators on implementation issues for core banks may address some competitive concerns of internationally active core banks. For non-core banks, the proposed standardized approach rule may address some concerns—for example, that core banks could hold less capital for similar assets. The proposed rule is more risk sensitive than Basel I, providing non-core banks with the possibility of lower regulatory capital minimums for certain assets or activities. Other factors, such as the leverage requirement, may reduce differences in capital for banks competing in the United States.

As a result of the potential for large banks to hold less capital under Basel II, at least for certain assets, researchers, primarily at the Federal Reserve, conducted studies of the potential impact of Basel II on specific markets and on aspects of the rule, including the impact on residential mortgages, credit cards, operational risk, and mergers and acquisitions. These studies were limited by the availability of data and by a lack of information on the impact regulatory capital has on bank behavior. Nonetheless, the studies found that there could be competitive impacts in the residential mortgage market, and they helped lead to the development of alternatives to Basel I for non-core banks.

OCC and OTS provided the Office of Management and Budget (OMB) with regulatory impact analyses that included examination of the impact of the rules on domestic competition. In addressing competitive issues in this analysis, OCC relied primarily on the studies conducted at the Federal Reserve. In its regulatory impact analysis, OTS incorporated OCC's analysis, adding material specifically related to the thrift industry. For example, OTS noted that because thrifts have high concentrations of assets in residential mortgages, the leverage requirement would be more likely to impose greater capital requirements on these firms than would the Basel II requirements and, as a result, would have a negative impact on the ability of thrifts to compete with other banking organizations. OTS also pointed out that interest rate risk for those mortgage-related assets that a bank is planning to hold rather than trade is particularly important to thrifts. However, the adequacy of capital held for these risks is assessed under Pillar 2, while the risks associated with changes in interest rates on mortgage-related assets that are actively traded are assessed under Pillar 1. Since there is more regulatory flexibility in Pillar 2 than in Pillar 1, OTS expressed concern that thrifts could be disadvantaged if different regulatory agencies did not implement Pillar 2 consistently.
The regulators did less analysis regarding the international competitive impact of the new rules. At the time that the capital rules were being developed, OMB provided little guidance on analyzing the international impact of U.S. rules, and the agencies did not discuss international competition issues in their analyses. In contrast, European Union guidance for regulatory impact analyses includes a more detailed evaluation of impacts on international trade and investment, and OMB is considering including more explicit guidance on the analysis of the impact on international trade and investment in the United States.

During the development of Basel II, U.S. banks raised concerns about being disadvantaged internationally by certain aspects of the U.S. rules. Although regulators have harmonized some aspects of the advanced approaches final rule with the New Basel Accord, concerns remain about the differences retained in the final rule and about other issues, such as the leverage requirement, that could have competitive effects. The final rule removed an important technical difference in the definition of default for wholesale products that existed between the U.S. NPR and the New Basel Accord. However, other differences were retained, such as the U.S. implementation schedule and the amount by which regulatory capital could decrease during a bank's transition to the final rule. Core banks are specifically concerned that the leverage requirement will have negative effects on their ability to compete with CSEs and foreign-based banks.

U.S. banking regulators harmonized certain aspects of the U.S. final rule on the advanced approaches with the New Basel Accord, reducing some concerns of core banks. For example, one of the major concerns of U.S. core banks was that the proposed rule included a different definition of default for wholesale products, which could lead to increased implementation costs through the need to maintain separate systems for data in the United States and in those foreign countries where U.S. core banks were required to adopt Basel II. The definition of default for wholesale products in the final rule now closely resembles the New Basel Accord's definitions for these types of products, thus limiting the potential for higher implementation costs for core banks. Other technical differences that have been diminished for core banks include how core banks have to estimate their losses after a borrower has defaulted on a loan. Table 3 outlines several key technical differences between the earlier proposed U.S. rules and the New Basel Accord and highlights where U.S. regulators diminished or retained differences in the final rule.

One technical difference that remains between the U.S. final rule on advanced approaches and the New Basel Accord is the treatment of SME loans. U.S. regulators believe that an adjustment to lower the capital charge for such business loans is not substantiated by sufficient empirical evidence. In other words, this suggests that, all other things being equal, SME loans have risks comparable to those posed by larger corporate loans. U.S. regulators also noted that the SME treatment in the Accord might give rise to a domestic competitive inequity between core banks and banks subject to other regulatory capital rules, such as Basel I.
Officials at one rating agency with whom we spoke said that a lower capital requirement for SME loans in the New Basel Accord was not reflective of the risk for these exposures, and the rating agency did not treat these loans differently from other business loans in its own assessments of capital adequacy. In addition, several experts with whom we spoke noted that this difference in capital requirements for SME loans would likely not have any immediate or major impact on competition between U.S. and foreign banks.

In addition to the technical differences discussed above, the final rule addressed one concern related to a prudential safeguard U.S. regulators introduced in the 2006 NPR, but some core banks remain concerned about the implementation schedule. The NPR contained a benchmark—a 10 percent reduction in aggregate minimum capital among core banks—that would have been viewed as a material reduction in capital requirements warranting modification of the rule. Core banks had commented that this safeguard could affect them negatively because of the uncertainty surrounding its application. In the final rule, U.S. regulators eliminated the benchmark. However, retention of the implementation schedule proposed in the 2006 NPR continues to raise concerns for some core banks because it will lead to a longer transition period in the United States than in other countries and delay any possible capital reductions. European banks and most Canadian banks on the advanced approaches most likely will exit their transitional periods by January 2010. In contrast, U.S. core banks cannot exit their transitional periods before April 2012 and could do so in 2014 or later. Furthermore, European banks will be able to reduce capital to 90 percent of Basel I requirements in 2008 and to 80 percent of Basel I requirements in 2009, while Canadian banks will be able to apply for approval to reduce their capital by similar amounts under the same time frames. Under the final rule, U.S. core banks will have three distinct transitional periods during which required risk-based capital may be reduced to only 95 percent, 90 percent, and 85 percent of Basel I requirements, respectively. The different implementation schedules and maximum capital reductions may provide foreign competitors of U.S. core banks an earlier opportunity to make use of any decreases in capital costs associated with lower required capital for certain assets or activities. Therefore, by making the transition to Basel II lengthier for U.S. core banks, the schedule may allow foreign competitors to take better advantage of strategic opportunities, such as mergers or acquisitions. Though several core bank officials with whom we spoke remained concerned about the time difference, officials at one core bank explained that the current market environment may limit the competitive implications of that difference.
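Mechanically, each transitional floor described above caps how far risk-based capital can fall: the binding requirement is the advanced approaches amount or the floor percentage of the Basel I amount, whichever is greater. A minimal sketch follows; the 95, 90, and 85 percent floors come from the final rule as described above, while the capital amounts are assumptions for illustration only.

    # Transitional floor: required capital is the advanced approaches
    # amount, but no lower than floor_pct of the Basel I amount.
    def floored_capital(advanced_capital, basel1_capital, floor_pct):
        return max(advanced_capital, floor_pct * basel1_capital)

    basel1 = 100.0    # assumed Basel I requirement, $ millions
    advanced = 80.0   # assumed advanced approaches requirement, $ millions
    for floor in (0.95, 0.90, 0.85):
        print(floor, floored_capital(advanced, basel1, floor))
    # 0.95 -> 95.0, 0.90 -> 90.0, 0.85 -> 85.0: with these assumed
    # amounts, the floor binds in every transitional period.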
Several core bank officials with whom we spoke mentioned that they would have wanted the option to select the standardized approach, with some suggesting that the lack of a choice may lead to higher implementation costs. In the United States, the final rule requires all core banks to adopt the advanced approaches for both credit and operational risk, but affords opportunities for the primary federal supervisor to exercise some flexibility when applying the final rule to core banks. The advanced approaches rule specifically allows for the exemption of subsidiary depository institutions from implementing the advanced approaches, and, under the reservation of authority, the primary federal regulator can require a different risk-weighted asset amount for one or more credit risk exposures, or for operational risk, if the regulators determine that the requirement under the advanced approaches is not commensurate with risk. However, some U.S. regulatory officials with whom we spoke noted the potential risk of a piecemeal approach and emphasized that they do not want banks to apply the advanced approach for credit risk to their least risky portfolios and to apply Basel I or the proposed standardized approach to their riskier portfolios.

In contrast, some foreign banks have not been explicitly required to adopt the advanced approaches for credit and operational risk. For example, Canadian regulators told us that they expect their domestic banks with significant global operations to move to the advanced approach for credit risk but have no such expectation regarding the advanced approach for operational risk. Furthermore, all other banks in Canada can decide to adopt the advanced approaches, with the condition that a bank must adopt the advanced approach for credit risk before adopting the advanced approach for operational risk. In addition, regulatory officials from the United Kingdom told us that all banks were required to adopt the standardized approach in 2007, but some banks applied for a waiver to allow them to adopt the advanced approaches for determining capital requirements for credit risk or for operational risk. Moreover, officials from one European bank told us that they entered their first transitional year in their country with approximately three-quarters of their portfolios on the advanced approach for credit risk.

Officials from some of the core banks with whom we spoke expressed concerns that they may be at a competitive disadvantage due to the retention of the U.S. leverage requirement, which applies to all depository institutions and U.S.-based bank holding companies. Foreign banks based in other industrialized countries are generally not subject to a leverage requirement. Some U.S.-based core banks are concerned about the impact of the leverage requirement for bank holding companies on their operations abroad. That is, in meeting the leverage requirement, a U.S. bank holding company must include the assets of its foreign operations, potentially increasing the amount of required regulatory capital in comparison with the regulatory capital requirements for foreign-based bank holding companies. For example, the additional capital needed to meet the leverage requirement may exceed the additional capital required under the advanced approaches for certain corporate loans that are estimated by banks to be relatively low-risk, as demonstrated in figure 3. Most core bank officials with whom we spoke also said that by maintaining the leverage requirement, U.S. regulators were preserving a regulatory capital requirement that was not aligned with the improved risk-management practices promulgated by the final rule on the advanced approaches.
Officials from one trade association said that because the leverage requirement does not require additional capital as risk increases, banks may have an incentive to increase their return on equity by holding assets with higher risk and return but no additional capital required by the leverage requirement. In contrast, regulatory officials have stated that risk-based and leverage requirements serve complementary functions, in which the leverage requirement can be seen as offsetting potential weaknesses of, or supplementing, the risk-based capital requirements.

In terms of potential competitive effects domestically, some core bank officials with whom we spoke expressed concerns that certain financial firms, primarily the CSEs, offer similar wholesale products but lack similar regulatory capital requirements, while other core bank officials were no longer concerned. As noted previously, CSEs are required to compute and report to SEC capital adequacy measures consistent with the standards in the New Basel Accord, and SEC expects them to maintain certain capital ratios, though they are not required to do so. SEC has said that it will make modifications in light of the final rule adopted by U.S. bank regulators and subsequent interpretations. In addition, bank holding companies are subject to a leverage requirement, but CSEs do not have a similar requirement. For example, in December 2007, the leverage ratio for core bank holding companies ranged from about 4.0 percent to about 6.8 percent, while that for CSEs ranged from about 3 percent to 3.8 percent.

U.S. regulators and their foreign counterparts are coordinating in ways that contribute to reducing the potential for adverse competitive effects on U.S. banks operating abroad. These efforts aim to resolve some issues that develop between regulators in a bank's home country and those in other countries where the bank operates, usually referred to as home-host issues. Handling home-host issues is an essential element of the New Basel Accord framework because the framework allows for national discretion in a number of areas. Several foreign regulators with whom we spoke discussed how well U.S. regulators have been able to collaborate with their foreign counterparts on a variety of supervisory issues. Specific to Basel II implementation, U.S. regulators have been able to provide needed information to foreign bank supervisors that could limit the compliance costs of subsidiaries of U.S. banks operating abroad. For example, OCC examiners explained to us how they assisted a foreign regulator in better understanding some of the information a core bank was using in estimating credit risk for a certain loan portfolio. In another instance of collaboration, foreign regulators explained to us that they waived the requirement for a core bank's foreign subsidiary to adopt the advanced approaches until the core bank adopted the advanced approaches in the United States.

Over the years, the U.S. regulators have entered into various information-sharing agreements that facilitate cooperation with their foreign counterparts. These agreements are intended to expedite the meeting of requests posed by foreign regulators for supervisory information from U.S. regulators. As of July 2008, OCC and Federal Reserve officials explained that they had some form of an information-sharing agreement with 25 and 16 foreign jurisdictions, respectively. Likewise, FDIC and OTS officials both described good working relationships with their foreign counterparts as they related to U.S. banks with international operations that they supervise.
banks with international operations that they supervise. U.S. regulators have been and continue to be active members in the Basel Committee and its various subcommittees, including the Accord Implementation Group. In addition, U.S. regulators participate in colleges of supervisors and other international bodies, such as the Joint Forum. Participation in such entities further provides U.S. regulators information on how U.S. banks may be treated by foreign regulators, thus allowing for more dialogue among regulators to preemptively address any home-host issues. The Accord Implementation Group's purpose is to exchange views on approaches to implementation of Basel II, and thereby to promote consistency in the application of the New Basel Accord. Colleges of supervisors are meetings at which regulators from various countries discuss supervisory matters that relate to a specific bank that has global operations. Officials from the Federal Reserve stated that the colleges are often better suited to sharing information among regulators than to addressing a specific regulatory issue. Though regulators from various countries are sharing information, several core banks expressed concerns to us that their foreign regulators have been implementing Basel II differently. As discussed earlier, because non-core banks compete with core banks in some markets, non-core banks were concerned that core banks would be able to hold less capital than non-core banks were holding under Basel I for the same assets. Part of this concern came from the April 2005 results of the fourth quantitative impact study (QIS-4), which estimated that Basel II could result in material reductions in aggregate minimum required risk-based capital among potential core banks. By holding less capital for certain products, such as residential mortgages, core banks might charge less for these products than non-core banks. Two studies of the potential impact of Basel II on the market for residential mortgages have disagreed as to the magnitude of any competitive impact—one suggested a potentially significant shift in income from mortgages toward banks on the advanced approaches, while the other argued that any competitive impact was unlikely. In addition, U.S. regulators have recognized that some banks were concerned that core banks, by being required to hold less capital overall, would find it advantageous to acquire non-core banks. The proposed standardized approach rule should address some of the competitive concerns non-core banks expressed in the early 2000s, while several other factors, including the leverage requirement, also may reduce differences in capital between core and non-core banks. U.S. regulators have proposed the standardized approach in part to mitigate potential competitive differences between core and non-core banks. The U.S. version of the standardized approach features more risk-sensitive capital requirements than Basel I. In particular, it adds risk sensitivity for mortgages based on their loan-to-value (LTV) ratios and has lower capital requirements than Basel I for some lower-risk (lower LTV) mortgages (see fig. 4). The proposed standardized approach rule is also similar to the standardized approach under the New Basel Accord in that, like the accord, it features increased risk sensitivity for some externally rated exposures, including corporate loans. This is in contrast to the single risk weight for corporate credits and most mortgages in Basel I.
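To illustrate the added risk sensitivity for mortgages, the sketch below compares minimum required capital under Basel I, which assigns most prudently underwritten first-lien residential mortgages a single 50 percent risk weight, with an LTV-bucketed schedule of the kind the standardized approach proposes. The bucket boundaries and risk weights shown are placeholders for illustration only, not the schedule in the proposed rule.

```python
# Illustrative comparison of minimum required capital for a residential
# mortgage under Basel I versus an LTV-bucketed standardized approach.
# The LTV buckets and risk weights below are hypothetical placeholders,
# not the schedule in the proposed U.S. standardized approach rule.
BASEL_I_MORTGAGE_RISK_WEIGHT = 0.50   # single weight for most first-lien mortgages
TOTAL_CAPITAL_RATIO = 0.08            # 8 percent minimum total risk-based capital

# (upper LTV bound, risk weight) pairs, ordered from lowest to highest LTV
ILLUSTRATIVE_LTV_BUCKETS = [(0.60, 0.20), (0.80, 0.35), (0.90, 0.75), (1.00, 1.00)]

def basel_i_capital(balance: float) -> float:
    return balance * BASEL_I_MORTGAGE_RISK_WEIGHT * TOTAL_CAPITAL_RATIO

def standardized_capital(balance: float, ltv: float) -> float:
    # Use the risk weight of the first bucket whose upper LTV bound covers the loan
    for upper_bound, risk_weight in ILLUSTRATIVE_LTV_BUCKETS:
        if ltv <= upper_bound:
            return balance * risk_weight * TOTAL_CAPITAL_RATIO
    return balance * 1.50 * TOTAL_CAPITAL_RATIO  # highest-LTV loans

# A $200,000 mortgage at 60 percent LTV: $8,000 under Basel I versus
# $3,200 under the illustrative LTV-based schedule.
print(basel_i_capital(200_000), standardized_capital(200_000, 0.60))
```

Under such a schedule, lower-LTV mortgages would require less capital than under Basel I, while the highest-LTV loans could require more.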
Figure 5 demonstrates that the minimum required capital under the standardized approach for the credit risk associated with externally rated corporate loans will be much more similar to that required under the advanced approaches than to that required under Basel I. In addition, the standardized approach expands incentives for better risk management in that it allows banks to reduce capital in light of certain additional practices that could reduce risk, such as the use of collateral or third-party guarantees, and explicitly requires banks to set aside capital for operational risk. The added risk sensitivity of the standardized approach proposal should reduce some differences in risk-based capital requirements, as compared with the advanced approaches, for adopting banks. Once the standardized approach rule becomes final, non-core banks will have the option of choosing it or the advanced approaches, or remaining on Basel I. Presumably, non-core banks will take into consideration a wide range of issues when deciding what regulatory capital framework to adopt, including potential competitive effects. For example, a growing non-core regional bank that competes principally with core banks in wholesale and retail lending may find it beneficial to adopt the advanced approaches in order to model and receive lower risk-based capital requirements for certain lower-risk credits. Similarly, a smaller non-core bank that found itself increasingly competing with regional banks might opt for the additional risk sensitivity of the standardized approach. However, one trade association representing some of the smallest non-core banks with whom we spoke said the standardized approach may not fully address the competitive concerns of these banks because the capital relief associated with holding some lower-risk assets might be offset by additional capital required for operational risk. Officials at one large non-core bank told us that the bank was considering all of its options carefully and noted that there were a large number of factors to consider in deciding which risk-based capital rule to adopt. While the leverage requirement, particularly for bank holding companies, remains a competitive concern for core banks, the leverage requirements that all depository institutions must meet may limit competitive differences resulting from banks in the United States operating under multiple risk-based capital rules. Because these banking institutions must meet both risk-based and leverage requirements, the leverage requirement may be the effective, or binding, requirement for lower-risk assets held on the balance sheet, or more generally for banks with a relatively low-risk portfolio. The additional capital needed to meet the leverage requirement likely will exceed both the additional advanced and standardized approaches risk-based capital requirements for certain lower-risk assets held on balance sheets, such as low-LTV mortgages and highly rated corporate credits. Figure 6 compares the capital required by the advanced approaches with the capital required by the leverage requirement for certain externally rated corporate loans (see also the sketch following this paragraph). Because U.S. banks hold capital for a number of reasons and are generally expected to hold more than the minimum amount of capital required, banks under different risk-based capital rules may nevertheless hold similar capital for similar assets and activities—and therefore have similar capital costs—despite differences in minimum required capital.
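A minimal arithmetic sketch of why the leverage requirement can bind for low-risk assets follows. It assumes a 4 percent minimum tier 1 leverage ratio (the general U.S. minimum; the strongest institutions may operate at 3 percent) and an 8 percent total risk-based ratio; the 15 percent risk weight is a hypothetical advanced-approaches result for a highly rated corporate loan, not a figure from the rule.

```python
# Sketch: for a low-risk-weight asset, the leverage requirement (a flat
# percentage of total assets) demands more capital than the risk-based
# requirement, so it becomes the effective, or binding, constraint.
def binding_capital(asset_amount: float, risk_weight: float,
                    leverage_ratio: float = 0.04,
                    risk_based_ratio: float = 0.08) -> float:
    leverage_capital = asset_amount * leverage_ratio            # e.g., $4.00 per $100
    risk_based_capital = asset_amount * risk_weight * risk_based_ratio
    return max(leverage_capital, risk_based_capital)

# $100 of a hypothetical 15%-risk-weight corporate loan: the risk-based
# requirement is $1.20, but the leverage requirement of $4.00 binds.
print(binding_capital(100, 0.15))  # 4.0
```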
As already discussed, banks hold capital based on management's view of the amount of capital the bank needs internally and on market expectations, in addition to regulatory requirements. Furthermore, regulators generally expect banks to hold capital above these minimum requirements, commensurate with their risk exposure. For example, as part of Pillar 2, banks and regulators will assess risks not covered or not adequately quantified by Pillar 1 minimum requirements. Another factor that may reduce competitive effects resulting from differences in risk-based capital requirements is the ability of banks to originate loans and subsequently securitize and sell them to other entities. Differences in required capital for credit risk across multiple risk-based regimes would likely have a competitive impact only to the extent that banks retain the credits they originate on their balance sheets or retain a significant portion of the credit risk off their balance sheets. Banks may securitize residential mortgages and other types of loans into other marketable investments in order to raise further funds to originate additional loans. This is also known as an originate-to-distribute model, in which revenues are derived from the sale of assets rather than an ongoing stream of interest payments. However, the recent turmoil in the credit markets has reduced the volume of some securitizations and highlighted weaknesses in underwriting standards associated with the originate-to-distribute model. As a result, incentives for securitization could be influenced by changes in capital requirements and the market environment. The potential impact of the new regulatory capital rules on incentives for mergers and acquisitions remains uncertain because it is not clear how much capital requirements and other regulatory costs will change under the new capital rules. As noted earlier, differences in regulatory capital requirements could influence incentives for consolidation by making acquisitions more or less advantageous for banks operating under different capital rules, such as the multiple risk-based capital rules being introduced in the United States. However, several industry participants with whom we spoke said that mergers and acquisitions generally were driven by strategic concerns, such as gaining access to a new market, rather than capital concerns. In addition to the new capital rules, changes in credit markets may be affecting the benefits and costs of certain mergers. For example, one regional bank told us that the costs of implementing the advanced approaches are high, especially for smaller banks, and that the benefits of the advanced approaches were less certain in the current financial climate, in which credit quality has deteriorated. As a result, some industry participants said that regional banks may be forgoing mergers with each other to avoid being classified as core banks that would have to adopt the advanced approaches. Many factors have affected the pace of Basel II implementation in the United States, and while the gradual implementation is allowing regulators to consider changes in the rules and reassess banks' risk-management systems, regulators have not yet addressed some areas of uncertainty that could have competitive implications. The final rule provides regulators with considerable flexibility and leaves open questions about which banks will be exempted from the advanced approaches.
Without such clarification, core banks may expend greater resources to prepare for implementation than otherwise would be necessary. In addition, opportunities for regulatory arbitrage exist if regulators use different standards for exemptions. Regulators also have not fully developed plans for a required study of the impacts of Basel II implementation. Lack of development or specificity in criteria, scope, methodology, and timing will affect the quality and extent of information that regulators would use to help address competitive and other effects and make future changes in the rules. The financial market turmoil that began in the subprime housing market in 2007 accounts, in part, for banks' delaying implementation of the Basel II advanced approaches. In 2005, in part because the economy had been experiencing benign conditions, U.S. regulators estimated in QIS-4—a study of the potential impact of Basel II as then proposed—that minimum capital requirements for credit risk would fall once Basel II was fully implemented. And, according to the head of one of the regulatory agencies, many were impatient with a gradual approach to implementing Basel II at that time. Now that credit markets are experiencing turmoil, some bank officials and regulators told us that banks will implement Basel II more slowly. As a result of the current financial turmoil, regulators have been considering modifications in the advanced approaches to Basel II and are assessing banks' risk-management systems. The Basel Committee has been reviewing certain aspects of the capital framework, including the treatment of securitizations, greater specification of scenario testing in Pillar 2, and the treatment of credit risk charges for trading assets. The Basel Committee is also considering principles for sound risk management and supervision related to liquidity risk and issued a consultative document on this issue in June 2008. U.S. regulators have noted that the gradual implementation of Basel II in the United States is allowing them to better understand how the rules might need to be adapted or implemented in the changed financial climate. Regulators have also been speaking to bankers in a number of forums on the need to improve risk-management practices in relation to Basel II. Gradual implementation is also built into the advanced approaches. (See app. III for an illustration of the timeline for the development and implementation of the advanced approaches.) As noted earlier, the advanced approaches rule took effect on April 1, 2008. Core banks generally must adopt an implementation plan approved by the bank's board of directors by October 1, 2008, but do not actually have to begin the four intermediate phases that lead to full implementation of Basel II until April 1, 2010. If banks begin then and each of the four intermediate phases takes a year, they would be ready to fully adopt Basel II by April 1, 2014. At the time the rule took effect, banks could start their parallel run, the first of the four intermediate phases, at the beginning of any quarter ranging from the second quarter of 2008 to the second quarter of 2010. The 2007 decision to offer non-core banks an option to adopt the standardized approach also has affected the pace of implementation in the United States. As a result of comments received on NPRs related to Basel II in 2006, U.S. regulators decided to offer non-core U.S. banking institutions the option of a standardized approach.
Regulators issued the NPR in July 2008 but are uncertain as to when they will issue a final rule. In addition, the new NPR again asks whether core banks should be permitted to adopt the standardized approach rather than the advanced approaches, creating uncertainties that will be discussed later. A primary goal of federal bank regulators is to promote the safety and soundness of the banking institutions they oversee. To fulfill this obligation, bank regulators must have the authority and flexibility to take actions to achieve this objective. The Federal Reserve and OCC have taken a number of steps to help ensure that Basel II is implemented consistently across the banking organizations they supervise, and regulators have issued some joint statements and guidance to address some of the remaining uncertainty for banks. Nonetheless, the flexibility afforded by the rule for the advanced approaches could lead to inconsistent application of the rules, which could, in turn, produce competitive differences among the banks or provide opportunities for regulatory arbitrage. A certain amount of flexibility for primary bank supervisors and related uncertainty for banks is necessary for maintaining the safety and soundness of the banking system. Under the final rule for the advanced approaches, regulators can respond to new or unforeseen situations that pose risks to safety and soundness without having to first change the rule. The rule reserves the authority of primary federal bank regulators to require that banks hold an amount of capital greater than the minimums dictated by the rule. This authority is being maintained both in the application of Pillar 1, where regulators can require that a bank calculate required capital in ways that recognize the individual situation of that institution, and in Pillar 2, which by its very nature promotes supervision that uniquely addresses the situations of specific banks, while following general principles. For example, under the advanced approaches, regulators can generally allow U.S.-based banks with foreign subsidiaries to use a different retail definition of default for subsidiaries in foreign countries unless the primary supervisor determines that the banking organization is using the differences in the definitions of default to engage in regulatory arbitrage. Given the provisions for primary federal regulators to exercise their judgment during the implementation of Basel II, the Federal Reserve and OCC, which oversee all but one of the banks that meet the asset size and foreign exposure criteria for core banks, have taken a number of steps to help ensure that Basel II is implemented consistently within and across the banking organizations they supervise. As we have noted in a previous report, the Federal Reserve has been aware that its decentralized structure could lead to inconsistent supervisory treatment of the large banks it oversees and has developed some procedures to limit these differences. These procedures include having a management group, which consists of officials from the Federal Reserve Board of Governors and Federal Reserve District Banks, provide additional review of supervisory plans and findings for large, complex banks. The Federal Reserve has been relying on this process to help ensure consistency in the application of Basel II. OCC also has been taking actions to help ensure that examiners will implement Basel II in an equitable manner across the banks it supervises.
Previously, the OCC examination process permitted lead examiners to provide information to banks without obtaining specific input from headquarters staff; however, OCC has been requiring that information about Basel II be raised to higher levels and that some of the same personnel be involved in Basel-related examinations across banks. These two agencies also have taken a number of actions to ensure consistent application of Basel II across the agencies. For example, Federal Reserve and OCC examiners have conducted joint examinations to look at how banks are implementing some processes related to the advanced approaches. The other two primary bank regulators—OTS and FDIC—which oversee fewer core banks, have also participated in activities related to ensuring consistency in the implementation of Basel II. OTS is the primary regulator for the only thrift that meets the definition of a core bank on its own and is thus interested in ensuring that its processes for that bank are consistent with those of the other regulators overseeing similar institutions. OTS and FDIC oversee a number of depository institutions that have been identified as core banks because they are subsidiaries of U.S.-based banks that meet the asset size and foreign exposure criteria for core banks, and FDIC also oversees subsidiaries of foreign-based banks that may adopt the advanced approaches. Officials at both agencies said that they are active in Basel Committee activities and that they played a role in the Federal Reserve and OCC's joint examination of credit risk. In addition, according to some of the regulators, all four primary regulators have participated in joint examinations of operational risk across some of the core banks. Regulators have taken actions to reduce uncertainty by jointly providing some clarifying information about certain aspects of the capital rules. For example, during the development of the advanced approaches rule, the regulators issued proposed guidance and interagency statements that helped to clarify certain aspects of the rules and, beginning in July 2008, updated some of these to reflect the final rule. They updated the interagency statement on the qualification process that had first been issued in 2005, following the Basel Committee's issuance of the New Basel Accord. They also issued updated supervisory guidance for Pillar 2 that had been proposed initially in February 2007 to provide banks with more detail on the NPR for the advanced approaches. Regulators and examiners at one agency said that, in their view, it is not necessary to update the guidance on Pillar 1 that had been issued under the NPR because of the time and care that went into crafting the extensive and detailed preamble that accompanied the advanced approaches rule. Nonetheless, officials at many of the core banks with whom we spoke said that the lack of additional or updated guidance, including the standards by which examiners will judge the banks' compliance, had been a problem for them. Regulators may provide additional joint information to banks and examiners based on the questions they have received from banks since the advanced approaches rule was issued. Regulators told us they are considering providing this information in a question-and-answer format on their Web sites. In addition, each of the regulators will be providing separate guidance for its examiners to determine whether the banks they oversee are complying with the rule.
Regulators said they do not intend to issue any joint guidance for the proposed standardized approach rule while it is out for comment or when a final rule is issued, beyond information provided in a preamble. However, to ensure that non-core banks are not disadvantaged by core banks moving onto the advanced approaches, regulators have said they are planning to issue the standardized approach rule before core banks move into the first transitional period for the advanced approaches. Timely issuance of the final rule and any clarifying information will help to ensure that non-core banks have adequate information on which to base decisions about which capital regime—advanced approaches, standardized approach, or Basel I—will be best for them. While some flexibility is necessary and regulators have taken some steps to ensure greater consistency in the implementation of the rules, there are actions the regulators could take to further reduce banks' uncertainty about Basel II without necessarily jeopardizing the safety and soundness of the banking system. One area where uncertainty could be reduced is in clarifying which core banking institutions would be exempt from the application of the advanced approaches rule. The rule allows for exempting any core bank—a bank that meets the size or foreign exposure criteria for core banks or a depository institution that is a core bank because it is a subsidiary of a core bank that meets those criteria. Although the rule outlines a mechanism for certain banks to be exempted and provides some broad factors regulators will use in making these determinations (asset size, level of complexity, risk profile, or scope of operations), the regulators have not been specific in the current rule about whether they will grant these exemptions and under what circumstances. The regulators have said that they will not grant many exemptions and did not specify these exemptions because they believe it is important for them to retain supervisory flexibility as they move forward with implementation of the final rule. As such, they said each decision is to be made on a case-by-case basis. Throughout the development of the rules, regulators had introduced uncertainty about the extent to which foreign-based banks with subsidiaries that are U.S. bank holding companies will be subject to the advanced rules in the United States, and the current rule continues to provide the Federal Reserve, the regulator of bank holding companies, considerable flexibility in making these decisions. The Federal Reserve has not answered the question of which specific bank holding companies that are subsidiaries of foreign-based banks and qualify as U.S. core banks—they have assets of $250 billion or greater—will be exempted from using the advanced approaches in the United States. When the advanced approaches NPR was issued in 2006, some foreign-based institutions with large bank holding companies in the United States but relatively small depository institutions were surprised to find that they would be treated as core banks in the United States. The final rule acknowledged the concerns of those institutions and noted that the Federal Reserve may exempt them, but it does not make it clear that they will be exempt. Because the Federal Reserve, the regulator of bank holding companies, has not issued more specific criteria or guidance for reviewing requests for exemptions, these banks (at least one bank has requested an exemption) may have to devote resources to complying with the U.S.
final rule until they receive an answer on whether they will be exempted. On the other hand, while only one banking organization is affected, the rule specifically exempts bank holding companies with significant insurance underwriting operations that otherwise would meet the requirements to be a core bank. Similarly, the rule states that regulators will consider the same factors—asset size, level of complexity, risk profile, and scope of operations—in making a determination as to whether depository institutions that are subsidiaries of U.S. core banks can be exempted. As a result, institutions have little guidance concerning the likelihood that some of their depository institutions will be exempt and will need to prepare for a full implementation of the advanced approaches in each entity until they receive a response from their regulator on whether they will be exempted. Moreover, because the factors are so broad, if different regulators use different specific criteria to exempt entities, they may set up the potential for regulatory arbitrage. For example, a U.S. banking organization could hold higher-risk assets in subsidiary banks that are exempt and remain on Basel I and could hold lower-risk assets in subsidiary banks that are not exempt from the advanced approaches. And banks that do not currently have a structure that would allow them to reduce capital in this way could change their structure accordingly by acquiring or changing bank charters. The overall result could be lower capital held in the bank or resources being devoted to capital reductions that do not properly align capital with risk. However, officials from the Federal Reserve noted that regardless of the structure of the bank, at the holding company level, all material bank assets would be consolidated and subject to the advanced approaches rule. This continuing uncertainty could make it difficult for banking organizations to pursue the most cost-effective route to complying with Basel II and could create more risk for the banks at a time when risks are already high because of the turmoil in financial markets. For example, some industry participants told us that those parts of Basel II that do not improve risk management divert resources that banks otherwise would use to better manage risk. In addition, resources devoted to circumventing certain aspects of the rule through regulatory arbitrage will divert the attention of bank officials from improving banks' risk-management systems. Finally, the uncertainty over which banking institutions ultimately will have to adopt the advanced approaches continues because the advanced approaches rule says all core banks will be required to adopt detailed implementation plans for the complex advanced approaches by October 1, 2008, and the proposed standardized approach rule, which will not be finalized by that time, contains a question about whether and to what extent core banks should be allowed to use the simpler proposed standardized approach. The advanced approaches rule generally requires core banks to comply with the advanced approaches and adopt an implementation plan no later than October 1, 2008. Under this rule, the Federal Reserve can exempt bank holding companies from meeting the requirements of the final rule for the advanced approaches, and primary federal regulators can exempt depository institutions that meet the definition of a core bank from the advanced approaches requirements.
Given the authority of the primary federal regulator, once the standardized approach rule is finalized, those regulators would be able to require that exempt banking organizations adopt that approach. However, the proposed standardized approach rule, which will not be finalized by the time the core banks must adopt their implementation plans, asks whether core banks should be allowed to use the standardized approach instead of the advanced approaches. In the press release accompanying the proposed standardized approach rule, the FDIC Chairman stated, “Given the turbulence in the credit markets, I take some comfort with the fixed risk weights established under the standardized approach as they provide supervisors with some control over unconstrained reductions in risk-based capital.” However, the interagency statement on U.S. implementation of the advanced approaches, issued in July 2008, stressed the existing timelines for the advanced approaches. The continued discussion on whether core banks should be exempt from the advanced approaches and permitted to adopt the standardized approach indicates that the primary federal regulators continue to have questions about whether the advanced approaches are the best risk-based capital requirements for core banks. Thus, it is difficult to tell whether the regulators have found a solution to the difficulties that resulted from the differing perspectives they brought to negotiations during the development of the advanced approaches. We recommended in our February 2007 report on Basel II that regulators take actions to jointly specify the criteria they will use to judge the attainment of their goals for Basel II implementation and for determining its effectiveness for regulatory capital-setting purposes. We noted that without clarification on the criteria to evaluate or make changes in the Basel II rules, the implementation will continue to generate questions about the adequacy of the framework. The regulators have not fully developed plans for an interagency study that is to assess implementation and provide the information to form the basis for allowing banks to fully transition to Basel II. Partly in response to recommendations we made in 2007, the final rule says that the regulators will issue annual reports during the transitional period and conduct a study of the advanced approaches after the second transitional period. According to the rule, the annual reports are to provide timely and relevant information on the implementation of the advanced approaches. The interagency study is to be conducted to determine if there are material deficiencies in the advanced approaches and whether banks will be permitted to fully transition to Basel II. In its regulatory impact analysis, OCC said that the regulators will consider any egregious competitive effects associated with implementation of Basel II, whether domestic or international in context, to be a material deficiency. Among the items the rule specifies that the study will cover, several are important first steps in studying the competitive effects of the rule. These include the level of minimum required regulatory capital under the U.S.
advanced approaches compared with the capital required by other international and domestic regulatory capital standards; comparisons among peer core banks of minimum regulatory capital requirements; the processes banks use to develop and assess risk parameters and advanced systems, and supervisory assessments of their accuracy and reliability; and changes in portfolio composition or business mix. Some of these steps are similar to the calculations the regulators performed as part of QIS-4. The advantage of the future study over QIS-4 is that it will be based on actual data provided by banks whose risk-management and data systems have been reviewed by regulators as part of the approval process for banks to enter the first two transitional periods. In addition, one regulator noted that the study will also benefit from the stresses of the recent market turmoil. This study should allow the regulators to determine the extent to which total regulatory capital changes in the short run, the specific behavior in which banks engage to comply with some aspects of the rule, and how the rule affects the capital of different banks. However, plans for the study do not address a number of factors, including the establishment of shared overall goals and criteria for Basel II that would help delineate the study's scope, methodology, and timing. For example, while OCC in its impact analysis said that the evaluation of competitive impacts will be an important part of the study, the rule does not specify how this will be measured, and the scope and methodology of the study are not clearly designed to achieve this objective. Unless regulators design the study to evaluate Basel II in light of clearly specified overall objectives or criteria for Basel II, it will be difficult to jointly determine the extent to which the rules need to be modified or whether implementation of Basel II should proceed. If some regulators object to the full implementation of Basel II while others do not, the rule specifies that a regulator can permit the banking organizations for which it is the primary federal regulator to move forward with the advanced approaches if it first provides a public report explaining its reasoning. However, such an outcome would not provide confidence in the current regulatory system and could allow for regulatory arbitrage. Further, the scope of the study has not been well defined. While the study contemplates calculations of capital using the standardized approach, Basel I, and other international rules, as well as the actual data on the banks following the advanced approaches, regulators have not said that they plan to collect comparable data on financial entities not adopting these approaches—specifically, those banking institutions that will adopt the standardized approach or remain on Basel I. In addition, the regulators have not explicitly included the CSEs in the study. The effectiveness of the study will be limited if the CSEs are not included because information on a major segment of competitors of core banks that has had significant experience with some aspects of the advanced approaches will have been excluded. The agreement signed on July 7, 2008, between the Federal Reserve and SEC regarding coordination and information sharing in areas of common regulatory interest should facilitate the inclusion of the CSEs in any study of the advanced approaches.
Finally, the regulators have conducted little research in the past on international differences that could have competitive effects, and the study's design does not explicitly include such research. However, since U.S. regulators participate in the Capital Monitoring Group, Accord Implementation Group, and other similar groups, they will have some perspective on Basel II implementation in the other countries in those groups, including some European Union countries and Canada, which they will be able to use for this purpose. OCC officials explained that the Capital Monitoring Group will collect and analyze information on the implementation of Basel II in other countries and suggested that this information will inform the U.S. study. In addition, some U.S. regulators noted that the study outlined in the rule will not preclude them from looking at a broad range of data. The methodology the study will use to evaluate competitive impacts is not yet fully developed, although from a methodological perspective Basel II affords an opportunity to consider the impacts of regulatory capital on bank behaviors and among groups of banks adopting different requirements at different times. While the measurements and comparisons envisioned for the study are a necessary first step for evaluating competitive impacts among the core banks and between the core banks and other groups, they do not take full advantage of the opportunities to better understand the impact of regulatory capital on a range of bank behavior. Because banks in the United States and around the world are adopting a range of capital requirements at different times, Basel II affords a unique opportunity to consider whether event studies could contribute to a better understanding of the impact of regulatory capital on a variety of bank behaviors. While regulators at OCC noted that, with banks on different capital regimes, academics and other researchers, including those at the regulatory agencies, will have data available to study the impact of regulatory capital on bank behavior, they said that they had not thoroughly considered the use of event studies as part of the study planned by the regulators. Because regulators have not clearly specified how they will evaluate the competitive impacts of Basel II, there is an increased likelihood that the kinds of data needed to complete an effective study will not be available. In addition, the advanced approaches rule does not specify a methodology for the study to analyze the extent to which the new rules provide opportunities for regulatory arbitrage that could limit the effectiveness of the rules in promoting improved risk management throughout the banking system. Several industry participants noted that having multiple capital requirements with different levels of risk sensitivity provides incentives for core banks to hold less risky assets and leave more risky assets in banks using the standardized approach or Basel I. Higher risk-based capital requirements for high-risk assets at core banks may increase their cost of holding these assets. Greater costs would reduce the supply of credit for these types of loans, and thus returns would increase. As a result, banks with less risk-sensitive capital requirements under Basel I or the standardized approach might find some higher-risk credits more attractive at these higher rates of return. (As illustrated earlier in fig.
2, there may be different amounts of capital required for the same asset across the different risk-based rules.) Officials at one regulatory agency said that all of the regulators were aware of this potential outcome and planned to look at changes in the portfolios of core banks in the study. Further, for non-core banks, regulators at another agency said they would become aware of non-core banks increasing their holdings of high-risk assets through their normal oversight duties. However, the advanced approaches rule does not specify how the study would more fully explore this potentially important outcome of the new rules. If this arbitrage took place, the rules could require less capital overall in the banking system and would leave the banks with the least well-developed risk-management systems holding the riskiest assets, thus exposing the U.S. banking system to greater systemic risk. Finally, the timing of the study is unclear. The rule specifies that the study will be published after the second transitional period, but core banks could begin the four intermediate phases required for full implementation in 2008, 2009, or 2010, and different banks (as well as different types of banks) could enter the second transitional period in different years. The phased implementation produces uncertainty about timing and could throw into question how many banks will be included in the study and whether the results of the study will provide relevant information for all of the banks. For example, if the banks in the second transitional period in 2011 are primarily retail banks, the results are not likely to be applicable for the custodial banks, or vice versa. As a result of this and other factors discussed, the use of the study for taking actions that would improve risk management or reduce competitive concerns may be limited. Some regulators told us that they have not yet focused on plans for the study, in part, because it is early in the Basel II implementation process and they and the banks they supervise have been dealing with the financial turmoil. In addition, some regulators said that the language and factors laid out in the final rule should be viewed as a starting point, and officials at one regulatory agency said that the study will benefit from the data that will be available from the financial turmoil in the world's credit markets. A global effort is underway to implement the New Basel Accord, which aims to improve the risk-management practices of banks, in part, by aligning the capital banks hold more closely with the risks they face. Capital's role becomes more important in periods of economic uncertainty because banks rely on capital to weather unexpected losses. Although the impact of regulatory capital on a bank's ability to compete is not always obvious because banks often hold more than their minimum required capital, regulatory capital is one of many factors that affect competition. And the adoption of Basel II in the United States has raised concerns about competitive effects it could have on banks of varying sizes and in various locations. In addition, regulators have made clear that in light of the current market turmoil further revisions will be made in Basel II. Uncertainty about how to implement Basel II, to whom the rules will apply, and the effects the rules will have may lead banks to devote resources to information gathering and implementation that could otherwise be dedicated to improving risk management or other purposes.
In our 2007 report, we noted that the rulemaking process for Basel II could benefit from increased transparency to respond to broader questions and concerns about transitioning to Basel II in the United States. The regulators referred to the recommendation in the advanced approaches rule and, with that rule and the proposed standardized approach rule, they have provided greater clarity about some aspects of Basel II. We recognize that the timetable for Basel II implementation in the United States has slowed since we issued our earlier report and that both the regulators and the banks have been dealing with the market turmoil that began in mid-2007. This gradual implementation is allowing bank regulators to reassess banks' risk-management systems and consider changes in the rules before any banks begin their Basel II implementation. As part of this preparation period, regulators have taken and are planning some actions to reduce uncertainty, but could take further actions to address remaining uncertainties about the implementation of the rules and facilitate banks' planning and preparation for their implementation of a new capital regime. Regulators have taken actions to reduce some of the uncertainty surrounding implementation of Basel II by providing information to aid examiners and banks in interpreting the rules. Regulators have updated some publicly available information on the process they will use to qualify banks for the advanced approaches and examine them under Pillar 2. Regulators have also engaged in discussions among themselves concerning posting additional information in a question-and-answer format on their Web sites. The timely issuance of additional information on the advanced approaches and a final standardized approach rule, which is in process, will enable banks to best prepare to meet the new risk-based capital requirements and will help to ensure regulatory consistency across the banks. As a result, we encourage the regulators to continue providing joint information in a timely manner on both the advanced and standardized approaches. We recognize that regulators have taken steps to reduce some uncertainties related to Basel II; however, the regulators could take additional steps to address uncertainties that are not related to their need for flexibility to respond to innovation in the industry and to unintended consequences that the rules may have. For example, in the final rule, the regulators did not specify which banks technically met the definition of core banks. Although the rule specifically says that certain banks may be exempted by their primary regulator from the advanced approaches requirements, it does not provide well-defined criteria for evaluating requests for exemptions. Because this clarity has not been provided and specific criteria have not been laid out, regulators may not provide exemptions in a consistent manner. The issuance of more specific guidance on which banks will be exempt from applying the advanced approaches would provide clarity and enable banks to plan accordingly. Also, the question in the NPR for the standardized approach about whether core banks should be able to use the proposed standardized approach indicates that the primary federal regulators continue to have questions about whether the advanced approaches are the best risk-based capital requirements for core banks.
Regulatory differences on these issues can lead to increased costs for the banks and inefficiencies for their regulators, and may weaken the overall effectiveness of the regulatory system by creating opportunities for banks to engage in regulatory arbitrage. In our 2007 report, we recommended that regulators issue public reports on the progress and results of implementation efforts and that this reporting include an articulation of the criteria by which they would assess the success of Basel II. While the regulators have proposed a study of the core banks after the second transitional period of the implementation of the advanced approaches, they have not yet developed the criteria on which to base the study's design and objectives. These are needed for a determination of whether Basel II is effective for regulatory capital-setting purposes and whether to ultimately allow banks to move past the third transitional period to full Basel II implementation. As delineated in the advanced approaches rule, the study will measure the changes in capital and portfolios held by the core banks and will look at the differences in required capital for these banks if they were under the standardized approach rule or Basel I—necessary steps for evaluating the competitive impact of Basel II—but it does not explicitly describe the components needed to determine if there are material deficiencies in the rule or for regulators to reach agreement on whether banks should be permitted to fully implement the advanced approaches. However, the gradual implementation of the advanced approaches in the United States affords regulators time to jointly establish criteria for evaluating Basel II and to fully develop a study that flows from those criteria—including (1) a broad enough scope—inclusion of non-core banks, CSEs, and foreign-based banks—to capture competitive effects; (2) consideration of a number of methodologies; and (3) the resolution of the timing issue. Such actions would help the regulators make better-informed decisions on an interagency basis about whether changes to the rules were necessary and whether to permit banks to fully implement Basel II. Without these criteria, it will be difficult for regulators to make these judgments and provide consistent guidance for banks. We are making two recommendations to the heads of the FDIC, Federal Reserve, OCC, and OTS: To further limit any potential negative effects, where possible, regulators should move to minimize the uncertainty surrounding certain aspects of Basel II. Specifically, regulators should clarify how they will use certain regulatory flexibility under the advanced approaches rule, particularly with regard to how they will exercise exemptions for core banks from the advanced approaches requirement and the extent to which core banks will be allowed to adopt the standardized approach. To improve the understanding of potential competitive effects of the new capital framework, the regulators should take steps jointly to plan for the study to determine if major changes need to be made to the advanced approaches or whether banks will be able to fully implement the current rule. In their planning, they should consider such issues as the objectives, scope, methodology, and timing needs for the future evaluation of Basel II. The plan should include how the regulators will evaluate competitive differences between core and non-core banks in the United States, between core banks and CSEs, and between U.S.-based banks and banks based in other countries.
We provided the heads of the Federal Reserve, FDIC, OCC, OTS, SEC, and Department of the Treasury with a draft of this report for their review and comment. We received written comments from the banking regulators in a joint letter. These comments are summarized below and reprinted in appendix IV. The banking regulators also provided technical comments that we incorporated in the report where appropriate. We did not receive comments from SEC or the Department of the Treasury. In their letter, the banking regulators strongly endorsed our opening statement that ensuring that banks maintain adequate capital is essential to the safety and soundness of the banking system and said that it is this overarching objective that will guide their efforts and has led them to include additional prudential safeguards in their implementation of the Basel II rules. In a somewhat related matter, the regulators said that the report emphasizes the cost to banks of holding capital but does not discuss how a bank's strong capital base confers competitive strength and creates strategic opportunities. While we describe some of the costs to banks of holding additional capital because this is an important channel through which the new capital rules could affect the competitiveness of U.S. banking organizations, we also note that more capital reassures creditors and reduces the cost of borrowing. In addition, as noted in the draft, banks hold capital for this and other reasons, including the ability to take advantage of strategic opportunities such as acquiring other banking institutions. As we detailed in the draft, the banking regulators highlighted the actions they have taken to address many of the concerns that bankers and others have raised about the potential competitive equity effects of the implementation of Basel II and said that they are in general agreement with our recommendations. Specifically, they said that they will work together to resolve, at the earliest possible time, the question posed for comment in the proposed standardized approach rule regarding whether and to what extent core banks should be able to use the standardized approach. With regard to clarifying how they will decide whether to grant requests from core banks to be exempt from the requirement to adopt the advanced approaches, the regulators said they will assess each exemption request in light of the specific facts and circumstances applicable to the institution seeking the exemption and that they have already commenced discussions to ensure that a clear and consistent interpretation of these provisions is conveyed to U.S. banks. Regarding the need to jointly plan the required study, the regulators commented that they will work together to develop “plans for the required study of the impact of the advanced approaches of Basel II.” Specifically, they said that they will begin to develop more formal plans for the study once they had “a firmer picture of banks' implementation plans” but noted the difficulties of drawing definitive conclusions about the effects of changes in regulatory capital rules. They also said that they would consider including in their analysis the potential competitive effects with CSEs and foreign banks as we recommended.
While we are encouraged by the regulators' recognition of the need for more formal plans and their consideration of expanding the scope of the study to include CSEs and foreign banks, we noted a number of additional factors that also should be considered, such as developing criteria that will help them determine whether there are material deficiencies that can be attributed to the new rules and what changes, if any, could address those deficiencies. Finally, because Basel II affords an opportunity to consider the impacts of regulatory capital on bank behaviors and among groups of banks adopting different requirements at different times, we noted in the draft that it is important that regulators consider a number of methodologies for evaluating the new capital rules and potential competitive effects to determine which are the most appropriate. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its date of issue. At that time we will send copies of this report to interested congressional committees, the Chairman of the Board of Governors of the Federal Reserve System, the Chairman of the Federal Deposit Insurance Corporation, the Comptroller of the Currency, the Director of the Office of Thrift Supervision, the Chairman of the Securities and Exchange Commission, and the Secretary of the Treasury. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our objectives in this report were to discuss (1) the nature of the competitive environment in which U.S. banking organizations compete, (2) the extent to which different capital requirements may have competitive impacts on U.S. banking organizations internationally and domestically, and (3) actions regulators could take to address competitive effects and other potential negative effects of the new capital rules during implementation. For all our objectives, we reviewed a variety of documents, including regulators' statements; the international Basel II framework (entitled “International Convergence of Capital Measurement and Capital Standards: A Revised Framework”) and other documents from the Basel Committee, such as the 1988 Basel Capital Accord (Basel I); the Basel II, Basel 1A, and Standardized Approach Notices of Proposed Rulemaking (NPR) and the final rule on the advanced approaches; supervisory guidance; academic articles; and our previous reports on banking regulation. We interviewed senior supervisory officials at the Board of Governors of the Federal Reserve System and the Federal Reserve Banks of Boston, New York, and Richmond (Federal Reserve), the Office of Management and Budget, the Office of the Comptroller of the Currency (OCC), the Federal Deposit Insurance Corporation (FDIC), the Office of Thrift Supervision (OTS), the Securities and Exchange Commission, and the Department of the Treasury. We also interviewed officials from the Accord Implementation Group, several foreign banking regulatory agencies, domestic, international, and foreign trade associations, credit rating agencies, and several academics and consultants with banking expertise.
In addition, we interviewed officials from all of the core banks and other banks, both foreign and domestic, with operations in the United States. Finally, we attended several conferences held by regulators and trade associations that included discussions related to Basel II. To describe the competitive environment in which U.S. banks operate, we collected data from several sources to illustrate which types and sizes of banks are active in which kinds of products. We used data from the Federal Reserve's Structure and Share Data for U.S. Banking Offices of Foreign Entities and Consolidated Financial Statements for Bank Holding Companies (i.e., FR Y-9C). These data include the amount of assets in particular products that bank holding companies hold on and off of their balance sheets. For banks and thrifts that do not report assets in particular products at the consolidated level to their regulator, we used data on banks and thrifts in the Federal Financial Institutions Examination Council's (FFIEC) Consolidated Reports of Condition and Income (FFIEC 031 or Call Report) and OTS's Thrift Financial Reports, respectively. We also used data from FFIEC's Country Exposure Lending Survey. To compare activities across banks of different sizes, we used data at the consolidated level because banks generally compete on an enterprisewide basis. For bank holding companies, we used data provided by the Federal Reserve. Almost all bank holding companies that have assets greater than $500 million report assets in particular product categories on a consolidated basis to the Federal Reserve using the Y-9C form; however, a large proportion of those with assets under $500 million—about 80 percent of the bank holding companies—and a few larger bank holding companies do not report consolidated assets on a product basis to the Federal Reserve. We included these bank holding companies, which have few assets outside their chartered commercial banks, in our analysis by having staff at the Federal Reserve group the commercial banks by bank holding company and sum the assets reported in the Call Reports accordingly. Thrift holding companies do not report data on assets by product category to OTS on a consolidated basis. Because thrift holding companies are often engaged in a wide variety of activities outside of banking, we could not rely on the thrift financial report data on individual thrifts to approximate the holding company for some thrifts as we did in the case of some bank holding companies. However, we were able to have OTS staff provide thrift financial report data that we used to approximate the thrift holding companies for those thrift companies primarily in banking. We did this by having OTS staff group the thrifts by holding company, both for those where thrifts make up 95 percent of the assets of the holding company and for those where they make up 75 percent of the assets of the holding company. The allocation of assets across product lines was substantially the same for these two categories, which allowed us to conclude that the data gave us a good approximation of differences between thrift holding companies that are primarily in the business of banking and bank holding companies. We concluded that they do differ in that thrifts engaged primarily in the business of banking hold a much larger percentage of their assets in residential mortgages than do bank holding companies across all size categories.
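The grouping and summation described above can be illustrated with a small sketch. The column names and figures below are hypothetical and are not drawn from the actual Call Report data.

```python
# Sketch of the aggregation approach described above: group bank-level
# Call Report records by holding company and sum assets within each
# product category to approximate consolidated holdings. Column names
# and figures are hypothetical.
import pandas as pd

call_reports = pd.DataFrame({
    "holding_company_id": ["A", "A", "B", "B"],
    "product": ["residential_mortgage", "commercial_loan",
                "residential_mortgage", "commercial_loan"],
    "assets_millions": [120.0, 80.0, 200.0, 50.0],
})

approx_consolidated = (
    call_reports
    .groupby(["holding_company_id", "product"])["assets_millions"]
    .sum()
)
print(approx_consolidated)
```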
To assess the reliability of these data, we talked with knowledgeable agency officials about the data and tested the data to identify obvious problems with completeness or accuracy. We determined the data were sufficiently reliable for the purposes of this report. To determine the extent to which different capital requirements may impact how various U.S. banking organizations compete, we reviewed the available academic literature on the role capital plays in bank competition. We also estimated minimum required capital for some assets under the advanced and standardized approaches for credit risk, Basel I, and leverage requirements, based on available information and data from the U.S. federal banking regulators' fourth quantitative impact study (QIS-4) and Moody's Investors Service. There are some limitations associated with the data from QIS-4. At the time, the regulators emphasized that QIS-4 was conducted on a "best efforts" basis without the benefit of either a definitive set of proposals or meaningful supervisory review of the institutions' systems. We assessed the reliability of the data we used and found that, despite limitations, they were sufficiently reliable for our purposes. We conducted this performance audit from May 2007 to September 2008 in Amsterdam, The Netherlands; Brussels, Belgium; Boston, Massachusetts; Chicago, Illinois; Charlotte, North Carolina; London, United Kingdom; New York, New York; Toronto, Canada; and Washington, D.C., in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Pillar 1 of the advanced approaches rule features explicit minimum capital requirements, designed to ensure bank solvency by providing a prudent level of capital against unexpected losses for credit, operational, and market risk. The advanced approaches, which are the only measurement approaches available to and required for core banks in the United States, will make capital requirements depend in part on a bank's own assessment, based on historical data, of the risks to which it is exposed. Under the advanced internal ratings-based (A-IRB) approach, banks must establish risk rating and segmentation systems to distinguish risk levels of their wholesale (most exposures to companies and governments) and retail (most exposures to individuals and small businesses) exposures, respectively. Banks use the results of these rating systems to estimate several risk parameters that are inputs to supervisory formulas. Figure 7 illustrates how credit risk will be calculated under the Basel II A-IRB approach. Banks must first classify their assets into exposure categories and subcategories defined by regulators: for wholesale exposures, those subcategories are high-volatility commercial real estate and other wholesale; for retail exposures, those subcategories are residential mortgages, qualifying revolving exposures (e.g., credit cards), and other retail.
Banks then estimate the following risk parameters, or inputs: the probability a credit exposure will default (probability of default or PD), the expected size of the exposure at the time of default (exposure at default or EAD), economic losses in the event of default (loss given default or LGD) in "downturn" (recession) conditions, and, for wholesale exposures, the maturity of the exposure (M). In order to estimate these inputs, banks must have systems for classifying and rating their exposures as well as a data management and maintenance system. The conceptual foundation of this process is that a statistical approach, based on historical data, will provide a more appropriate measure of risk and capital than a simple categorization of asset types, which does not differentiate precisely between risks. Regulators provide a formula for each exposure category that determines the required capital on the basis of these inputs. If all the assumptions in the supervisory formula were correct, the resulting capital requirement would exceed a bank's credit losses in a given year with 99.9 percent probability. That is, credit losses at the bank would exceed the capital requirement with a 1 in 1,000 chance in a given year, which could result in insolvency if the bank held only capital equal to the minimum requirement. Banks may incorporate some credit risk mitigation, including guarantees, collateral, or derivatives, into their estimates of PD or LGD to reflect their efforts to hedge against unexpected losses. To determine minimum required capital for operational risk, banks will use their own quantitative models of operational risk that incorporate elements required in the advanced approaches rule. To qualify to use the advanced measurement approaches (AMA) for operational risk, a bank must have operational risk management processes, data and assessment systems, and quantification systems. The elements that banks must incorporate into their operational risk data and assessment system are internal operational loss event data, external operational loss event data, results of scenario analysis, and assessments of the bank's business environment and internal controls. Banks meeting the AMA qualifying criteria would use their internal operational risk quantification system to calculate the risk-based capital requirement for operational risk, subject to a solvency standard specified by regulators, to produce a capital buffer for operational risk designed to be exceeded only once in a thousand years. Regulators have allowed certain banks to use their internal models to determine required capital for market risk since 1996 (known as the market risk amendment or MRA). Under the MRA, a bank's internal models are used to estimate the 99th percentile of the bank's market risk loss distribution over a 10-business-day horizon, in other words, a solvency standard designed to exceed trading losses in 99 out of 100 10-business-day intervals. The bank's market risk capital requirement is based on this estimate, generally multiplied by a factor of three. The agencies implemented this multiplication factor to provide a prudential buffer for market volatility and modeling error. In a separate NPR issued concurrently with the proposal for credit and operational risk, the OCC, Federal Reserve, and FDIC are proposing to incorporate their existing market risk rules with modifications, including changes to the MRA developed by the Basel Committee.
OTS is proposing its own market risk rule, including the proposed modifications, as a part of that separate NPR. In previous work, regulatory officials generally said that changes to the rules for determining capital adequacy for market risk were relatively modest and not a significant overhaul. The regulators have described the objectives of the new market risk rule as including enhancing the sensitivity of required capital to risks not adequately captured in the current methodologies of the rule and enhancing the modeling requirements consistent with advances in risk management since the implementation of the MRA. In particular, the rule contains an incremental default risk capital requirement to reflect the growth in traded credit products, such as credit default swaps, that carry some default risk as well as market risk. The Pillar 2 framework for supervisory review is intended to ensure that banks have adequate capital to support all risks, including those not addressed in Pillar 1, and to encourage banks to develop and use better risk management practices. Banks adopting Basel II must have a rigorous process of assessing capital adequacy that includes strong board and senior management oversight, comprehensive assessment of risks, rigorous stress testing and validation programs, and independent review and oversight. In addition, Pillar 2 requires supervisors to review and evaluate banks' internal capital adequacy assessments and monitor compliance with regulatory capital requirements. Under Pillar 2, supervisors must conduct initial and ongoing qualification of banks for compliance with minimum capital calculations and disclosure requirements. Regulators must evaluate banks against established criteria for their (1) risk rating and segmentation systems, (2) quantification process, (3) ongoing validation, (4) data management and maintenance, and (5) oversight and control mechanisms. Regulators are to assess a bank's implementation plan, planning and governance process, and parallel run performance. Under Pillar 2, regulators should also assess and address risks not captured by Pillar 1, such as credit concentration risk, interest rate risk, and liquidity risk. Pillar 3 is designed to encourage market discipline by requiring banks to disclose additional information and allowing market participants to more fully evaluate the institutions' risk profiles and capital adequacy. Such disclosure is particularly appropriate given that Pillar 1 allows banks more discretion in determining capital requirements through greater reliance on internal methodologies. Banks would be required to publicly disclose quantitative information quarterly and qualitative information annually. For example, such information would include a bank's risk-based capital ratios and their capital components, aggregated information underlying the calculation of their risk-weighted assets, and the bank's risk assessment processes. In addition, federal regulators will collect, on a confidential basis, more detailed data supporting the capital calculations. Federal regulators would use these additional data, among other purposes, to assess the reasonableness and accuracy of a bank's minimum capital requirements and to understand the causes behind changes in a bank's risk-based capital requirements. Federal regulators have developed detailed reporting schedules to collect both public and confidential disclosure information.
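To make Pillar 1's credit risk calculation concrete, the sketch below applies the Basel II supervisory formula for a wholesale (corporate) exposure to the four risk parameters described above (PD, LGD, EAD, and M). The inputs are hypothetical, and the sketch omits refinements in the actual rules, such as the treatment of defaulted exposures and the framework's 1.06 scaling factor on credit risk-weighted assets.

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def wholesale_capital(pd_, lgd, ead, m):
    """Capital for a corporate exposure under the Basel II supervisory
    formula (simplified sketch; PD and LGD as decimals, M in years)."""
    pd_ = max(pd_, 0.0003)  # the framework floors wholesale PD at 0.03 percent
    # Asset correlation declines as PD rises (corporate correlation function).
    w = (1 - exp(-50 * pd_)) / (1 - exp(-50))
    r = 0.12 * w + 0.24 * (1 - w)
    # Default rate conditional on a 99.9th percentile systematic shock.
    cond_pd = N.cdf((N.inv_cdf(pd_) + sqrt(r) * N.inv_cdf(0.999)) / sqrt(1 - r))
    # Maturity adjustment for exposures longer or shorter than 2.5 years.
    b = (0.11852 - 0.05478 * log(pd_)) ** 2
    k = lgd * (cond_pd - pd_) * (1 + (m - 2.5) * b) / (1 - 1.5 * b)
    return k * ead  # required capital in dollars

# Hypothetical $10 million loan: 1 percent PD, 45 percent downturn LGD,
# 2.5-year maturity. Yields roughly $740,000 of required capital, versus
# the flat $800,000 (8 percent of a 100 percent risk weight) under Basel I.
print(f"${wholesale_capital(0.01, 0.45, 10_000_000, 2.5):,.0f}")
```

The point of the sketch is the mechanism: required capital rises and falls with the bank's own estimates of PD, LGD, and maturity rather than with a fixed risk weight.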
The following timeline summarizes key events in the development of the Basel capital framework and its U.S. implementation:
1988, July: Basel Committee issues the Basel Capital Accord (Basel I), international risk-based capital requirements for banks in G10 countries, to be fully implemented by 1992.
1989: Bank regulators—OCC, OTS, Federal Reserve, and FDIC (hereafter "regulators")—implement Basel I with a transition period to 1992. Regulators fully phase in Basel I as part of broader changes to capital regulation.
1991: The prompt corrective action provisions of FDICIA require adequately capitalized and well-capitalized institutions to meet or exceed Basel I risk-based capital requirements as well as a leverage requirement.
1996, January: Basel Committee amends Basel I to incorporate market risk. The Market Risk Amendment introduces the use of institutions' internal models of risk to determine regulatory capital requirements.
1996, September: OCC, Federal Reserve, and FDIC issue a final rule implementing the Market Risk Amendment, requiring institutions with significant trading activity to use internal models to measure and hold capital in support of market risk exposures.
1999, June: Basel Committee proposes for comment incremental revisions to Basel I for credit risk (standardized approach), plans to develop an alternative internal ratings-based (IRB) approach, and a proposed capital charge for other major risks, including operational risk.
2001, January: Basel Committee releases a revised proposal based on consultation with industry and supervisors. The Committee aims to encourage improved risk management practices in part through capital incentives for banks to move to the more risk-sensitive IRB approach.
2003, April-May: Basel Committee releases results of a global quantitative impact study (QIS-3) and issues its third consultative paper for comment.
2003, August: Regulators release an advance NPR on Basel II for comment. The proposed rule requires the advanced approaches for credit and operational risk to be applied by only the largest and/or internationally active banks and holding companies. Existing capital rules would be retained for all other banks.
2004, June: SEC releases an alternative net capital rule that permits certain broker-dealers to use internal mathematical models to calculate market and derivative-related credit risk. To apply the rule, a broker-dealer's ultimate holding company must consent to groupwide supervision and report capital adequacy measures consistent with Basel standards.
2004, June: Basel Committee issues the final revised framework for Basel II (New Basel Accord). It reiterates objectives of broadly maintaining the level of aggregate required capital while also providing incentives to adopt the more advanced approaches. The framework includes changes such as a 1.06 scaling factor by which capital requirements for credit risk would be multiplied in order to maintain capital neutrality with previously estimated results.
2005, January: Regulators issue an interagency statement on the qualification process for the advanced approaches based on the New Basel Accord.
2005, April: Regulators announce a delay in the Basel II rulemaking process after results of a fourth quantitative impact study (QIS-4) estimated material reductions in aggregate capital requirements and significant variation in results across institutions and portfolios. Regulators later state that such results would be unacceptable in an actual capital regime.
2005, September: Regulators announce a 1-year delay in implementation and additional safeguards to prevent unacceptable declines in required capital as estimated in QIS-4. The agencies retain the leverage requirement, add a transition year, and establish stricter transition period limits on capital reductions for individual institutions.
2005, October: Regulators issue the Basel IA advance NPR. It revises Basel I to address competitive inequities between large and small institutions by providing a more risk-sensitive framework similar to the standardized approach under the Basel II international accord.
2006, June: Basel Committee releases results of a global quantitative impact study (QIS-5) of estimated changes in minimum required capital under Basel II.
2006, June: EU issues its final rule implementing Basel II (EU Capital Directive).
2006, March: Federal Reserve releases a draft Basel II NPR to allow industry time to comment and prepare. In addition to previously announced safeguards, it states that the agencies would view a 10 percent or greater decline in aggregate risk-based capital requirements (compared with Basel I) as a material drop warranting changes to the Basel II framework.
2006, September: Regulators release for comment the official NPR for Basel II and for market risk. The Basel II NPR requests comment on whether and how the standardized approach should be provided to banks as an option in addition to the advanced approaches for credit risk.
2006, December: Regulators release the NPR for Basel IA.
2007, February: Regulators issue proposed guidance for the advanced approaches and supervisory review.
2007, March: Comment period for the Basel II and Basel IA NPRs closes.
2007, July: Regulators agree to issue an advanced approaches rule more consistent with the New Basel Accord and to issue an NPR for an optional standardized approach.
2007, December: Regulators issue the advanced approaches rule.
2008, July: Regulators issue an NPR for an optional standardized approach containing a question on whether core banks should be able to choose this approach. Regulators issue updated guidance for supervisory review.
2008, October: Last date for core banks to adopt an implementation plan signed by the board of directors.
Under the international accord's timetable, banks adopting the advanced approaches pass through a first transitional floor period (90 percent floor) and a second transitional floor period (80 percent floor). Under the U.S. transition schedule, core banks must begin four quarters of parallel run between 2008 and April 2010, after which full implementation first becomes available for Basel II. Core banks will then enter the first transitional floor period (95 percent floor) in 2009 to 2011; the second transitional floor period (90 percent floor) in 2010 to 2012, during which the study to assess material deficiencies will be conducted; and the third transitional floor period (85 percent floor) in 2011 to 2013. Core banks will be ready to fully implement the advanced approaches between 2012 and April 2014. Each floor applies to an individual institution's capital reduction. In addition to the contact named above, Barbara I. Keller (Assistant Director), Nancy Barry, Emily Chalmers, Michael Hoffman, Joe Hunter, Robert Lee, Marc Molino, Carl Ramirez, Barbara Roesmann, Paul Thompson, and Mijo Vodopic made key contributions to this report.
Related GAO Products: Financial Market Regulation: Agencies Engaged in Consolidated Supervision Can Strengthen Performance Measurement and Collaboration. GAO-07-154. Washington, D.C.: March 15, 2007. Risk-Based Capital: Bank Regulators Need to Improve Transparency and Overcome Impediments to Finalizing the Proposed Basel II Framework. GAO-07-253. Washington, D.C.: February 15, 2007. Deposit Insurance: Assessment of Regulators' Use of Prompt Corrective Action Provisions and FDIC's New Deposit Insurance System. GAO-07-242. Washington, D.C.: February 15, 2007. Financial Regulation: Industry Changes Prompt Need to Reconsider U.S. Regulatory Structure. GAO-05-61. Washington, D.C.: October 6, 2004. Risk-Focused Bank Examinations: Regulators of Large Banking Organizations Face Challenges. GAO/GGD-00-48. Washington, D.C.: January 24, 2000. Risk-Based Capital: Regulatory and Industry Approaches to Capital and Risk. GAO/GGD-98-153. Washington, D.C.: July 20, 1998. Bank and Thrift Regulation: Implementation of FDICIA's Prompt Regulatory Action Provisions. GAO/GGD-97-18. Washington, D.C.: November 21, 1996.
Basel II, the new risk-based capital framework based on an international accord, is being adopted by individual countries. It includes standardized and advanced approaches to estimating capital requirements.
In the United States, bank regulators have finalized an advanced approaches rule that will be required for some of the largest, most internationally active banks (core banks) and proposed an optional standardized approach rule for non-core banks that will also have the option to remain on existing capital rules. In light of possible competitive effects of the capital rules, GAO was asked to examine (1) the markets in which banks compete, (2) how new capital rules address U.S. banks' competitive concerns, and (3) actions regulators are taking to address competitive and other potential negative effects during implementation. Among other things, GAO analyzed data on bank products and services and the final and proposed capital rules; interviewed U.S. and foreign bank regulators and officials from U.S. and foreign banks; and computed capital requirements under varying capital rules. Large and internationally active U.S.-based banks (core banks) that will adopt the Basel II advanced approaches compete among themselves and in some markets with U.S.-based non-core banks, investment firms, and foreign-based banks. Non-core banks compete with core banks in retail markets, but in wholesale markets core banks often compete with investment firms and foreign-based banks. Because holding capital is costly for banks, differences in regulatory capital requirements could influence costs, prices, and profitability among banks competing under different rules. The new U.S. capital rules addressed some earlier competitive concerns of banks; however, other concerns remain. By better aligning the advanced approaches rule with the international accord and proposing an optional standardized approach rule, U.S. regulators reduced some competitive concerns for both core and non-core banks. For example, the U.S. wholesale definition of default for the advanced approaches is now similar to the accord's. Core banks continue to be concerned about the leverage requirement (a simple capital-to-assets calculation), which they believe places them at a competitive disadvantage relative to firms not subject to a similar requirement. Foreign regulators have been working with U.S. regulators to coordinate Basel II implementation for U.S. banks with foreign operations. The proposed standardized approach addresses some concerns non-core banks raised by providing a more risk-sensitive approach to calculating regulatory requirements. But other factors likely will reduce differences in capital for banks competing in the United States; for example, the leverage requirement establishes a floor that may exceed the capital required under the advanced and standardized approaches. Many factors have affected the pace of Basel II implementation in the United States and, while the gradual implementation is allowing regulators to consider changes in the rules and reassess banks' risk-management systems, regulators have not yet taken action to address areas of uncertainty that could have competitive implications. For example, the final rule provides regulators with considerable flexibility and leaves open questions such as which banks may be exempted from the advanced approaches. Although the rule provides that core banks can apply for exemptions and regulators should consider these in light of some broad categories, such as asset size or portfolio mix, the rule does not further define the criteria for exemptions.
Some industry participants we spoke with said that uncertainties about the implementation of the advanced approaches have been a problem for them. Moreover, regulators have not fully developed plans for a required study of the impacts of Basel II before full implementation. Lack of specificity in criteria, scope, methodology, and timing will affect the quality and extent of information that regulators will have to help assess competitive and other impacts, determine whether there are any material deficiencies requiring future changes in the rules, and determine whether to permit core banks to fully implement Basel II.
The Congress enacted the Ryan White CARE Act on August 18, 1990, to “improve the quality and availability of care for individuals and families with HIV disease.” The CARE Act makes funds available through four titles to states, EMAs, and nonprofit entities for developing, organizing, coordinating, and operating more effective and cost-efficient service delivery systems. The Health Resources and Services Administration, part of the Department of Health and Human Services’ U.S. Public Health Service, administers the program. Over $579 million in CARE Act funds were appropriated in fiscal year 1994 for services to people with AIDS and HIV. About $326 million (56 percent) of these funds were appropriated for title I, which provides “emergency assistance” to EMAs—metropolitan areas disproportionately affected by the HIV epidemic. Half of title I funds are distributed by formula, and half are distributed competitively. To be eligible, a metropolitan area must have a cumulative count of more than 2,000 cases of AIDS since reporting began in 1981 or a cumulative count of AIDS cases that exceeds one-quarter of 1 percent of its population. In fiscal year 1994, there were a total of 34 EMAs in 17 states, the District of Columbia, and Puerto Rico. Since fiscal year 1991, the number of EMAs has more than doubled. For title II, $184 million (32 percent of total CARE Act funds) were appropriated in fiscal year 1994. Title II provides funds to states to improve the quality, availability, and organization of health care and support services for people with HIV. Of the title II funds distributed to the states in fiscal year 1994, 90 percent were distributed by formula, and 10 percent were distributed competitively through Special Projects of National Significance. The remaining titles—titles IIIb and IV—were funded at about $48 million (8 percent) and $22 million (4 percent), respectively, in fiscal year 1994. Title IIIb funds are intended for early intervention programs, and title IV funds are intended for pediatric AIDS programs. Under both of these titles, funds are awarded competitively. Our examination of the existing title I and II formulas indicates that neither formula meets the beneficiary and taxpayer equity criteria. Per-case funding is not systematically related to either EMA or state service costs or their fiscal capacity. (See app. II for details of our analysis.) The title I formula does not meet the beneficiary equity criterion because per-case funding is not systematically related to the cost of treating people with HIV. Specifically, our analysis of fiscal year 1994 funding for EMAs showed that per-case funding ranged from $818 to $2,663—a difference of over 200 percent. However, only 10 percent of this variation was related to cost differences—though the cost differences themselves were significant. As an illustration, the Dallas and Oakland EMAs each received title I allocations of approximately $1,200 per person with AIDS, but the cost of providing health care services in Oakland is about 37 percent higher than in Dallas. The title I formula also does not meet the taxpayer equity criterion because, in addition to not being systematically related to cost differences, EMA grant amounts are not highly related to the EMAs’ fiscal capacity. Our analysis of fiscal year 1994 funding for all EMAs showed that more than 40 percent of the variation in EMAs’ per-case funding was unrelated to differences in cost and fiscal capacity. 
For example, the Dallas and Oakland EMAs received about the same per-case funding, but Oakland’s funding capacity when measured in terms of its tax base, costs, and concentration of AIDS cases is about 17 percent lower than that of Dallas. The distribution of combined title I and II funds across states does not meet either the beneficiary or the taxpayer equity criterion. Total per-case funding for California and New York is about 20 percent and 30 percent above the national average, respectively, while Hawaii, Ohio, and Vermont have total per-case funding levels about 50 percent below the national average. These funding differences are not strongly related to differences in states’ costs and fiscal capacity to provide services. Our statistical analysis found that differences in service costs and fiscal capacity account for 33 percent of these differences in per-case funding. That is, 67 percent of the variation in state funding per AIDS case is unrelated to states’ funding needs. (See app. II for details.) Several features of the title I and II formulas contribute to the funding inequities we have identified. Specifically, inequities occur because EMA cases are counted in both the title I and II formulas, an inappropriate caseload measure is included in the title I formula, an inappropriate measure of EMAs’ and states’ fiscal capacity is included in both formulas, and neither formula includes a measure of EMAs’ and states’ service costs. (See appendixes for details.) Our analysis of differences in states’ per-case funding amounts indicates that about half of the variation is due to the double counting of EMA cases in both the title I and II formulas rather than differences in funding needs (that is, cost or fiscal capacity differences). States where most cases live in EMAs receive the largest amounts per case, since larger proportions of their caseloads are double counted. For example, per-case funding was about $1,100 in states without an EMA, $1,700 in states where less than half the state’s caseload lived in an EMA, and $2,200 in states where more than half of the caseload lived in an EMA (see fig. 1). Thus, most of the variation in per-case funding can be explained by the extent to which a state’s caseload is double counted rather than by the state’s funding needs. The title I caseload measure is based on the cumulative number of people with AIDS that EMAs reported to CDC since 1981 when reporting began. By the end of 1993, however, two-thirds of these people had been reported to have died and were, therefore, no longer using services funded by title I. Because the formula includes deceased persons, the EMAs that experienced the first outbreak of AIDS receive substantially more per-case funding than do newer EMAs. For example, in fiscal year 1994, the 18 EMAs that were eligible to receive title I funds in the first 2 years of eligibility—1991 and 1992—were funded at about $1,500 per case. In contrast, the 16 EMAs that became eligible in 1993 and 1994 were funded at only about $1,000 per case—one-third less than the older EMAs (see fig. 2). While the cost of providing AIDS and HIV services varies among EMAs and states, neither the title I nor title II formula includes a factor to measure those differences. Information on the actual costs of providing health and support services to people with AIDS and HIV within different geographic areas is not available. 
However, most of the delivery costs appear to be associated with the personnel who provide the labor-intensive outpatient health, support, and case management services titles I and II primarily fund. A proxy measure for these labor costs is available through the Medicare Hospital Wage Cost Index. Using this index for title I cities, we estimated that the cost of providing medical services was about 30 percent above the national average in the New York, Oakland, and San Francisco EMAs and about 10 percent below the national average in the Miami EMA—a difference of about 40 percent. This suggests that the New York, Oakland, and San Francisco EMAs must spend much more than the Miami EMA to provide a comparable level of services to their patients. Similarly, under title II, we estimated that the cost of providing medical services was more than 15 percent above the national average in the states of Alaska, California, and New York, about 15 percent below the national average in Alabama and Arkansas, and about 20 percent below the national average in Mississippi. State and EMA fiscal capacity depends on the size of the tax base and the service demands placed on that tax base. The current title I formula measures the demand for services through the use of an AIDS incidence rate factor, but the strength of each EMA’s tax base is not included. As a result, the title I formula does not adequately adjust EMAs’ allocations to target those with smaller tax bases and fewer resources to draw upon to meet the needs of the cases they must serve. The title II formula does measure the strength of each state’s tax base through the use of per capita personal income. However, it does not consider the demand for services that is placed on state tax bases. As a result, the title II formula does not adequately adjust state allocations to target states with tax bases that are burdened by a heavy demand for services. Greater funding equity can be achieved by changing the formulas’ structure and components. The formulas can be modified to make their funding distribution meet either the beneficiary equity criterion or the taxpayer equity criterion. Alternatively, although no formula can completely satisfy both criteria simultaneously, the formulas could be modified to partly meet both criteria, emphasizing beneficiary equity over taxpayer equity or vice versa. Regardless of which criterion is emphasized, however, the following changes could make the title I and II formulas more equitable. (See appendixes for details.) The current title I and II structure could be revised to avoid inequities created by counting EMA cases in both formulas. Presently, funding for titles I and II does not always reflect the division of service responsibilities between EMAs and state governments. Through title I, EMAs provide medical and support services to people who reside in their areas of coverage. Through title II, states provide medical and support services to people living outside EMAs and commonly provide these services to people living in EMAs as well. In addition, through title II, states administer services such as medication assistance and insurance continuation statewide for cases both within and outside of their EMAs. Nonetheless, while EMAs typically provide the bulk of medical services to people living within their areas, title II provides funding as if states were providing both medical and statewide services to the EMA cases. 
This results in a higher level of per-case funding for states with EMAs because the EMA cases are double counted. A more equitable structure would, in effect, double count all cases. Cases would be counted once for the statewide services such as medication assistance and insurance continuation, and again for medical services that are jointly provided by states and EMAs. One means for achieving this would be to make separate appropriations for the major activities funded by the CARE Act. One appropriation would be made for services that state governments provide statewide, and a second appropriation would be made for medical services that are jointly provided by states and their EMAs (see fig. 3). Funding for statewide services would be allocated to state governments on the basis of each state’s total AIDS caseload. Funding for medical services would be divided into two separate allocations for state governments and EMAs. The allocation to state governments would be based on AIDS cases living outside a state’s EMAs. The allocation to EMAs would be based on AIDS cases living in their service delivery areas. With this method, each state’s entire caseload is counted twice: once for funding statewide services and again for funding state-EMA medical services. The approach would only be a means of allocating federal funds to the entities responsible for delivering services and would not change the latitude currently afforded local governments and states in deciding how to best use those funds. Consequently, this approach should have only a minimal effect on existing service delivery structures because it leaves EMA and state responsibilities essentially unchanged. In addition to changing the structure of the formulas, funding equity could be improved by changing the formulas’ components. Specifically, funding equity could be improved by modifying the existing caseload and fiscal capacity measures, and by including a cost measure. First, funding equity could be improved by including a caseload measure that better reflects the number of people living with AIDS and excludes deceased persons. We have developed a proxy measure of people living with AIDS from existing CDC data. Funding equity could also be improved by including a cost measure, such as the Medicare Hospital Wage Cost Index. Use of such a measure would better compensate the EMAs and states that must pay more to provide services to their patients because of their higher private sector health care costs. Finally, to increase resources in states and EMAs with poorer fiscal capacity, the current fiscal capacity factors could be revised to better measure the EMAs’ and states’ AIDS incidence rates and tax bases. Currently, the title I fiscal capacity factor lacks a measure of EMAs’ tax bases, and the title II factor lacks a measure of states’ AIDS incidence rates. By having more complete measures of EMA and state fiscal capacities, the formulas could adjust grants on the basis of both the demand for services and the strength of tax bases. In addition, using total taxable resources (TTR) in the state formula instead of personal income could result in a more comprehensive measure of state tax bases. (For the effects of these changes on specific state and EMA grants, see app. V.) Our analysis of the existing formulas demonstrates that federal funding under titles I and II of the CARE Act can be made more equitable. An important purpose of the Ryan White CARE Act was to target emergency funding to areas of greatest need. 
At the time the law was enacted, high incidences of HIV were found in fewer areas of the country, service delivery networks were just beginning to form, and these service delivery systems had to rely primarily on private and volunteer resources. In the past 5 years, however, the HIV epidemic has become more widespread and less localized. Hence, areas where the AIDS caseload has burgeoned recently need per-case funding levels comparable to those in areas where AIDS was initially concentrated. To achieve greater equity in the distribution of funds, we recommend that the Congress modify the funding formulas to (1) reduce the double counting of EMA cases so that comparable medical services funding is available for people with AIDS, regardless of where they live; (2) adopt a caseload indicator that better reflects the number of people living with AIDS who are in need of services; and (3) include an indicator that reflects the relative differences across states and EMAs in the cost of serving people with AIDS. If the Congress wishes to target more aid to states and EMAs with limited fiscal capacity, then it may consider adopting an indicator that reflects the relative strength of local tax bases and concentrations of people with AIDS. Alternatively, the Congress may wish to discontinue the use of AIDS incidence rates in the title I formula and per capita income in the title II formula because of the funding inequities that these components produce. Finally, modifying the formulas to achieve a more equitable distribution of funds will involve significant changes in grants to some EMAs and states. To avoid possible disruption of service delivery, the Congress may wish to consider phasing in formula modifications. This should minimize, if not avoid, disruption for the service delivery networks the CARE Act has made possible over the last 5 years. If you or your staff have any questions regarding this report, please contact me on (202) 512-7119 or Jerry Fastrup, Assistant Director, on (202) 512-7211. Major contributors to this report are listed in appendix VII. To determine how equitably title I and II funds are distributed, we examined the existing formulas, applied two widely recognized equity criteria, and determined whether the existing or alternative formula factors would best allocate funds according to these standards. Title I funds are distributed on the basis of the cumulative number of AIDS cases EMAs report and their cumulative AIDS incidence rate:

Grant_i = A × [Cases_i × (R_EMA,i / R_All EMA)] / Σ_j [Cases_j × (R_EMA,j / R_All EMA)]

where Cases_i = the cumulative number of AIDS cases in the ith EMA, R_EMA,i = the per capita incidence of cumulative AIDS cases in that EMA, R_All EMA = the per capita incidence of cumulative AIDS cases in all EMAs, and A = the total amount of formula funds to be allocated. Title II funds are distributed to states on the basis of the number of AIDS cases they reported in the 2 most recent fiscal years and their per capita income:

Grant_i = A × (Cases_i / PCI_i) / Σ_j (Cases_j / PCI_j)

where Cases_i = the number of cases reported by the ith state in the 2 most recent fiscal years and PCI_i = the average per capita income of the ith state relative to that of the United States. The two standards of equity that we applied were the beneficiary and taxpayer equity criteria. To meet the beneficiary equity criterion, funding should be distributed in a way that enables EMAs and states to purchase comparable levels of AIDS and HIV medical and support services. In other words, per-case funding should be about the same in each of the EMAs and states after adjusting for cost differences.
The formula for producing a funding distribution that meets the beneficiary equity criterion is

Grant_i = A × (Cases_i × Cost Index_i) / Σ_j (Cases_j × Cost Index_j)

In this formula, Cases_i = the number of people in need of services in the ith EMA or state, Cost Index_i = an index measuring relative differences in the per-case cost of serving recipients in the ith EMA or state, and A = the total amount of funds to be allocated. To meet the taxpayer equity criterion, funding should be distributed in a way that enables EMAs and states to purchase comparable levels of AIDS and HIV services with comparable burdens on their taxpayers. Therefore, under this criterion, per-case funding should be about the same in each of the EMAs and states, once adjusted for differences in their service costs and fiscal capacities. Per-case funding should only differ to the extent that costs and fiscal capacities do. The formula for producing a funding distribution that meets the taxpayer equity criterion is

Grant_i = A × (Cases_i × Cost Index_i × Federal Percentage_i) / Σ_j (Cases_j × Cost Index_j × Federal Percentage_j)

In this formula, cases and costs are the same as in the beneficiary equity formula and represent an EMA's or state's funding need. The federal percentage represents the share of an EMA's or state's funding need that will be counted in the formula and varies with EMAs' and states' fiscal capacity according to the following formula:

Federal Percentage_i = 1 - (0.20 × Fiscal Capacity Index_i)

The fiscal capacity index represents the ability of grantees to fund services from state and local resources; grantees with above-average fiscal capacity thus have a smaller share of their funding need counted. We applied a weight of 0.20 to this index because that is the weight implicitly applied to fiscal capacity through the AIDS incidence rate found in the existing title I formula. As these formulas show, to meet the beneficiary equity standard, the funding formula would base its allocation on states' or EMAs' cases and costs, and to meet the taxpayer equity standard, the formula would also include a fiscal capacity factor. Hence, in determining whether the formulas distribute title I and II funds in accordance with the beneficiary and taxpayer equity criteria, we sought indicators that were reflective of these three factors and were appropriate for use in grant allocation formulas. We considered four approaches to estimating the number of people living with AIDS in each of the EMAs and states: cumulative AIDS cases, AIDS cases less reported deaths, AIDS cases reported in the 2 most recent years, and weighted AIDS cases. The first approach—cumulative AIDS cases—is the caseload measure found in the current title I formula. In the context of our equity criteria, this approach assumes that the number of people currently living with AIDS can be estimated by using the cumulative number of AIDS cases reported since 1981. About 66 percent of these AIDS cases are no longer living, however, and the likelihood of death increases substantially the longer one has AIDS. As a result, this measure would direct funds more to where the epidemic occurred initially rather than to where it appeared more recently. The second approach—AIDS cases less reported deaths—subtracts each state's and EMA's total reported deaths from their total reported AIDS cases for the 10 most recent years. The total number of living cases is then determined by adding each year's surviving cases. While this approach appears to provide a reasonable estimate of the number of people living with AIDS, it is not an appropriate caseload measure for allocating funds.
Our interviews with experts and our review of the literature indicated that this estimate would be biased because AIDS-related deaths are more extensively and quickly reported in some states and EMAs than in others, and this results in measurement errors. Furthermore, since funds are based on the number of people living with AIDS, those states and EMAs that underreport AIDS-related deaths would be rewarded, while others with more reliable reporting would, in effect, be penalized. Many of the experts we interviewed expressed concerns that this method could introduce incentives to purposely underreport deaths. Consequently, states and EMAs might delay or not even report these deaths, which could introduce another bias into the caseload measure and result in less reliable information on the lifespans of people with the disease. The third approach that we considered uses the number of AIDS cases reported in the most recent 2 years to estimate the number of living AIDS cases. This is the caseload measure currently used in the title II formula, and it appears to reasonably estimate the number of living cases. However, because this measure consists of cases from a narrow time frame, we believe it could be too sensitive to sudden caseload changes and disrupt the continuity of funding over time. Also, the expected lifespans for people with AIDS could increase over time. If this occurs in the future, the cases reported in a 2-year interval may not accurately reflect the number of people living with the disease. The final approach—weighted AIDS cases—is a proxy measure of living AIDS cases. This approach estimates the number of AIDS cases living in an EMA or state on the basis of the number of AIDS cases reported to CDC for each of the most recent 10 years and national average survival rates since a case was first reported. Specifically, the number of AIDS cases that an EMA or a state had reported for each of these 10 years would be weighted by the national percentage of cases estimated to be living as of the first day of the most recent year of that period. These percentages would be estimated from national data on the number of people reported to have AIDS during a 10-year period who had not been reported to have died of the disease. Table I.1 shows the cumulative survival rates for each of 10 years as of fiscal year 1992. According to these data, 88 percent of the cases reported in 1992 were estimated to have survived at least 1 day in that year, and 57 percent of the prior year's cases were estimated to still be alive as of that date. This approach appears to be the most appropriate. Unlike the cumulative AIDS cases measure, it has been adjusted to account for people with AIDS who are no longer living and thus better reflects the intended service population. In contrast to the second approach, this one averages out differences in reporting mortality and avoids incentives to underreport deaths. Specifically, since the algorithm for estimating living cases would be based on national data, any uniqueness in how states and EMAs report mortality would not affect the amount of funds that they would receive. Finally, this measure applies differential weights to cases from a wide time frame. As a result, sudden caseload changes should not significantly disrupt funding continuity over time. Also, this measure can be adjusted to recognize changes in AIDS mortality.
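A minimal sketch of the weighting calculation follows. The first two survival weights (0.88 and 0.57) are the values cited from table I.1; the remaining weights and the caseload history are invented for illustration.

```python
# National survival weights: the estimated share of cases reported n years
# ago that were still living at the start of the most recent year. Only the
# first two values (0.88, 0.57) come from table I.1; the rest are invented.
survival_weights = [0.88, 0.57, 0.40, 0.28, 0.20, 0.14, 0.10, 0.07, 0.05, 0.04]

def weighted_living_cases(reported_by_year):
    """Estimate living AIDS cases from the 10 most recent years of reported
    cases (most recent year first) using national survival weights."""
    return sum(w * n for w, n in zip(survival_weights, reported_by_year))

# A hypothetical EMA that reported 1,000 cases in each of the last 10 years
# would be credited with 2,730 estimated living cases rather than the
# 10,000 counted under the cumulative case measure.
print(weighted_living_cases([1000] * 10))
```

Because the weights are national averages, an EMA's or state's own death-reporting practices do not affect its estimate, which is what removes the incentive to underreport deaths.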
Table I.2 compares the proxy measure of people living with AIDS based on weighted AIDS cases and the proxy measure based on cumulative AIDS cases—the existing title I case measure—for each of the EMAs as of December 1993. Also shown is each EMA's caseload share based on these measures. Since funds are distributed on the basis of caseload shares rather than number of cases, the former is actually the more relevant measure from a formula perspective. The extent to which the cumulative case measure distorts EMAs' demand for services is shown by the differences in weighted and cumulative caseload shares. For example, the cumulative case measure overestimates caseload shares for the New York City, San Francisco, Newark, and Jersey City EMAs by 7.85 to 15.52 percent. Conversely, the cumulative case measure underestimates demand for services in EMAs such as Riverside-San Bernardino, Orlando, St. Louis, Tampa-St. Petersburg, and Phoenix by 13.22 to 18.39 percent. These differences reflect the distortions created by the large number of deceased persons in the case counts for the older and larger EMAs. Table I.3 compares our proxy measure of people living with AIDS that is based on weighted living cases and the existing title II case measure—2 years of cases—for each of the states and territories as of December 1993. The table also shows the caseload shares based on these two measures. Once again, an examination of caseload shares under these two methods demonstrates the distortions that result from using only 2 years of cases to estimate the number of people living with AIDS. The measure overestimates caseload shares for Delaware and South Dakota by 9.61 to 17.94 percent and underestimates caseload shares for Hawaii, Montana, and New Jersey by 5.22 to 7.18 percent. Neither the title I nor title II formula includes a factor that reflects differences in the cost of serving AIDS cases. We were not able to locate existing information on the actual cost of providing health and support services to people with AIDS and HIV within different geographic areas. As a result, we constructed a proxy for the cost of serving AIDS cases. The major factors that typically affect service costs are the personnel who supply the service, capital costs such as office rent, and supply costs such as medications. Titles I and II primarily fund outpatient health, support, and case management services, which are labor-intensive. Hence, most of the service delivery costs for services funded by titles I and II would be associated with the personnel who provide the services. Furthermore, from our discussions with experts, we determined that an existing measure of health labor costs—the Medicare Hospital Wage Cost (MHWC) Index—might be an appropriate indicator of differences in labor costs among EMAs and states. This wage index was derived by HCFA from hospital salary surveys and was designed to reflect personnel costs in hospitals subject to the Medicare prospective payment system (PPS). Accordingly, the index is based on the salaries of nurses, therapists, technicians, physicians, and administrative staff. In addition to being used for PPS, the MHWC Index has been used to estimate cost variation for ambulatory service centers, home health care providers, and skilled nursing facilities.
An underlying assumption in our using the MHWC Index to estimate costs for the personnel who deliver services funded by the CARE Act is that the relative differences in these costs should mirror the relative differences in costs of hospital personnel. That is, in places where hospital personnel costs are high, costs should also be high for the personnel who provide services funded by the CARE Act. Likewise, in places where hospital personnel costs are low, the costs for the personnel providing services funded by the CARE Act should also be low. HCFA collects nationwide data on hospitals participating in PPS, so cost data are readily available for each of the EMAs and non-EMA areas. HCFA publishes these data for metropolitan areas, and using HCFA's automated MHWC database, we were able to construct a wage index for each of the states. We were unable to locate existing data on the second major cost category—capital costs. High-cost areas, however, tend to have high costs both for salaries and capital (for example, rent for office space). In our view, therefore, the MHWC Index would appear to be a reasonable proxy for differences in both personnel and capital costs. The third major cost category—the cost of supplies such as medications—is assumed not to systematically vary by location. This is because the amount an EMA or state pays for supplies like medications is determined by a number of factors, including the price that they are able to negotiate with suppliers. For our analysis, we constructed a cost index assuming that 30 percent of costs do not systematically vary by location and 70 percent do, with the MHWC Index serving as a proxy for the variation in the location-sensitive costs:

Cost Index_i = 0.3 + (0.7 × MHWC Index_i)

We applied a weight of 30 percent for costs that do not systematically vary because that is the approximate percentage of title II funds typically expended on medications. Tables I.4 and I.5 display our estimated service costs—the MHWC Index and the resulting GAO cost index, each with an average of 1.00—for the EMAs and states, respectively. As shown in these tables, costs can vary by as much as 100 percent. For example, service costs in Oakland are twice those in Ponce and San Juan, and 48 percent higher than in Miami. Similarly, service costs in Alaska are over 50 percent higher than in Mississippi. A comprehensive indicator of an EMA's or state's fiscal capacity to provide AIDS and HIV health and support services is one that includes both a measure of the resource base (that is, tax base) and the potential demand placed on these resources to fund AIDS and HIV services. For the title I formula, we used per capita income (PCI) as the proxy measure for EMA resources. PCI data are compiled by the Department of Commerce and are used to measure the income received by a jurisdiction's residents, including wages and salaries, rents, dividends, interest earnings, and income from nonresident corporate business. PCI also includes an adjustment for the rental value of owner-occupied housing on the grounds that such ownership is similar to the interest income earned from alternative financial investments. While PCI does not measure all taxable income, it is the most comprehensive measure of EMA residents' income currently available. As a proxy for the level of demand placed on EMAs' resources, we used AIDS incidence rates based on our estimate of living AIDS cases.
AIDS incidence indicates the proportion of each EMA's population that has been reported to have AIDS. As such, AIDS incidence considers the relative rather than the absolute demand placed on an EMA's resources. Those EMAs with larger proportions of their populations having the disease are expected to have greater demands on their resources than are EMAs with smaller proportions of their populations infected. A complete title I fiscal capacity measure was constructed by first producing cost-adjusted income amounts for each EMA through dividing their PCI by their MHWC Index values. This adjustment ensured that EMAs were compared in terms of income that was of comparable purchasing power. Next, we divided these cost-adjusted values by each EMA's AIDS incidence rate:

Fiscal Capacity_i = (PCI_i / MHWC Index_i) / AIDS Incidence_i

We followed similar steps in constructing the title II fiscal capacity measure, with the exception of using total taxable resources (TTR) to measure income. TTR is a broader measure of income than PCI because it considers all income potentially subject to a state's taxing authority. TTR is an average of PCI and per capita Gross State Product (GSP). GSP measures all income produced or received within a state, whether received by residents, nonresidents, or retained by business corporations. The title II fiscal capacity measure is

Fiscal Capacity_i = (TTR_i / MHWC Index_i) / AIDS Incidence_i

Under the current formulas, fiscal capacity is incompletely measured. The title I formula includes a measure of EMAs' AIDS incidence but omits a measure of their resources, which creates a bias against those EMAs with relatively low tax bases. In table I.6, we show EMAs' fiscal capacity as measured by the complete indicator that we constructed—real PCI per weighted case—and by the existing measure—AIDS incidence. This table also shows the percentage difference or disparity between these two measures. As shown in this table, fiscal capacity for the Riverside-San Bernardino EMA is estimated to be 147 percent of the EMA average when measured with a complete indicator—PCI per weighted case. When only AIDS incidence is considered, however, the EMA's fiscal capacity is estimated to be 245 percent of the average. Hence, when demand for services is considered relative to available resources, Riverside-San Bernardino's fiscal capacity is estimated to be 67 percent lower than what is estimated under the existing formula. Conversely, when measured with a complete indicator, San Francisco's fiscal capacity is estimated to be about 21 percent higher than is estimated under the current formula. This occurs because of the EMA's relatively high tax base. In contrast to the title I formula, the title II formula measures states' income and omits their AIDS incidence rates. This omission creates a bias against those states with relatively high service demands on their resources. Table I.7 shows states' fiscal capacity when measured by a complete indicator—real TTR per weighted case—followed by the existing measure—nominal per capita income. Also, the table shows the percentage difference or disparity between these two measures.
As shown in table I.7, Kentucky's fiscal capacity is estimated to be 17 percent below the national average when only PCI is considered. When income is adjusted by AIDS incidence rates, however, the state's fiscal capacity is estimated to be more than four times the national average. This occurs because of the relatively low AIDS incidence rate in Kentucky as compared with the state's available resources. Conversely, while the District of Columbia has a relatively large resource base (33 percent above the national average), its AIDS incidence is also relatively high. Consequently, when measured with a complete indicator, the District of Columbia's fiscal capacity is found to be 72 percent below the national average. We compared the distribution of title I funding and the combined title I and II funding against the beneficiary and taxpayer equity criteria. These comparisons indicated that the current formulas do not distribute funding in accordance with either of the equity criteria. Under the beneficiary equity standard, the size of the grant award depends on two factors: the number of cases and the cost of services. If the grant is expressed on a per-case basis, this implies that per-case funding should vary only with differences in the cost of services. To determine how well the current distribution of title I funds meets the beneficiary equity standard, we performed a regression analysis to determine the extent to which cost differences can account for differences in nominal per-case funding. If the current distribution of title I funds reflected the beneficiary equity standard, then a substantial share of the variation in per-case funding could be explained by cost differences. Our statistical analysis, however, indicates the current distribution of title I funds bears little relation to the variation in costs. The strength of a relationship is commonly measured by a statistic known as R², which indicates the share of the variation in one measure that is explained by another; in this case, cost differences explained only about 10 percent of the variation in per-case funding. The relationship is displayed in figure II.1, which plots nominal title I funding per case against cost for each EMA. If differences in per-case funding and costs were perfectly correlated, all EMAs would fall along the straight line shown in this figure. The wide scatter around the line, however, demonstrates that per-case funding and costs are not systematically related. For example, the service costs for the Oakland and San Francisco EMAs are about the same; yet, per-case funding for San Francisco is about twice that for Oakland. If per-case funding and costs were more strongly correlated, both EMAs would be positioned closer to the straight line. Furthermore, Oakland's per-case funding would even be slightly higher than San Francisco's rather than vice versa. This relationship is also illustrated by other pairs of EMAs. For example, New York City's service costs are about 20 percent higher than Jersey City's, but their per-case funding is about the same—about 30 percent above the EMA average. Consequently, at their current funding levels, the Jersey City EMA can purchase more services for its patients than can New York City. From the perspective of beneficiary equity, therefore, the current per-case funding distribution is inequitable. However, if these differences can be accounted for by differences in fiscal capacity, then the grant distribution may reflect our taxpayer equity criterion.
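The regression approach can be sketched as follows. The data here are simulated, since the purpose is only to show how R² measures the share of variation in per-case funding explained by cost alone (the beneficiary equity model) or by cost and fiscal capacity together (the taxpayer equity model).

```python
import numpy as np

def r_squared(y, x_columns):
    """Share of the variation in y explained by an OLS regression on the
    given predictor columns (an intercept is added automatically)."""
    x = np.column_stack([np.ones(len(y))] + x_columns)
    beta, *_ = np.linalg.lstsq(x, y, rcond=None)
    residuals = y - x @ beta
    return 1 - residuals.var() / y.var()

# Simulated data for 34 EMAs: a cost index, a fiscal capacity index, and
# per-case funding that is only loosely related to either.
rng = np.random.default_rng(1)
cost = rng.normal(1.00, 0.15, 34)
capacity = rng.normal(1.00, 0.30, 34)
funding = 1500 * cost - 300 * capacity + rng.normal(0, 400, 34)

print(r_squared(funding, [cost]))            # beneficiary equity model
print(r_squared(funding, [cost, capacity]))  # taxpayer equity model
```

Under full beneficiary equity the first R² would approach 1; under full taxpayer equity the second would. The low values we found (0.10 and under 0.60 for title I) are what indicate inequitable funding.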
The taxpayer equity standard implies that per-case funding should be related to both cost differences and differences in fiscal capacity. Thus, to determine if the formula meets the taxpayer equity standard, we performed a regression analysis to determine the extent to which differences in per-case funding can be explained by both cost and fiscal capacity differences. We used per-case funding (measured in nominal dollars) as the dependent variable and used both cost and fiscal capacity (also measured in nominal dollars) as independent variables. The regression results indicate a general tendency for title I funds to target more aid to EMAs with lower fiscal capacity. However, there are many exceptions to this general tendency. For example, while the San Francisco and New York City EMAs' funding capacities are comparable, they receive very different per-case real funding amounts. Real per-case funding is 40 percent above the average for San Francisco and only about average for New York City. Since the grant amounts have already been adjusted for cost differences, we would conclude that the New York City EMA is underfunded compared to San Francisco. Similarly, both West Palm Beach and Tampa have average funding capacities, but West Palm Beach receives about 25 percent more title I funds than Tampa. Based on examples like these and our regression results, we conclude that while title I funding demonstrates a tendency to target more aid to low-capacity EMAs, substantial inequities exist. The beneficiary equity standard for the combined distribution of title I and II funds uses caseload and cost measures that encompass the entire state rather than just an EMA. Using state rather than EMA data, we estimated the same regression models to assess the equity of the combined title I and II funding. These results show that the current distribution of title I and II funds, in combination, does not meet either equity standard. If beneficiary equity were fully realized, cost differences would account for all of the variation in per-case funding, but costs explain only 14 percent of this variation. To express this another way, under the beneficiary equity model, states with the same relative cost of services should receive equal funding on a per-case basis. As shown in figure II.3, however, states with dramatically different service costs received comparable per-case funding amounts. For example, service costs for both Georgia and New Mexico are about average; but Georgia's per-case funding is 69 percent higher than New Mexico's. A similar situation exists for Ohio and Texas. Both the regression results and these examples indicate that title I and II funds, in combination, are not distributed in a way that meets the beneficiary equity criterion. Under the taxpayer equity model, cost and fiscal capacity should account for 100 percent of the variation in per-case funding. Our regression analysis indicates these two factors account for only 33 percent of the variation in per-case funding. By implication, about 67 percent of the variation in per-case funding is unrelated to need as reflected by differences in cost and fiscal capacity. Specific examples of the inequities are illustrated in figure II.4. For example, Massachusetts and New Hampshire receive comparable per-case funding amounts, but New Hampshire's tax base is about four times that of Massachusetts. Similar situations exist for the states of Connecticut and Kentucky and for Delaware and Maine. In contrast, Hawaii receives a grant that is about half the amount that Missouri receives; yet, the states' tax bases are comparable.
This is also the case for the states of Georgia and Nevada and for Illinois and Oregon. On the basis of this analysis, we conclude that the combined title I and II funding does not meet the taxpayer equity criterion. When funding for titles I and II is considered jointly, the major cause of funding inequities is that EMA cases are counted in both formulas but cases outside EMAs are not. As a result, states with few or no cases in an EMA receive disproportionately less per-case funding than do states with large proportions of their caseloads in EMAs. The following two-state example demonstrates how the current structure produces funding inequities between a state with an EMA and one without an EMA. For this example, we will assume that $1,000 has been appropriated for each of titles I and II. Also, the two states are assumed to be alike in terms of their costs and funding capacity; however, they differ in the number of cases they must serve and whether these cases live in an EMA. State A has 200 cases, all living in an EMA, while State B has 100 cases and no EMA. Hence, State A has two-thirds and State B has one-third of the total cases. Since title I funds are allocated based on each state's share of EMA cases, the entire $1,000 would be distributed to the EMA in State A, and none of the funds would be distributed in State B (see fig. III.1). Title II funding is allocated in proportion to each state's total caseload. Since State A has two-thirds of all cases, it would receive two-thirds ($667) of the title II appropriation. State B would receive one-third ($333) of the appropriation. Each state's total grant is then determined by summing their title I and II grants (see fig. III.2). To determine the states' per-case funding amounts, their total grant amounts are divided by their total caseloads (see fig. III.3). In this example, the current title structure produces a difference in per-case funding between these two states of about 150 percent. Moreover, this difference is unrelated to the states' funding needs and occurs solely because of the existing title structure. To determine the extent to which the current structure accounts for per-case funding differences, we compared two regression models. The first model was our earlier one that examined the effects of differences in cost and fiscal capacity on states' combined title I and II per-case funding amounts. For the second model, we examined these effects along with the effect of the percentage of AIDS cases in an EMA on states' combined title I and II per-case funding amounts. As discussed in appendix II, the cost and fiscal capacity model explains only 33 percent of the variation in nominal title I and II per-case funding. In contrast, however, the model that also includes the percentage of EMA AIDS cases as a factor explains 85 percent of this variation. The relationship between the percentage of EMA AIDS cases and per-case funding is displayed in figure III.4. As shown in this figure, the states with fewer cases in EMAs (for example, Hawaii, Ohio, and West Virginia) receive the smallest grants, and the states with larger percentages of cases in EMAs (for example, California, the District of Columbia, and New York) tend to receive the largest grants. As demonstrated by the regression analysis and this figure, the funding differences result from the structure of the formulas rather than funding needs as measured by cases, costs, and fiscal capacity. Consequently, these differences are inequitable.
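The two-state arithmetic above is simple enough to verify directly. The sketch below reproduces the example's allocations using only the example's own figures ($1,000 per title, 200 and 100 cases):

```python
# The two-state example above, computed directly.
TITLE_I = 1000.0   # allocated by each state's share of EMA cases
TITLE_II = 1000.0  # allocated by each state's share of total cases

states = {"A": {"ema_cases": 200, "total_cases": 200},
          "B": {"ema_cases": 0,   "total_cases": 100}}

ema_total = sum(s["ema_cases"] for s in states.values())    # 200
all_total = sum(s["total_cases"] for s in states.values())  # 300

for name, s in states.items():
    grant = (TITLE_I * s["ema_cases"] / ema_total
             + TITLE_II * s["total_cases"] / all_total)
    print(f"State {name}: grant ${grant:,.0f}, "
          f"per case ${grant / s['total_cases']:.2f}")
# State A: $1,667 and $8.33 per case; State B: $333 and $3.33 per case.
```

The 150 percent figure is the gap between $8.33 and $3.33 in per-case funding, and it arises entirely from counting State A's EMA cases under both titles.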
(In figure III.4, of the states without an EMA, only the two that received the smallest and largest grants are displayed.) While improvements in funding equity can be achieved by adopting better indicators of caseload, cost, and funding capacity in the allocation formulas, greater improvement could result from changing the allocation structures to avoid inequities created by counting EMA cases in both formulas. A variety of approaches could be used that vary in terms of how they would affect existing service delivery responsibilities and structures. The simplest way to improve funding equity is to consolidate titles I and II into a single grant and distribute funds to the state governments through an equity-based formula. State governments would be the political entity responsible for using the aid provided for serving those in need. The funds would be allocated based on each state's total cases, thus avoiding any double counting of EMA cases. Two implications of the consolidated grant approach, however, are the potential infringement on the autonomy currently afforded EMAs in delivering services and changes in existing service responsibilities and structures. Currently, EMAs are responsible for delivering services within their areas, and they have service delivery networks already in place. Under a consolidated grant, all funds would be distributed to the states. Hence, a state could potentially assume total responsibility for service delivery in EMAs or continue to allow the EMAs to administer the programs they now operate, funding them from its consolidated grant. A second corrective approach maintains the two distinct titles—title I for EMAs and title II for states. However, the EMAs would become responsible for all services in their areas, including those services currently under the purview of the states through title II. Hence, EMAs and states would continue to be funded under separate titles, but the services funded under these titles would be identical. Both title I and II funds would be allocated through equity-based formulas. Title I funds would be distributed to EMAs on the basis of their respective shares of cases, and title II funds would be distributed to states on the basis of their respective shares of the non-EMA cases. Like the previous approach, this one avoids the inequities currently caused by counting EMA cases in both formulas. Furthermore, as do the existing formulas, this approach maintains the EMAs' autonomy in the delivery of services. In addition, this approach allows comparable per-case funding levels among the states and EMAs. However, this approach would lead to significant changes in service delivery responsibilities. Because EMAs are not currently responsible for providing services such as insurance continuation and medication assistance, they would have to develop the capacity to administer these services in addition to the medical and support services they now provide. A third corrective approach involves allocating funds for medical and support services separately from those services that states provide statewide. This approach avoids inequities produced by double counting EMA cases, continues the existing autonomy afforded EMAs, and requires no changes in existing service delivery responsibilities. Furthermore, the approach ensures that comparable per-case funding is available across EMAs and states and between EMA and non-EMA areas.
The following example, using the same two states from the example in appendix III, illustrates how this approach would improve funding equity. The two states are alike in terms of their costs and funding capacity, but they differ in the number of cases they serve and whether the cases live in an EMA. In this example, a total of $2,000 is appropriated: $1,500 for medical services and $500 for statewide services. As will be shown through this example, dividing funds in this way would result in funding amounts for title I and II activities that are comparable to what was found in the earlier example in which both titles each had a $1,000 appropriation. That is, $1,000 would still be available for the EMA, and $1,000 would still be available for the states because state funding would be determined by adding together the $500 for non-EMA medical services and $500 for statewide services. State A has 200 cases, all living in an EMA, and State B has 100 cases and no EMA. Hence, once again, State A has 100 percent of the EMA cases and two-thirds of total cases. State B has 100 percent of the non-EMA cases and one-third of total cases. Expressed differently, two-thirds of all cases live in an EMA and one-third of all cases do not. Under this approach, the medical services appropriation would be divided between EMA and non-EMA areas on the basis of their respective caseloads. Since two-thirds of all cases live in an EMA, two-thirds of the medical services appropriation ($1,000) would be set aside for the EMA. The remaining one-third of the medical services appropriation ($500) would be set aside for distribution to states on the basis of the number of cases living outside an EMA. EMA medical services funds are allocated based on the shares of cases living in an EMA. Since State A contains all EMA cases, all of these funds ($1,000) would be allocated within State A. None of these funds would be allocated within State B as it has no EMA. Non-EMA medical services funds would be allocated based on states’ shares of cases living outside an EMA. Since State A has no non-EMA cases, it would receive none of these funds. State B would receive the entire $500 of non-EMA medical services funds because it contains all non-EMA cases. The statewide services appropriation ($500) would be allocated based on each state’s share of total cases. Since State A has two-thirds of the total cases, it would receive $334; State B would receive the remaining one-third of funds ($167). The states’ total grants would be the sum of their EMA medical services grant, non-EMA medical services grant, and statewide services grant. In this case, State A would receive a total of $1,334 and State B would receive a total of $667 (see fig. IV.1). As before, the states’ per-case funding amounts would be obtained by dividing their total grant amounts by their total caseloads. Figure IV.2 shows the per-case funding amounts for the two states. Under our proposed approach, each state would receive identical per-case funding. This contrasts significantly with the current approach, which produces highly unequal per-case funding that is unrelated to either costs or funding capacity and is therefore inequitable. In this appendix, we describe how title I and II funding would be distributed if the formulas were changed to meet either the beneficiary or taxpayer equity criterion. 
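Before turning to those distributions, the corrective allocation just illustrated can also be verified directly. The sketch below uses only the example's figures ($1,500 for medical services, $500 for statewide services, and 200 and 100 cases); small differences from the example's dollar totals reflect rounding.

```python
# The corrective example above, computed directly: a $1,500 medical
# services pot split between EMA and non-EMA areas by caseload, plus a
# $500 statewide services pot allocated by total-case shares.
MEDICAL = 1500.0
STATEWIDE = 500.0

states = {"A": {"ema_cases": 200, "total_cases": 200},
          "B": {"ema_cases": 0,   "total_cases": 100}}

all_total = sum(s["total_cases"] for s in states.values())   # 300
ema_total = sum(s["ema_cases"] for s in states.values())     # 200
non_ema_total = all_total - ema_total                        # 100

ema_pot = MEDICAL * ema_total / all_total   # $1,000 for EMA areas
non_ema_pot = MEDICAL - ema_pot             # $500 for non-EMA areas

for name, s in states.items():
    non_ema_cases = s["total_cases"] - s["ema_cases"]
    grant = (ema_pot * s["ema_cases"] / ema_total
             + non_ema_pot * non_ema_cases / non_ema_total
             + STATEWIDE * s["total_cases"] / all_total)
    print(f"State {name}: grant ${grant:,.0f}, "
          f"per case ${grant / s['total_cases']:.2f}")
# Both states end up at $6.67 per case; the example's $1,334 and $667
# totals differ from these unrounded figures only by rounding.
```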
Both the beneficiary and taxpayer equity formulas were described in greater detail in appendix I, which also provided more detailed discussion of the caseload, cost, and fiscal capacity factors used in these formulas. Depending on the amount of title I and II funds appropriated or the use of funding-loss mechanisms such as hold-harmless provisions, formula modifications could decrease funding to some EMAs and states and increase funding to others. Whether and how funding losses should be prevented would be the decision of the Congress; however, in this appendix, we show the effects of formula changes when title I and II appropriations remain constant and no funding-loss mechanisms are employed. Table V.1 displays each EMA's title I fiscal year 1995 funding under both the existing and the beneficiary equity formulas, along with the difference in funding that would be received under these formulas. Relative to the existing formula, changes in EMAs' allocations under the beneficiary equity formula would range from a decrease of 33.57 percent to an increase of 58.72 percent. Table V.2 displays the distribution of title II fiscal year 1995 funding under both the existing and the beneficiary equity formulas. Relative to the existing formula, changes in states' allocations under the beneficiary equity formula would range from a decrease of 69.84 percent to an increase of 247.33 percent. Table V.3 displays title I fiscal year 1995 funding under both the existing and the taxpayer equity formulas. Relative to the existing formula, changes in EMAs' title I allocations under the taxpayer equity formula would range from a decrease of 37.18 percent to an increase of 38.90 percent. Title II funding for fiscal year 1995 under the existing and taxpayer equity formulas is shown in table V.4. Relative to the existing formula, changes in states' allocations under the taxpayer equity formula would range from a decrease of 77.14 percent to an increase of 290.91 percent. As demonstrated by these tables, changing the existing title I and II formulas would redistribute funds across EMAs and states. Compared with the existing formulas, either the beneficiary or the taxpayer equity formula would increase funding for more EMAs and states than it would decrease it (see table V.5). Nonetheless, under either formula, funding would decrease for some EMAs and states if appropriations remained stable. A number of mechanisms could be employed, however, to avoid EMA and state funding losses when funding equity is improved. Appropriations could be made to a level that obviates funding losses, hold-harmless provisions could be applied, or a limit could be placed on the amount of funds eligible for redistribution. For example, funding losses could be avoided by only redistributing funds that were appropriated in excess of a previous year's amount. We solicited comments on our report from the Department of Health and Human Services (HHS) through the Director, Office of HIV/AIDS Policy. He provided us his comments, along with those of the Division of HIV Services, Health Resources and Services Administration, which administers titles I and II of the CARE Act.
We also received comments from officials of the Centers for Disease Control and Prevention (CDC). In their general comments, the HHS officials stated that moving to a more equitable funding formula could cause significant funding changes and potential disruption to service delivery structures. We share these concerns and have discussed methods to avoid these kinds of difficulties in our report. The HHS officials also raised questions about the appropriateness of the Medicare Hospital Wage Cost (MHWC) Index as a proxy for estimating labor costs for AIDS and HIV services among EMAs and states. While we believe a wage index that is more closely related to these services would be preferable to the MHWC Index, we were unable to locate such an index. On the basis of our discussions with experts, however, we determined that the MHWC Index would be an appropriate alternative to a wage index that is specific to AIDS and HIV services. In addition, the HHS officials expressed concerns about the adequacy of both the level of funding and the health care infrastructure for AIDS and HIV services. While problems may exist with regard to funding and infrastructure, these issues were not within the scope of our study. The HHS officials provided specific comments about our report, which have been incorporated as appropriate. The CDC officials indicated that they agreed that a caseload indicator based on an estimate of living cases was preferable to the existing measure; however, they recommended the use of the number of AIDS cases reported during the previous 2 years rather than our proposed measure of weighted cases. These officials stated that our caseload measure would require annual revision, would serve as an incentive to states to underreport AIDS-related mortality, would be technically difficult to compute, and was not a standard method for estimating living AIDS cases. We agree that our caseload measure might periodically need revision, as would any such measure, in accordance with significant changes in AIDS mortality. However, our caseload measure, when adjusted over time, would more appropriately reflect the impact of changes in AIDS mortality on the number of people living with AIDS than would a measure based on cases reported in the previous 2 years. We do not believe our proposed measure would serve as an incentive to underreport AIDS-related mortality because states' funding would not be directly affected by their reported mortality data. As discussed in appendix I, we propose the use of weighted AIDS cases as a proxy measure rather than an actual estimate of living AIDS cases to avoid this potential incentive. Finally, we do not believe that our proposed measure would be technically difficult to compute. In addition to those named above, the following individuals made important contributions to this report: David Bieritz, Evaluator; Leslie Albin, Reports Analyst; Ann McDermott, Publishing Adviser.
Pursuant to a congressional request, GAO reviewed the funding formulas established under the Ryan White Care Act, focusing on: (1) whether the existing formulas distribute funds equitably to states and eligible metropolitan areas (EMA); (2) the factors that inhibit greater funding equity; and (3) formula changes that are needed to improve funding equity. GAO found that: (1) although Ryan White Care Act funding formulas include factors used in equity-based formulas, they result in per-case funding discrepancies because EMA cases are double counted; (2) states without EMAs do not benefit from double counting and receive significantly less funding; (3) the indicators used to target funds to needy states and EMAs fail to take geographic cost differences into consideration; (4) EMA funding levels are based on the cumulative number of reported acquired immunodeficiency syndrome (AIDS) cases, resulting in the oldest EMAs receiving the most funding; (5) better cost indicators could be used to target more funds to states and EMAs where resources are most needed; and (6) funding equity could be improved by eliminating the inappropriate double counting of AIDS cases and by using more appropriate measures of EMA and state funding needs.
At first glance, it might seem premature to discuss preparations for the decennial census; after all, Census Day, April 1, 2020, is still almost 8 years away. However, our reviews of the 1990, 2000, and 2010 Censuses have shown that early planning, the use of leading management practices, and strong congressional oversight can help reduce the costs and risks of the national headcount. Indeed, the characteristics of the decennial census—long-term, large-scale, complex, high-risk, and politically sensitive—together make a cost-effective enumeration of the nation's population and housing a monumental project-planning and management challenge. Despite the complexity, cost, and importance of the census, however, recent enumerations were not planned well. Indeed, shortcomings with managing and planning the 2000 and 2010 enumerations led to acquisition problems, cost overruns, and other issues, and, as a result, we placed both enumerations on our list of high-risk programs. For example, leading up to the 2010 Census, we found that additional costs and risks associated with the data capture technologies used in the 2010 Census were related to a failure to adequately link specifications for key information technology systems to requirements (see GAO, Information Technology: Census Bureau Needs to Improve Its Risk Management of Decennial Systems, GAO-08-79, Washington, D.C.: Oct. 5, 2007). Additionally, the lack of skilled cost estimators for the 2010 Census led to unreliable life-cycle cost estimates, and some key operations were not tested under census-like conditions. Further, weaknesses in the Bureau's organizational structure made managing the decennial program difficult and hampered accountability, succession planning, and staff development. Since then, we and other organizations—including the Bureau itself—have stated that fundamental changes to the design, implementation, and management of the census must be made in order to address these operational and organizational challenges. For its part, the Bureau has stated that to contain costs and maintain quality, bold innovations in both planning and design of the 2020 Census will be required, and has launched a number of change initiatives. Some of these efforts are directed at transforming the Bureau's organization, while others focus on reexamining the fundamental approach to the 2020 Census. Although bold reform plans are critical steps in the right direction, the Bureau's past experience has shown that the more difficult challenge will be sustaining those efforts throughout the course of the decade. Indeed, preparations for both the 2000 and 2010 Censuses started with ambitious plans that gave reason for optimism that major improvements were on the way. However, in the subsequent ramp-up to those enumerations, the Bureau had difficulty identifying and implementing promising innovations, progress on reforms slowed, and as Census Day drew closer, the success of those headcounts became an open question. In our April 2011 testimony, we noted that based on the results of prior enumerations, simply refining current methods—some of which have been in place for decades—will not bring about the reforms needed to control costs while maintaining accuracy given ongoing and newly emerging societal trends such as concerns over personal privacy and an increasingly diverse population. Consequently, the Bureau will need to reconsider the nation's approach to the census, including rethinking such activities as how it plans, tests, implements, monitors, and evaluates enumeration activities.
The Bureau concurred, and its 2020 Census business plan states that the Bureau needs substantial innovation to achieve its cost and quality targets and to meet its strategic goals. As one example, with respect to its research and testing efforts, the Bureau plans to use the American Community Survey—an ongoing Bureau survey of population and housing characteristics that is administered monthly throughout the decade—as a vehicle to test certain decennial census processes and information technology (IT) systems. According to the Bureau, this approach will enable it to conduct many small tests throughout the decade in a production environment instead of relying on a small number of large, expensive tests as was the case in past decennial planning cycles. According to the Bureau, refining systems in the American Community Survey reduces the risk of building one-use systems for the decennial that need to operate flawlessly the first time they are put into production. With respect to implementing the census, among other activities, the Bureau is researching potential electronic methods of promoting the census and collecting data, including the Internet (via social networking sites, e-mail, and text messages) as well as automated phone systems. For the 2010 Census, the Bureau initially investigated the use of an Internet response option but dropped those plans based on concerns over information technology security and after completing a cost-benefit analysis that led the Bureau to conclude that Internet data collection would not significantly improve the overall response rate or reduce field data collection. The Bureau is also researching how it can use administrative records to reduce the cost of certain decennial activities. Administrative records from government agencies, including driver's license and school records, can be used to identify persons associated with a particular household address. Administrative records could save the Bureau money because they could help reduce the need for certain costly and labor-intensive door-to-door visits by Bureau employees to collect data in person from non-respondents. During the 2010 Census, the Bureau made only limited use of administrative records. Expanding their use to supplement respondent data on a national level will present a certain degree of risk, and issues concerning data quality and access to records must first be resolved.
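To make the administrative-records idea concrete, the sketch below shows, with entirely hypothetical records and a deliberately simple match rule, how records keyed to an address might substitute for an in-person followup visit. Any operational system would require far more rigorous person matching, deduplication, and quality checks than this illustration suggests.

```python
# Illustrative only: use administrative records keyed to an address to
# decide which nonresponding addresses might be enumerated from records
# rather than by an enumerator visit. Records and rule are hypothetical.
admin_records = [
    {"address_id": "A100", "person": "J. Doe", "source": "driver_license"},
    {"address_id": "A100", "person": "M. Doe", "source": "school"},
    {"address_id": "A200", "person": "R. Roe", "source": "driver_license"},
]
nonresponding = {"A100", "A300"}

people_at = {}
for rec in admin_records:
    people_at.setdefault(rec["address_id"], []).append(rec["person"])

for addr in sorted(nonresponding):
    people = people_at.get(addr)
    if people:
        print(f"{addr}: candidate for records-based enumeration "
              f"({', '.join(people)})")
    else:
        print(f"{addr}: no usable records; send an enumerator")
```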
That said, based on our knowledge of past and present census operations and review of available literature on leadership—particularly of federal agencies—we identified the following characteristics of a successful leader:

Strategic leader. As the head of the Census Bureau, the Director is responsible for, among other activities, (1) leading change and (2) leading people. In leading change, the Director will be expected to build a shared vision or long-term view for the organization among its stakeholders, as well as be a catalyst for developing and implementing the Bureau's mission statement and strategic goals, and be cognizant of the forces affecting the Bureau. Moreover, in addition to the decennial census, the Bureau is also responsible for a number of other vital national data gathering and statistical programs such as the American Community Survey. As a result, it will be important for the Director to ensure the Bureau's information products continue to meet the current and emerging needs of its numerous and diverse customers, including Congress, state, local, and federal government organizations, and a wide array of other public and private organizations. In leading people, the Director should ensure that human resource strategies, including recruitment, retention, training, incentive, and accountability initiatives, are designed and implemented in a manner that supports the achievement of the organization's mission and goals and addresses any mission-critical skill gaps. In particular, it will be important for the Director to motivate headquarters, field, and temporary staff to ensure they function as an integrated team rather than a stovepiped bureaucracy.

Technical professional. It is logical to expect that the Director would have at least a general background in statistics or a related field. Although no one person will have the full range of knowledge needed to answer the many methodological and technical questions that the Director may face, it is important that he or she have sufficient technical knowledge to direct the Bureau's statistical activities. In addition, the Director should manage for results by developing and using performance measures to assess and improve the Bureau's operations.

Administrator. Like other agency heads, the Director is responsible for acquiring and using the human, financial, and information technology resources needed to achieve the Bureau's goals and mission. The Director should, for example, be capable of setting priorities based on funding levels. Further, because the Bureau's product is information, the Director should ensure that the Bureau leverages technology, such as the Internet, to improve the collection, processing, and dissemination of census information.

Collaborator. It will be important for the Director to continually expand and develop working relationships and partnerships with those in governmental, political, and professional circles to obtain their input, support, and participation in the Bureau's activities. For example, it will be important for the Director to continue working with local government officials to have them play a more active role in taking the census. We previously found that leveraging such data as local response rates and census socio-demographic information, as well as other data sources and empirical evidence, might help control costs and improve accuracy by providing information on ways the Bureau could more efficiently allocate its resources.
For example, some neighborhoods might require a greater level of effort to achieve acceptable results, while in other areas those same results might be accomplished with fewer resources. The 2010 Census had several census-taking activities tailored to specific population groups. As one example, the Bureau budgeted around $297 million for paid media to raise awareness and encourage public participation in the census. To determine where paid media efforts might have the greatest impact, the Bureau developed predictive models based on 2000 Census data and other sources. Other efforts included mailing a bilingual English/Spanish questionnaire in some areas and sending a second "replacement" census questionnaire to about 53 million households in areas with historically lower response rates. Preliminary Bureau evaluations suggest that some of these targeted efforts contributed to an increased awareness of the census and were associated with higher questionnaire mail-back response rates. For the 2020 Census, the Bureau is considering expanding its targeting efforts to activities such as address canvassing, an operation where Bureau employees go door-to-door across the country verifying street addresses and identifying possible additions or deletions to its address list. This operation is important for building an accurate address list. In the 2010 Census, address canvassing was conducted at the vast majority of housing units. For the 2020 Census, the Bureau believes it might be able to generate cost savings by using existing address records for those neighborhoods that have been stable and canvassing only those areas where significant changes have occurred. We previously found that studying the value added of a particular operation, such as the extent to which it reduced costs and/or enhanced data quality, could help the Bureau make more cost-effective use of its resources. As one example, in addition to address canvassing, the Bureau has several other operations to help it build a complete and accurate address list. This is to help ensure that housing units missed in one operation get included in a subsequent operation. However, the extent to which each individual operation contributes to the overall accuracy of the address list is uncertain. This in turn makes it difficult for the Bureau to fully assess the extent to which potential reforms such as targeted address canvassing or other operations might affect the quality of the address list. Indeed, the Bureau's formal program of assessing and evaluating various 2010 Census operations and activities, through which it expects to have completed over 100 studies by early 2013, has only a few studies designed to produce information describing the return on investment. Designing future studies to better isolate the return on investment would help the Bureau further tailor its operations to specific population groups and locations and potentially generate substantial cost savings. A key priority for the Bureau will be to continue to address those shortcomings that led us to designate the 2010 Census a high-risk area in 2008, including strengthening its ability to develop reliable life-cycle cost estimates and following key practices important for managing information technology (IT) so that they do not recur in 2020. In February 2011, we removed the high-risk designation from the 2010 Census because of the Bureau's progress and strong commitment to and top leadership support for addressing problems, among other actions.
The Bureau has made progress in these areas. However, additional efforts are needed.

Processes for developing a life-cycle cost estimate. In our January 2012 report, we found that the Bureau had not yet established policies, procedures, or guidance for developing the 2020 Census life-cycle cost estimate and is at risk of not following related best practices. A reliable cost estimating process, according to our Cost Estimating and Assessment Guide, is necessary to ensure that cost estimates are comprehensive, well documented, accurate, and credible. The Bureau intends to use our cost guide as it develops cost estimates for 2020 and to follow best practices wherever practicable; however, as we reported, the Bureau has not yet documented how it plans to conduct its cost estimates and could not provide a specific time when such documentation would be finalized. Developing this necessary guidance will help ensure the Bureau has a reliable life-cycle cost estimate, which in turn will help ensure that Congress, the administration, and the Bureau itself can have reliable information on which to base decisions.

IT management issues. As the Bureau prepares for 2020, it will be important for it to continue to improve its ability to manage its IT investments. Leading up to the 2010 Census, we made numerous recommendations to the Bureau to improve its IT management procedures by implementing best practices in risk management, requirements development, and testing. The Bureau implemented many of our recommendations, but not our broader recommendation to institutionalize these practices at the organizational level. The challenges experienced by the Bureau in acquiring and developing IT systems during the 2010 Census further demonstrate the importance of establishing and enforcing a rigorous IT systems development and management policy Bureau-wide. In addition, it will be important for the Bureau to improve its ability to consistently perform key IT management practices, such as IT investment management, system development and management, and enterprise architecture management. The effective use of these practices can better ensure that future IT investments will be pursued in a way that optimizes mission performance. We have ongoing reviews of the Bureau's early 2020 Census planning for its IT investment management, as well as its information security program, which we expect to report out in the months ahead.

As we noted in our May 2012 report, the Bureau's early planning and preparation efforts for the 2020 Census are consistent with most leading practices in each of three management areas we reviewed—organizational transformation, long-term planning, and strategic workforce planning. For example, the Bureau is in the middle of a major organizational transformation of its decennial operations, and consistent with our leading practices, top Bureau leadership has been driving the transformation through such activities as issuing a strategic plan for the 2020 Census, incorporating annual updates of its business plan, and chartering an organizational change management council composed of Bureau-wide executives and senior managers. The Bureau also has focused on a key set of principles as it begins to roll out the transformation strategy to staff, and has created a timeline to build momentum and show progress.
Although the decennial directorate is progressing with its organizational transformation, the person leading this effort—the Bureau's organizational change manager—is responsible for a number of tasks, including transformation planning and implementation, and leading two working groups. At this point in the process, the amount of change-related activity the Bureau is considering may exceed the resources the Bureau has allocated to plan, coordinate, and carry it out. As a result, the planned transformation efforts could be difficult to sustain. We also noted in May 2012 that the Bureau is taking steps consistent with many of the leading practices for long-term project planning, such as issuing a series of planning memorandums in 2009 and 2010 that laid out a high-level framework documenting goals, assumptions, and timing of the remaining four phases of the 2020 Census. The Bureau also created a high-level schedule of program management activities for the remaining phases, documented key elements such as the Bureau's decennial mission, vision, and guiding principles, and produced a business plan to support budget requests, which is being updated annually. These are important steps forward that, if continued, could help the Bureau's planning stay on track for 2020. However, the Bureau's schedule does not include milestones or deadlines for key decisions needed to support transition between the planning phases, which could result in later downstream planning activity not being based on evidence from such sources as early research and testing. Also in the area of long-term planning, to help incorporate lessons learned, in 2011 the Bureau created a recommendation follow-up process built around a database it created containing various oversight and internal Bureau recommendations. Not having a formal process for recommendation follow-up for prior censuses made it difficult to ensure that recommendations were considered by those at the Bureau best able to act on them. The Bureau has provided these recommendations to relevant Bureau research and testing teams and is beginning to take steps to hold the teams accountable for reporting on how they are considering them. The Bureau is also taking steps consistent with leading practices for strategic workforce planning, including identifying current and future critical occupations with a pilot assessment of the competencies of selected information technology 2020 Census positions. However, the Bureau has done little yet either to identify the goals that should guide workforce planning or to determine how to monitor, report, and evaluate its progress toward achieving them, which could help the Bureau identify and avoid possible barriers to implementing its workforce plans. While the Bureau's efforts are largely consistent with leading practices in each of these areas, in our May 2012 report, we noted that additional steps could be taken going forward to build on these early planning efforts. Specifically, we recommended that the Director take a number of actions to make 2020 Census planning more consistent with key practices in the three management areas, such as examining planned transformation activity to ensure its alignment with resources, developing a more detailed long-term schedule to smooth the transition to later planning phases, and setting workforce planning goals and monitoring them to ensure their attainment. The Department of Commerce concurred with our findings and recommendations and has taken steps to address our recommendations.
For example, to support its organizational transformation activities, the Bureau has added staff and contractor support. The Bureau is moving forward along a number of fronts to secure a more cost-effective 2020 enumeration. Many components are already in place, a number of assessment and planning activities are underway, and the Bureau has been responsive to our past recommendations. Further, the Bureau is generally applying key leading practices in the areas of organizational transformation, long-term project planning, and strategic workforce planning, although additional efforts are needed in the months ahead. In short, the Bureau continues to make noteworthy progress in reexamining both the fundamental design of the census and its own management and culture. While this news is encouraging, it is still early in the decade, and the Bureau's experience in planning earlier enumerations has shown how ambitious preparations at the start of the census life-cycle can derail as Census Day draws near. Thus, as the Bureau's 2020 planning and reform efforts gather momentum, the effectiveness of those efforts will be determined in large measure by the extent to which they enhance the Bureau's ability to control costs, ensure quality, and adapt to future technological and societal changes. Likewise, it will be important for Congress to hold the Bureau accountable for results, weighing in on key design decisions, providing the Bureau with resources the Congress believes are appropriate to support that design, and ensuring that the progress made to date stays on track. The Bureau's initial preparations for 2020 are making progress. Nonetheless, continuing congressional oversight remains vital. Chairman Carper, Ranking Member Brown, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you have any questions on matters discussed in this statement, please contact Robert Goldenkoff at (202) 512-2757 or by e-mail at goldenkoffr@gao.gov. Other key contributors to this testimony include Richard Hung, Ty Mitchell, Lisa Pearson, Mark Ryan, and Timothy Wexler. 2020 Census: Additional Steps Are Needed to Build on Early Planning. GAO-12-626. Washington, D.C.: May 17, 2012. Decennial Census: Additional Actions Could Improve the Census Bureau's Ability to Control Costs for the 2020 Census. GAO-12-80. Washington, D.C.: January 24, 2012. 2010 Census: Preliminary Lessons Learned Highlight the Need for Fundamental Reforms. GAO-11-496T. Washington, D.C.: April 6, 2011. 2010 Census: Data Collection Operations Were Generally Completed as Planned, but Long-standing Challenges Suggest Need for Fundamental Reforms. GAO-11-193. Washington, D.C.: December 14, 2010. 2010 Census: Follow-up Should Reduce Coverage Errors, but Effects on Demographic Groups Need to Be Determined. GAO-11-154. Washington, D.C.: December 14, 2010. 2010 Census: Key Efforts to Include Hard-to-Count Populations Went Generally as Planned; Improvements Could Make the Efforts More Effective for Next Census. GAO-11-45. Washington, D.C.: December 14, 2010. 2010 Census: Plans for Census Coverage Measurement Are on Track, but Additional Steps Will Improve Its Usefulness. GAO-10-324. Washington, D.C.: April 23, 2010. 2010 Census: Data Collection Is Under Way, but Reliability of Key Information Technology Systems Remains a Risk. GAO-10-567T. Washington, D.C.: March 25, 2010.
2010 Census: Key Enumeration Activities Are Moving Forward, but Information Technology Systems Remain a Concern. GAO-10-430T. Washington, D.C.: February 23, 2010. 2010 Census: Census Bureau Continues to Make Progress in Mitigating Risks to a Successful Enumeration, but Still Faces Various Challenges. GAO-10-132T. Washington, D.C.: October 7, 2009. 2010 Census: Census Bureau Should Take Action to Improve the Credibility and Accuracy of Its Cost Estimate for the Decennial Census. GAO-08-554. Washington, D.C.: June 16, 2008. Information Technology: Significant Problems of Critical Automation Program Contribute to Risks Facing 2010 Census. GAO-08-550T. Washington, D.C.: March 5, 2008. Information Technology: Census Bureau Needs to Improve Its Risk Management of Decennial Systems. GAO-08-259T. Washington, D.C.: December 11, 2007. 2010 Census: Census Bureau Has Improved the Local Update of Census Addresses Program, but Challenges Remain. GAO-07-736. Washington, D.C.: June 14, 2007. Information Technology Management: Census Bureau Has Implemented Many Key Practices, but Additional Actions Are Needed. GAO-05-661. Washington, D.C.: June 16, 2005. 21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 1, 2005. Information Technology Investment Management: A Framework for Assessing and Improving Process Maturity. GAO-04-394G. Washington, D.C.: March 1, 2004. Comptroller General's Forum, High-Performing Organizations: Metrics, Means, and Mechanisms for Achieving High Performance in the 21st Century Public Management Environment. GAO-04-343SP. Washington, D.C.: February 13, 2004. 2010 Census: Cost and Design Issues Need to Be Addressed Soon. GAO-04-37. Washington, D.C.: January 15, 2004. Human Capital: Key Principles for Effective Strategic Workforce Planning. GAO-04-39. Washington, D.C.: December 11, 2003. 2000 Census: Lessons Learned for Planning a More Cost-Effective 2010 Census. GAO-03-40. Washington, D.C.: October 31, 2002. Obtaining an accurate census in the face of societal trends such as increased privacy concerns and a more diverse population has greatly increased the cost of the census. At $13 billion, 2010 was the costliest census in U.S. history. Without changes, future enumerations could be fiscally unsustainable. GAO's past work noted that early planning, leading management practices, and strong congressional oversight can help reduce the costs and risks of the enumeration. GAO also identified four key lessons learned from 2010 that could help secure a more cost-effective 2020 Census. The Bureau agreed and is taking steps to address them. As requested, this testimony focuses on the Bureau's progress on these lessons learned and what remains to be done going forward. It is based on GAO's completed work, including an analysis of Bureau documents, interviews with Bureau officials, and field observations of census operations in urban and rural locations across the country. Overall, the U.S.
Census Bureau's (Bureau) planning efforts for 2020 are off to a good start, as the Bureau made noteworthy progress within each of the four lessons learned from the 2010 Census. Still, additional steps will be needed within each of the lessons learned in order to sustain those reforms.

1. Reexamine the nation's approach to taking the census. The Bureau has used a similar approach to count most of the population for decades. However, the approach has not kept pace with changes to society. Moving forward, the Bureau has begun to rethink its approach to planning, testing, implementing, and monitoring the census. For example, the Bureau is researching how it can use administrative records, such as data from other government agencies, to locate and count people, including nonrespondents. Use of administrative records could help reduce the cost of field operations, but data quality and access issues must first be resolved.

2. Assess and refine existing operations, focusing on tailoring them to specific locations and population groups. The 2010 Census had several operations tailored to specific population groups or locales. For example, the Bureau mailed bilingual English/Spanish forms to some areas and sent a second questionnaire to areas with historically lower response rates. Preliminary evaluations show these targeted efforts contributed to an increased awareness of the census and higher mail-back response rates. For 2020, the Bureau is considering expanding these efforts. Designing future studies to better isolate the return on investment of key census operations would help the Bureau further target its operations to specific population groups and locations and potentially gain significant cost savings.

3. Institutionalize efforts to address high-risk areas. Focus areas for the Bureau include improving its ability to manage information technology (IT) investments and develop reliable cost estimates. In January 2012, GAO reported that the Bureau did not have policies and procedures for developing the 2020 Census cost estimate. In moving forward, it will be important for the Bureau to improve its IT acquisition management policies and develop better guidance to produce more reliable cost estimates.

4. Ensure that the Bureau's management, culture, and business practices align with a cost-effective enumeration. In May 2012, GAO reported that the Bureau's early planning efforts for the 2020 Census were consistent with most leading practices for organizational transformation, long-term planning, and strategic workforce planning. Nevertheless, GAO found that additional steps could be taken to build on these early efforts. For example, the Bureau's schedule does not include milestones for key decisions to support the transition between planning phases. These milestones are important and could help with later downstream planning.

GAO is not making new recommendations in this testimony, but past reports recommended that the Bureau strengthen its testing of key IT systems, develop policies and procedures for its cost estimates, and take actions to make 2020 Census planning more consistent with leading management practices. The Bureau generally agreed with GAO's findings and recommendations and is taking steps to implement them.
Processing: IRS processes millions of paper and electronically filed (e-filed) tax returns and validates key pieces of information during the tax filing season. The overwhelming majority of returns are e-filed through IRS's Modernized e-File (MeF) system. Beginning last filing season, IRS and taxpayers benefitted from IRS's switch from weekly to daily tax return processing on its Individual Master File (IMF) legacy system, which allowed for faster refund processing for more taxpayers. IRS is continuing to transition from its antiquated IMF legacy system to a more modern return processing system known as the Customer Account Data Engine 2 (CADE 2). Telephone: Taxpayers can call to speak directly with an IRS customer service representative (CSR) to obtain information about their accounts or ask tax law questions. Taxpayers can also listen to recorded tax information using automated telephone menus. Automated services are provided on IRS's 149 Tele-tax lines for tax law topics and 71 phone lines for account information. CSRs are also responsible for responding to paper correspondence from taxpayers. IRS tries to respond to paper correspondence within 45 days of receipt, and considers correspondence that is not addressed within that time to be overage. Minimizing the amount of overage correspondence is important because delayed responses may prompt taxpayers to write again or call IRS. Website: On IRS.gov, taxpayers can download forms, instructions, and publications and research tax law issues using interactive tools. Taxpayers can use interactive tools to check the status of their refunds, request transcripts (which are copies of a taxpayer's account information), and apply for installment agreements (IA). Face-to-Face Assistance: Taxpayers can obtain face-to-face assistance at IRS's 390 Taxpayer Assistance Centers (TAC) or at more than 13,000 sites staffed by volunteer partners. At TACs, IRS staff provide answers to basic tax law questions, review and adjust taxpayer accounts, take payments, authenticate Individual Taxpayer Identification Number applicants and identity theft victims, and prepare returns for qualified taxpayers. At the sites staffed by volunteers, taxpayers can receive return preparation assistance as well as financial literacy information. Taxpayers can enter into IAs to pay their tax debts after filing their return with a balance due. IAs are an important tool for IRS to collect revenue. IRS assesses and collects billions of dollars each year through IAs. IAs can be established, paid off, defaulted on, reinstated, or terminated at any time during the year. IRS provides four types of IAs—Guaranteed, Streamlined, Regular/Routine, and Partial Payment—with different eligibility and payment requirements. IAs allow taxpayers to pay off their tax liabilities gradually over time and can encompass multiple years. Failure to adhere to the terms of the IA can cause default, termination, and more expensive collection actions. The most common type of IA by far is the streamlined agreement, which provides taxpayers with flexibility in paying off liabilities (see the sketch below). Appendix I provides additional information on the types and characteristics of IAs. IRS defines a non-filer as a taxpayer who has a legal obligation to file a return but fails to file a return by the filing deadline (either April or October, depending on whether the taxpayer filed for an extension). IRS is authorized to issue a notice requesting a delinquent return and can send up to four notices to taxpayers.
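As a rough illustration of how a streamlined IA spreads a liability over time, consider the following sketch. The 72-month maximum term and the example balance are assumptions made for illustration, and the sketch ignores the penalties and interest that continue to accrue on an unpaid balance; actual eligibility rules and terms are set by IRS.

```python
# Hypothetical streamlined installment agreement schedule. The
# 72-month term is an assumed maximum; real IAs also accrue
# penalties and interest, which this sketch omits.
def monthly_payment(balance_due: float, months: int = 72) -> float:
    """Level monthly payment that retires the balance over the term."""
    return balance_due / months

balance = 9_000.00
print(f"Balance ${balance:,.0f} over 72 months: "
      f"${monthly_payment(balance):,.2f} per month")  # $125.00 per month
```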
Despite late tax law changes, which delayed the start of the filing season and compressed the time that IRS had available to process tax returns, IRS officials and external stakeholders such as large tax preparation firms reported relatively smooth processing, with a few exceptions. IRS usually begins processing tax returns in early to mid-January; this year, it started processing most returns on January 30, 2013. Table 1 shows that IRS achieved an 84 percent e-file rate for individual returns and processed 11 percent fewer paper returns compared to last year. Continued increases in e-filing are important because processing costs are lower for e-filed returns. According to IRS, in fiscal year 2012, it cost 23 cents to process an e-filed return, as opposed to $3.36 for returns filed on paper. IRS relied solely on MeF this filing season and attributed this success to expanded testing of the system, which improved MeF's stability and facilitated return processing. External stakeholders confirmed IRS's assertion about the reason for MeF's success. Although operations were relatively smooth, IRS and others reported issues that delayed processing for tax returns filed with Form 8863 (Education Credits) and with Form 5405 (First-Time Homebuyer Credit). While IRS was able to resolve processing delays for about 700,000 tax returns filed with the Form 8863, multiple class action suits have been filed against a large tax preparation firm regarding the issues involved in the processing delays. IRS also reported delays for about 88,000 returns filed with Form 5405, which were caused by a compliance filter that resulted in additional scrutiny. Table 2 shows that, in 2013, IRS received 93.5 million calls—a 5 percent decrease from 2012 but higher than 2009 through 2011. Compared to 2012, IRS answered almost 9 percent fewer calls using automated services. Officials attributed some of the reduction in automated calls answered to fewer e-file personal identification number requests. Answering as many calls as possible through automation is important because IRS estimates that it costs 38 cents per call to provide an automated answer, but about $33 per call to use a live assistor. Table 2 also shows that callers experienced a shorter average wait time to speak to an IRS assistor this year compared to the same period last year, but the time was still much longer than in 2008 through 2011. The percentage of callers seeking live help who received it stayed the same as 2012, at 68 percent. In 2010, we found that IRS sets its annual goal based on factors such as resource availability, the expected number and complexity of calls, and the anticipated volume of taxpayer correspondence, but not on an analysis of what taxpayers would consider to be good service. At that time, we recommended that IRS determine a telephone standard based on the quality of service provided by comparable organizations, what matters most to the customer, and the resources required to achieve this standard, based on input from Congress and other stakeholders. IRS disagreed, saying its current process of developing a planned level of telephone service takes into consideration many factors, including its budget and assumptions about call demand. We noted, however, that such a standard would allow IRS to communicate to Congress what it believes constitutes good service. Further, since 2010, the IRS Oversight Board has said that an acceptable level of service (LOS) should be about 80 percent. However, IRS has yet to set such a standard.
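The unit costs above explain why channel mix drives IRS's cost of service. The following minimal sketch plugs those reported unit costs into a simple cost model; the return and call volumes and the rates swept over are hypothetical placeholders, not IRS workload data.

```python
# Illustrative arithmetic only: the unit costs are the fiscal year 2012
# figures cited above; the volumes below are hypothetical placeholders.

E_FILE_COST = 0.23           # cost to process one e-filed return (dollars)
PAPER_COST = 3.36            # cost to process one paper return (dollars)
AUTOMATED_CALL_COST = 0.38   # cost per call answered by automation (dollars)
LIVE_CALL_COST = 33.00       # cost per call answered by a live assistor (dollars)

def blended_return_cost(total_returns: int, e_file_rate: float) -> float:
    """Average processing cost per return at a given e-file rate."""
    e_filed = total_returns * e_file_rate
    paper = total_returns - e_filed
    return (e_filed * E_FILE_COST + paper * PAPER_COST) / total_returns

def call_cost(total_calls: int, automation_rate: float) -> float:
    """Total cost of answering calls at a given automation rate."""
    automated = total_calls * automation_rate
    live = total_calls - automated
    return automated * AUTOMATED_CALL_COST + live * LIVE_CALL_COST

if __name__ == "__main__":
    # Hypothetical workload: 140 million returns, 90 million calls.
    for rate in (0.80, 0.84, 0.90):
        print(f"e-file rate {rate:.0%}: avg cost per return "
              f"${blended_return_cost(140_000_000, rate):.2f}")
    for rate in (0.50, 0.60, 0.70):
        print(f"automation rate {rate:.0%}: total call cost "
              f"${call_cost(90_000_000, rate) / 1e6:,.0f} million")
```

Even small shifts in either rate move costs by tens of millions of dollars, which is the reasoning behind IRS's emphasis on e-filing and automated answers.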
As shown in Table 3, the amount of correspondence received between 2009 and 2013 increased from 19 million to 21 million (a 10.5 percent increase), and the percentage of overage correspondence nearly doubled, to 47 percent in 2013 from 25 percent in 2009. As noted earlier, IRS generally considers paper correspondence that is not resolved within 45 days to be overage. In December 2010, we concluded that providing timely responses to paper correspondence remains critical to taxpayer service because, if IRS's responses take too long, taxpayers may write again or call IRS for additional assistance. We recommended that IRS establish a performance measure that includes providing timely correspondence service to taxpayers. IRS agreed to this recommendation and is beginning to take steps to implement it. Since we made that recommendation, the percentage of overage correspondence has continued to increase. IRS officials attribute the increase in the percentage of overage correspondence to budget constraints (less overtime for CSRs, who provide both telephone assistance and work paper correspondence) and to more complex taxpayer inquiries, such as correspondence related to identity theft cases, which can be more time consuming to address. IRS has taken some steps to identify why taxpayers write in. Based on a recent small, judgmental sample of correspondence cases, IRS found that the top three most common reasons taxpayers write in are balance due payoffs, penalty abatements, and miscellaneous account inquiries. In that same sample, IRS found that its own processes, such as the wording of notices or requirements for a paper signature, influenced taxpayers to write in. IRS officials told us they are currently analyzing a statistically valid sample of correspondence to identify additional factors that influence the level of correspondence. Use of IRS.gov continues to increase, with IRS receiving approximately 374 million visits to its website through July 2013, an increase of nearly 26 percent over the same period in 2012. IRS officials attribute this increase to the launch of the redesigned website, the introduction of new online tools such as Where's My Amended Return, and implementation of the Patient Protection and Affordable Care Act. We previously recommended that IRS develop a long-term strategy to improve web services provided to taxpayers that includes studies of leading practices at a strategic level, measurable goals for taxpayer satisfaction, business cases for new online services that describe potential benefits and costs and prioritized projects, and links to investments in security. IRS reported that it is planning to update its long-term web services strategy to include our recommended changes in early 2014. See Appendix III for additional information on website use from 2008 through 2013. IRS received about 2.6 million visits to its TACs, a decline of approximately 5 percent from 2012. Additionally, the number of returns prepared continues to decline—in 2013, IRS prepared nearly 125,000 returns at TACs, about a 16 percent decline from 2012. IRS attributes the decline to its efforts to manage demand and to increased taxpayer awareness of online tools and services. In contrast, the number of returns prepared at the roughly 13,000 volunteer sites increased 5 percent between 2012 and 2013, to nearly 3.3 million in 2013. See Appendix IV for additional information on services and taxpayer use of TACs and volunteer sites since 2010.
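The 45-day overage rule described above amounts to a simple classification over the correspondence inventory. A minimal sketch follows; the rule is the one stated in this report, while the cases and field names are hypothetical.

```python
# Sketch of the 45-day overage rule: correspondence not resolved within
# 45 days of receipt counts as overage. The cases below are hypothetical.

from datetime import date, timedelta
from typing import Optional

OVERAGE_THRESHOLD = timedelta(days=45)

def is_overage(received: date, resolved: Optional[date], as_of: date) -> bool:
    """A case is overage if it was not resolved within 45 days of receipt;
    open cases are measured against the as-of date."""
    end = resolved if resolved is not None else as_of
    return end - received > OVERAGE_THRESHOLD

# Hypothetical inventory: (received, resolved-or-None).
cases = [
    (date(2013, 3, 1), date(2013, 3, 30)),   # resolved in 29 days: timely
    (date(2013, 3, 1), date(2013, 5, 15)),   # resolved in 75 days: overage
    (date(2013, 4, 1), None),                # still open at the as-of date
]

as_of = date(2013, 7, 1)
overage = sum(is_overage(received, resolved, as_of) for received, resolved in cases)
print(f"Overage rate: {overage / len(cases):.0%}")
```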
In 2012, we noted that IRS needs to dramatically revise its strategy for providing telephone and correspondence services and recommended that it define appropriate levels of telephone and correspondence services based on an assessment of demand and resources, among other things. IRS neither agreed nor disagreed with our recommendation, saying it already has an objective of providing taxpayers with access to accurate services while managing demand. However, IRS's efforts to date have not reversed the declines in taxpayer service. We noted, and IRS officials acknowledged, that incremental efficiency gains of the type IRS has realized in recent years would not be enough to combat the imbalance between taxpayer demand for services and available resources. We concluded that, with expected levels of resources, reversing the declines in telephone and correspondence services may require IRS to consider difficult tradeoffs, such as limiting the types of phone calls that would be answered. Given expected budget levels for the 2014 filing season, IRS has identified six services for elimination or reduction, which officials told us were chosen because taxpayers had other options. IRS officials reported that they have discussed these options within IRS, with external stakeholders, and with the congressional committees that oversee IRS operations. IRS's proposed service eliminations or reductions are: 1. limiting tax law inquiries to answering only basic tax law questions during the filing season and reassigning CSRs to work account-related inquiries; 2. launching the "Get Transcript" tool, which will allow taxpayers to obtain a viewable and printable transcript online on www.irs.gov, and redirecting taxpayers to this and other automated tools for getting a transcript; 3. redirecting refund-related inquiries to automated services and not answering refund inquiries until 21 days after the tax return has been filed, except for refunds held for potential fraud; 4. limiting access to the Practitioner Priority Services line to only those practitioners working tax account issues; 5. limiting live assistance and redirecting requests for domestic Employer Identification Numbers to IRS's online tool; and 6. eliminating free return preparation at IRS's TAC sites and directing taxpayers to free alternatives, including IRS partner sites staffed by volunteers. The proposed eliminations or reductions in services are examples of the difficult choices that we recommended need to be made if more timely access to telephone service and handling of correspondence is to be achieved from the available IRS resources. While these cuts represent initial steps consistent with our recommendation from last year, they do not fully address it. Furthermore, even with these reductions, officials in IRS's Wage and Investment Division responsible for the filing season told us they anticipate that the level of telephone service could be 61 percent in 2014. IRS reported that it is working on final approval to implement the service options and bring about a better balance between demand for service and resources. However, the continued deterioration in taxpayer service in 2013, the high cost of shifting staff from collections work to the telephones and correspondence, and the anticipated level of telephone service for 2014 all highlight the importance of continuing to address the recommendation we made last year based on the need for a dramatic revision in IRS's strategy.
The choice may be between providing a broader range of services at a low level of performance or a narrower range of services at a higher level of performance. Until IRS develops a strategy, it risks not communicating expectations about the level of services it can provide based on the resources available. IRS could then use the strategy to facilitate a discussion with Congress and other stakeholders about the appropriate mix of service, level of performance, and resources. Taxpayers can apply for IAs online, by phone, in person, or by completing and mailing Form 9465, Installment Agreement Request. Depending on the type of IA, taxpayers can make their monthly payments via check or money order, direct debit, payroll deduction, online, or credit card. If a taxpayer fails to make monthly payments on time or incurs a new tax liability, the taxpayer is considered to be in default on the IA, with a few exceptions. Appendix V shows the process for taxpayers entering into and making payments on IAs. Table 4 shows that, in fiscal year 2012, IRS approved more than 3 million new IAs and collected $9.8 billion. At the end of fiscal year 2012, IAs represented nearly $28 billion in unpaid balances. The difference between the unpaid balance of assessment and the amount collected is due to the fact that IAs can be paid off over multiple years. According to IRS officials, the increases in IA inventory since 2009 are due to a variety of factors, such as more taxpayers entering into IAs because of the expanded eligibility and payment terms discussed below, and changes in the economy. Table 4 also shows that, in fiscal year 2012, taxpayers defaulted on approximately 1.2 million IAs. In 2012, we found that, of those taxpayers that had a balance due at the filing deadline, almost two-thirds eventually paid in full or entered into IAs, and about 18 percent of taxpayers defaulted on IAs in fiscal year 2012. Therefore, we recommended that IRS pilot risk-based approaches for contacting taxpayers who have a balance, with the goal of reducing the default rate. IRS agreed with the recommendation but has not funded the related research project. Until IRS tests and implements more advanced risk-based approaches, it may be challenged to deal with the default problem. In 2012, IRS expanded its Fresh Start Initiative to assist struggling taxpayers in meeting their tax obligations. Specifically, the threshold for requesting a streamlined IA was raised from $25,000 to $50,000, and the maximum repayment term for streamlined IAs was increased from 60 to 72 months. The expanded eligibility for streamlined IAs in particular allowed more people to qualify for the program and potentially pay taxes owed. Also under Fresh Start, IRS is encouraging taxpayers with IAs to sign up for direct debit agreements, which generally have lower default rates. IRS said it is difficult to pinpoint which specific aspects of the Fresh Start Initiative are effective. IRS is currently conducting an analysis of the initiative's benefits in terms of collection and default prevention; as of November 2013, that analysis had not been completed. IRS allocated about 1,800 full-time equivalents (FTE) to the IA program in fiscal year 2012, which is over a 10 percent increase since fiscal year 2009. The level of and growth in this area highlight the importance of testing and implementing risk-based approaches for collecting balances due, including through IAs. IRS recently made process improvements to help streamline and standardize its IA program operations.
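The expanded streamlined criteria under Fresh Start lend themselves to a short illustration. The sketch below is a minimal example, not IRS's actual eligibility logic: the $50,000 balance threshold and 72-month maximum term come from the figures above, while the eligibility check and monthly-payment arithmetic (which ignores penalties and interest) are simplifying assumptions.

```python
# Illustrative check of the expanded streamlined IA criteria: under Fresh
# Start, the balance threshold rose from $25,000 to $50,000 and the maximum
# term from 60 to 72 months. This is a sketch, not IRS's eligibility logic.

STREAMLINED_BALANCE_LIMIT = 50_000   # dollars, post-Fresh Start
STREAMLINED_MAX_TERM = 72            # months, post-Fresh Start

def streamlined_eligible(balance_due: float) -> bool:
    return balance_due <= STREAMLINED_BALANCE_LIMIT

def minimum_monthly_payment(balance_due: float) -> float:
    """Smallest level payment that retires the balance within the maximum
    term. Ignores penalties and interest, which raise the real payment."""
    return balance_due / STREAMLINED_MAX_TERM

for balance in (20_000, 45_000, 60_000):
    if streamlined_eligible(balance):
        print(f"${balance:,}: eligible, "
              f"~${minimum_monthly_payment(balance):,.0f}/month")
    else:
        print(f"${balance:,}: not eligible for a streamlined IA")
```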
Among these process improvements, IRS introduced automated tools, known as the Compliance Suite, which provide tax examiners (TEs) with online aids, such as help in determining which letter to send to the taxpayer. However, despite these process improvements, we observed some inefficiency. For example, we observed that some TEs had developed their own extensive sets of prewritten, standardized case notes that allowed them to quickly update a taxpayer's account because the Compliance Suite lacked that capability. IA program managers were aware of this practice and said it gives TEs flexibility in writing case notes, but they agreed that automated case notes may yield efficiencies. They also noted that the Compliance Suite had been introduced only a few months earlier and that its full capabilities were still being explored. We agree that flexibility is desirable, but providing an extensive set of standardized notes in the Compliance Suite would give all TEs the option of using them. This could lower the cost of making account entries. In addition, we found unnecessary redundancy. We observed TEs handwriting case notes on paper copies of IAs and then typing those same notes into IRS's computer systems. IRS managers agreed this practice is redundant, and TEs noted this is the way it has always been done. Such redundant data entry increases the time it takes TEs to handle each IA case. Furthermore, GAO's internal control guidance states that control activities should be regularly evaluated to ensure that they remain appropriate and function as intended. By not developing a more standardized, comprehensive set of case notes and not reducing redundant data entry, IRS is missing opportunities to reduce the resources devoted to handling IA case files. With 1,800 FTEs devoted to the program, small gains in the efficiency of each TE could add up to substantial savings. Beginning in October, IRS uses prior year tax returns; third-party reports, such as W-2s and Forms 1099; and applications for automatic extensions of time to file to identify taxpayers who appear to have missed the mid-April tax return filing deadline. In June 2013, we found that most information reports are not received by IRS until well after the April tax return filing deadline. IRS does a second match in March of the following year for taxpayers who filed an extension. According to IRS officials, not all potential non-filer cases identified are selected for notification and review; IRS prioritizes cases based on factors such as income, the potential to collect taxes due, and whether the taxpayer is an IRS or other federal employee. After the first match in October, IRS sends notices to non-filers in November and December requesting the return or a justification for not filing. IRS begins to send notices for cases from the second match between March and July. Table 5 shows that, in tax year 2010 (the year for which the most current data are available), IRS identified over 7.4 million potential non-filer cases. Of these, IRS selected more than 3.2 million cases for review and sent notices to those non-filers requesting the return. IRS officials said the percentage of cases selected for review fluctuates based on resources and selection criteria, such as income and potential balance due amounts, which vary from year to year. Table 5 also shows that, in tax year 2008 (the year for which the most complete data are available), IRS received approximately 1.4 million delinquent returns, which is about 39 percent of the non-filers selected for review and notification.
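The matching and prioritization process just described is, at its core, a record-linkage pass followed by a scoring step. The following minimal sketch illustrates the idea; the taxpayer records, field names, and scoring weights are all hypothetical and are not drawn from IRS systems.

```python
# Sketch of the non-filer match: third-party information returns (W-2s,
# Forms 1099) are matched against filed returns, and unmatched taxpayers
# are prioritized for notices. All records and weights are hypothetical.

filed_returns = {"111-00-0001", "111-00-0002"}   # TINs with a return on file
info_returns = {                                 # TIN -> income reported by third parties
    "111-00-0001": 52_000,
    "111-00-0003": 140_000,
    "111-00-0004": 18_000,
}
federal_employees = {"111-00-0004"}

def priority_score(tin: str, income: float) -> float:
    """Higher score = reviewed sooner. Mirrors the factors IRS describes:
    income, collection potential, and federal-employee status."""
    score = income / 10_000
    if tin in federal_employees:
        score += 10   # hypothetical bump for IRS or other federal employees
    return score

# Candidates: income was reported for them, but no return was filed.
candidates = [
    (tin, income) for tin, income in info_returns.items()
    if tin not in filed_returns
]
for tin, income in sorted(candidates, key=lambda c: -priority_score(*c)):
    print(f"potential non-filer {tin}: income ${income:,}, "
          f"score {priority_score(tin, income):.1f}")
```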
IRS officials attribute the relatively low number of returns filed in response to notices to taxpayers who have stopped filing for unknown reasons, those who do not have the resources to pay the potential balance due, and taxpayers who do not file until more extensive collection actions are taken. Securing the delinquent return as soon as possible as part of IRS's notification process is important because taxpayers continue to incur penalties and interest until they file a return and because IRS otherwise undertakes increasingly expensive enforcement actions against the taxpayer. In addition to taxpayers who do not respond to notices, IRS's existing non-filer strategy notes that IRS also has a significant problem with repeat non-filers. For fiscal year 2012, IRS data show that 43 percent of closed cases were repeat non-filers who did not file for more than 1 tax year. As of November 2013, IRS was awaiting executive approval of its updated non-filer strategy, which it expected to receive by the end of the year; the updated strategy should address how IRS plans to improve non-filer compliance, including notice response and repeater rates. IRS is currently analyzing data on the characteristics of non-filers, such as filing status and income, and on their response rates to the notifications to determine the best approach, and it expects this effort to be completed in January 2014. IRS officials also are considering several initiatives for improving the notification process once the updated non-filer strategy is approved. Despite efficiency gains in processing returns, additional website services, and shifting employees from working collections cases to handling telephone calls and correspondence, the gap between taxpayers' demand for service and IRS resources widened. As a result, taxpayer access to IRS's telephone assistors remained at a low level, and the percentage of overage correspondence grew. The widening gap highlights the importance of fully implementing the recommendation we made last year for a dramatic revision in managing taxpayer service that defines an appropriate level of service and recognizes both the demand for services and the resources available. We stressed that this would mean making difficult tradeoffs. Consistent with our recommendation, IRS has proposed eliminating or reducing some services. By eliminating or reducing some services, IRS should be able to devote more resources to its continuing services. However, IRS officials told us that the cuts proposed so far may not reverse the decline in telephone and correspondence performance. As a consequence, the cuts may be only a down payment on the difficult choices needed, and our recommendation needs to be fully addressed. Fully addressing our recommendation would result in a strategy that could be used to facilitate a discussion with Congress and other stakeholders about the appropriate mix of service, level of performance, and resources. The imbalance between the demand for services and resources also puts a higher priority on scrutinizing existing processes for possible further efficiency gains. We identified opportunities in the processing of installment agreements where the existing process could be further streamlined to reduce resources by standardizing case notes and reducing unnecessarily redundant data entry. We recommend that the Commissioner of Internal Revenue develop a set of standardized account entries and eliminate unnecessary redundancy when entering installment agreement data into accounts.
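To make this recommendation concrete, the sketch below shows one way standardized account entries might work: a small library of prewritten note templates that a TE selects and fills in once, so the same note never has to be handwritten and then retyped. The template names and fields are hypothetical, not taken from the Compliance Suite.

```python
# Minimal sketch of standardized account entries: prewritten case-note
# templates that a tax examiner fills in once. Names and fields are
# hypothetical, not drawn from IRS systems.

STANDARD_NOTES = {
    "ia_approved": "IA approved: ${amount}/month, first payment due {due}.",
    "ia_default": "IA defaulted: missed payment due {due}.",
    "payoff_quote": "Balance-due payoff quoted: ${amount} as of {due}.",
}

def make_note(template: str, **fields: str) -> str:
    """Render a standardized note; one entry point means one data entry."""
    return STANDARD_NOTES[template].format(**fields)

print(make_note("ia_approved", amount="450", due="2014-01-15"))
print(make_note("payoff_quote", amount="12,300", due="2013-12-01"))
```

The design point is that a shared template library makes entries consistent across examiners while still leaving free-text notes available for unusual cases.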
We provided a draft of this report to the Acting Commissioner of Internal Revenue. IRS provided written comments on a draft of the report, which are reprinted in Appendix VI. IRS also suggested technical changes to the report, which we incorporated where appropriate. IRS did not state whether it concurred with the recommendation. However, IRS acknowledged that standardized account entries can sometimes lead to increased efficiencies and lower costs and that taxpayers and IRS can benefit from the elimination of redundancy in its processes. IRS stated that as it continues to evaluate the Compliance Suite to determine its full capabilities, it will (1) explore whether the introduction of standardized account entries into the IA process will yield increased efficiencies and lower costs and (2) evaluate whether there are unnecessary redundancies in its current processes that can be eliminated without adversely affecting tax administration. GAO believes the recommendation remains valid, as discussed in the report. We plan to send copies of this report to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We will also send copies to the Acting Commissioner of Internal Revenue, the Secretary of the Treasury, the Chairman of the IRS Oversight Board, and the Deputy Director for Management of the Office of Management and Budget. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in Appendix VII. Of the four types of IAs available to individual taxpayers, the most common by far is the streamlined agreement. Figure 1 shows that, during the 2013 filing season, the Internal Revenue Service (IRS) received most of its calls in the period leading up to and including the April 15 filing deadline, with the heaviest volume of calls in the early to mid-February timeframe. Importantly, the figure also shows that access to live assistance begins to decline sharply in the weeks following the filing deadline. Table 7 shows that in 2013 use of IRS.gov increased by 26 percent compared to 2012. Use of online tools such as the volunteer site locator, Where's My Refund, electronic personal identification number requests, and interactive tax assistance tools continued to increase in 2013. IRS attributes declines in downloading forms and publications, and in general searching of the website, to changes in how IRS tracks website use in these areas. Face-to-face service remains an important component of IRS's efforts to serve taxpayers, particularly those with low incomes or limited proficiency in English. Table 8 shows that, for 2013, the total number of contacts at walk-in sites, the Taxpayer Assistance Centers (TAC) staffed by IRS employees, is the lowest for the 4-year period shown. Further, return preparation has steadily declined since 2010 due to IRS's continuing efforts to reduce expensive return preparation services. Conversely, the total number of returns prepared at sites staffed by volunteers has increased since 2010.
In addition to the individual named above, Joanna Stamatiades, Assistant Director; Shilpa Grover; Emily Gruenwald; Lois Hanshaw; LaKeshia Allen Horner; Kirsten Lauber; Natalie Maddox; Karen O'Conor; Anna Maria Ortiz; Kelly Rubin; and John Zombro made key contributions to this report.

The tax filing season is when IRS processes most tax returns and provides services including telephone, correspondence, and website assistance for tens of millions of taxpayers. IRS budgeted more than $2 billion for these activities in 2013. The filing season is also when IRS begins collecting delinquent taxes by, for example, approving installment agreements and checking for non-filers. GAO was asked to review the 2013 tax filing season. This report (1) assesses IRS's performance in processing tax returns and providing services to taxpayers; (2) describes the installment agreement process and assesses its efficiency; and (3) describes the process for detecting and notifying non-filers. To conduct the analyses, GAO obtained and compared IRS data from 2007 through 2013, reviewed pertinent IRS documents, observed IRS operations, and interviewed IRS officials and experts in tax administration, including tax preparation firms. Despite efficiency gains from processing more tax returns electronically, adding website services, and shifting resources from enforcement, the Internal Revenue Service (IRS) was unable to keep up with demand for telephone and correspondence services. Access to IRS's telephone assistors remained at 68 percent, unchanged from 2012. The percentage of overage paper correspondence (over 45 days old) increased to 47 percent from 40 percent in 2012. In the face of similar trends, last year GAO reported that a dramatic revision in IRS's taxpayer service strategy was needed and recommended that IRS take steps to better balance demand for services with available resources. GAO acknowledged this may require IRS to consider difficult tradeoffs, such as limiting some services. In response, IRS has proposed eliminating or reducing some services for 2014, such as answering basic tax law questions only during the filing season. However, IRS officials told GAO the proposed cuts may not be sufficient to stop the deterioration in services. Until IRS develops a strategy, it risks not communicating expectations about the level of services it can provide based on the resources available. IRS could use the strategy to facilitate a discussion with Congress and other stakeholders about the appropriate mix of service, level of performance, and resources. IRS offers options for installment agreements (IAs) to taxpayers who cannot fully pay their taxes when due. Taxpayers can enter into these agreements online, by phone, and by mail. In fiscal year 2012, IRS approved about 3.2 million new agreements and collected almost $10 billion. IRS devotes about 1,800 full-time equivalent staff to the program, but it is not as efficient as it could be. GAO found opportunities to standardize account entries and reduce redundancy by eliminating dual entry of the same data on paper forms and into IRS's computers. IRS officials agreed that opportunities did exist to streamline the process. More standardized and less redundant data entry could reduce resource needs. IRS detects non-filers by matching third-party information (e.g., Forms W-2) with tax returns. The first match is done in October, well after the mid-April tax filing deadline, because of the time it takes to receive the third-party information and process it.
IRS sends notices to non-filers in November and December. GAO recommends that IRS develop standardized account entries and eliminate unnecessary redundancy in the installment agreement process. IRS did not state whether it concurred with our recommendation. GAO believes the recommendation remains valid, as discussed in this report. In addition, GAO continues to believe that the prior recommendation that IRS develop a strategy that defines appropriate levels of telephone and correspondence services, based on an assessment of demand and resources among other things, remains valid and should be addressed.
Nuclear weapons have been and continue to be an essential part of the nation’s defense strategy. The end of the cold war resulted in a dramatic shift in how the nation maintains such weapons. Instead of designing, testing, and producing new nuclear weapons, the strategy has shifted to maintaining the existing nuclear weapons stockpile indefinitely and extending the operational lives of these weapons through refurbishment, without nuclear testing. Established by Congress in 2000 as a separately organized agency within DOE, NNSA has the primary mission of providing the United States with safe, secure, and reliable nuclear weapons in the absence of underground nuclear testing and maintaining core competencies in nuclear weapons science, technology, and engineering. To support this highly technical mission, NNSA relies on capabilities in several thousand facilities located at eight nuclear security enterprise sites that support weapons activities. These sites are owned by the government but managed and operated by private M&O contractors. Each site has specific responsibilities within the nuclear security enterprise, with six of them having important production missions (see fig. 1). NNSA reimburses its M&O contractors for the allowable costs incurred in carrying out NNSA’s missions. These include costs that can be directly identified with a specific NNSA program (known as direct costs)—for example, the costs for dismantling a retired weapon—and costs of activities that indirectly support a program (known as indirect costs), such as administrative activities. To ensure that NNSA programs are appropriately charged for incurred costs, M&O contractors’ accounting systems assign the direct costs associated with each program and collect similar types of indirect costs into pools and allocate them among the programs. Consistent with Cost Accounting Standards (CAS), M&O contractors must classify their costs as either direct or indirect and, once costs are classified, must consistently charge their costs to these classifications. M&O contractors are required to disclose their cost accounting practices in formal disclosure statements, which are updated annually and approved by NNSA officials. M&O contractors’ cost accounting practices cannot be readily compared with one another because contractors’ methods for accumulating and allocating indirect costs vary—that is, a cost classified as an indirect cost at one site may be classified as a direct cost at another. To obtain more consistent information about the support costs at DOE’s major contractor-operated facilities, in the mid-1990s, DOE’s Chief Financial Officer (CFO) created 22 standard categories of “functional support costs.” These categories include, for example, executive direction, information services, procurement, maintenance, and facilities management. Each of the 22 categories is defined to cover all related costs, regardless of whether contractors classify them as direct or indirect. From fiscal years 1997 through 2010, the CFO required the department’s primary contractors to annually report these costs. To oversee the quality of these data, contractors’ financial personnel peer reviewed the data for each facility once every few years. According to the CFO, functional cost data are derived, to the extent possible, from contractors’ existing accounting systems and overlaying financial structure, but contractors do not budget, accumulate, or distribute costs in their formal accounting systems in the same manner. 
Because of this, and because numerous site-specific factors (missions, size, age, and location of facilities) influence support costs, the CFO refers to functional costs as sufficient for trending costs at a given site over time but not necessarily for comparison across sites. NNSA officials and contractors have told us in the past that the collection of historical functional support cost data has also been marked by differing definitions and interpretations of functional cost categories, as well as different data gathering methods. Because of these limitations, in 2011 DOE significantly revised its guidelines for the collection of contractor cost data. These guidelines now de-emphasize functional support cost reporting. NNSA's Office of Defense Programs is responsible for NNSA's weapons activities and oversees the sites' M&O contractors. Federal site offices are located at each NNSA site to perform day-to-day oversight of these contractors' activities. Federal site office managers serve important roles, in conjunction with the Office of Defense Programs, such as determining contract award fees and managing and accepting safety and security risks at their sites. The administration, through the legislatively mandated 2010 Nuclear Posture Review, established the nation's nuclear weapons policy and strategy. This strategy seeks to maintain a safe and reliable but smaller nuclear deterrent than in the past. More specifically, the United States has agreed to reduce the size of its strategic nuclear weapon stockpile from a maximum of 2,200 to 1,550 weapons. This stockpile is composed of seven different weapons types, including air-delivered bombs, ballistic missile warheads, and cruise missile warheads. As the stockpile is being reduced, the administration pledged additional funds to modernize and operate the nuclear security enterprise, including the refurbishment of weapons currently in the stockpile and the construction of important new production facilities to support these refurbishments. NNSA's fiscal year 2012 Stockpile Stewardship and Management Plan provides details of nuclear security enterprise modernization and operations plans over the next two decades. During this period, NNSA estimates it will have funding needs of about $180 billion. In 2010, the administration pledged over $88 billion to fund the first decade of this plan. NNSA's efforts to improve its operations and business practices predate the 2010 Nuclear Posture Review but are now an important component of NNSA's modernization efforts. In 2008, NNSA established an acquisition strategy team that examined 11 different contracting options to reduce costs and improve operations. Eight of these options involved combining various production missions under a single M&O contract. Another three options looked at combining functional areas, such as safeguards and security, construction management, and information technology, at multiple sites under a single contract. To conduct its analysis of these options, the team did the following: (1) compiled and analyzed available historical functional support cost data for six of NNSA's eight sites (see fig. 1);
(2) attempted to normalize these data to account for discrepancies and anomalies, compared the normalized data across sites in an attempt to create a "common financial language," and benchmarked this information against information from commercial nuclear industry mergers and acquisitions; (3) developed a set of major assumptions to frame the analysis; (4) compared the expected effects of the proposed consolidation with a status quo, or baseline, scenario in which no M&O consolidation occurs; and (5) developed estimates for the potential cost savings resulting from each option. The team completed its analysis in 2009 and, in March 2010, NNSA in large part adopted the acquisition strategy team's primary recommendations and announced plans to undertake a two-part acquisition strategy. According to NNSA officials, NNSA rejected the team's proposal to include the Los Alamos National Laboratory's (LANL) production mission as a future option in the consolidated M&O contract on the grounds that LANL's research and development mission was too diverse and complex to separate. The agency also decided not to pursue, for the time being, a proposal to consolidate the nonnuclear production carried out by the Kansas City Plant (KCP) and the Sandia National Laboratory (SNL). KCP expects to transition to a new facility by the end of 2012, which we reported on in October 2009. NNSA's anticipated benefits as a result of its proposal to award a single contract for the management of Y-12 and Pantex will remain uncertain until NNSA makes decisions about the details of the contract and addresses several issues raised by NNSA officials, contractors, and members of Congress. Among these benefits, NNSA anticipates that increased efficiencies at those sites could save an estimated $895 million in nominal dollars over the next 10 years. Some cost savings seem likely under a single contract, but NNSA's analysis suggests that efficiencies also could be achieved under existing contracts. In addition, NNSA's estimated cost savings are uncertain due to issues relating to the methodology NNSA used to support its estimate and the adequacy of the cost data used. NNSA officials, contractors, and members of Congress have also raised a number of concerns that a single M&O contract could disrupt work at the sites, which, if unaddressed, could ultimately affect the safety and reliability of the nuclear weapons stockpile. According to its analysis of the proposed acquisition strategy, NNSA expects that the proposed consolidation of the M&O work at its Y-12 and Pantex sites will increase efficiencies at those sites. These expected efficiencies are based on a combination of assumptions made by NNSA in its analysis of the proposal, input from private consultants with experience involving mergers and operations at commercial nuclear facilities, and discussions with staff at the sites. According to the analysis, these efficiencies are expected to result primarily from (1) more streamlined and uniform operations and (2) improved performance by the contractor. First, NNSA's analysis indicates that consolidating the contracts will streamline and make more uniform training, human resources practices, and information systems, such as payroll, budget, and finance systems, and will improve the comparability of management data at both sites.
For example, there are over 100 different information technology systems and applications at the Y-12 and Pantex sites, and NNSA concluded in its analysis that merging some of those systems would lead to improved effectiveness, data integrity, and security. In addition, NNSA's analysis concluded that a consolidation could have similar benefits in interactions with external parties, such as regulators and vendors, as those entities would need to coordinate with only one contractor instead of two or three. Second, NNSA's analysis concluded that one contractor overseeing the consolidated M&O work at these sites would improve performance because a single contractor would be able to implement best practices across its sites more easily. For example, a single M&O contract would allow the contractor to more readily share its processes and approaches to reduce costs, and efficiencies and commercial production practices could be more easily transferred among sites. NNSA's analysis estimated that these efficiency gains and other improvements could eliminate about 1,000 full-time equivalent (FTE) support service jobs over the next 10 years at the Y-12 and Pantex sites. NNSA estimated that the elimination of these FTEs could lead to savings of $895 million in nominal dollars over the same period. To calculate these estimates, NNSA first projected a baseline cost for operations over the next 10 years of $23 billion. NNSA and its private consultants then estimated the potential staff reductions that could occur from consolidating contracts at certain sites, based on a comparison of current NNSA staffing levels with those in the commercial nuclear industry. This comparison did not involve identifying specific jobs at a particular NNSA site that would be eliminated; rather, it involved estimating how certain general job functions—such as management, security, and human resources—might be adjusted to more closely align with commercial nuclear industry levels. For example, an NNSA consultant estimated that, if the contracts were consolidated, FTEs associated with the CFO and human resources would be reduced by about 45 percent, and FTEs associated with information technology and procurement would be reduced by about 30 percent. NNSA's analysis concluded that more than 1,000 of the nearly 10,000 contractor FTEs at the Y-12 and Pantex sites would no longer be needed after the consolidation and that these positions would be eliminated over a 5-year period. NNSA estimated that, over the next 10 years, these FTE reductions would reduce costs at the affected sites from $23 billion to about $22 billion, a reduction of about 4 percent of the total cost of operations at these sites and about 1 percent of NNSA's funding across all of its sites during that period. According to NNSA's analysis, although efficiencies are expected as a result of consolidating contracts, NNSA could also achieve efficiencies through its existing M&O contracts. Specifically, the analysis included 18 recommended improved management practices that would make changes to the current management of all eight of its sites and could lead to process improvements and cost savings. These recommendations included substituting commercial best practices and industrial standards for DOE directives, standardizing security force equipment, improving enterprise-wide collection and analysis of costs, and streamlining contractor pension and health benefits plans.
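As a rough check on the arithmetic behind this estimate, the figures above can be combined in a few lines of code. The sketch uses only the numbers reported in NNSA's analysis; the implied annual cost per eliminated FTE is a derived illustration, not a figure NNSA published.

```python
# Rough check of the savings arithmetic: a $23 billion 10-year baseline,
# an estimated $895 million in savings, and roughly 1,000 of about 10,000
# contractor FTEs eliminated. The per-FTE cost is derived, not NNSA's.

baseline_cost = 23_000_000_000      # 10-year baseline, nominal dollars
estimated_savings = 895_000_000     # NNSA's "most likely" estimate
ftes_cut = 1_000
ftes_total = 10_000
years = 10

savings_share = estimated_savings / baseline_cost          # ~3.9%, i.e. "about 4 percent"
fte_share = ftes_cut / ftes_total                          # 10% of the workforce
implied_cost_per_fte = estimated_savings / ftes_cut / years

print(f"Savings as share of baseline: {savings_share:.1%}")
print(f"FTEs eliminated: {fte_share:.0%} of contractor workforce")
print(f"Implied annual cost per eliminated FTE: ${implied_cost_per_fte:,.0f}")
```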
NNSA contractors we spoke with also said that many of the efficiencies expected under this strategy could be realized by NNSA under its existing contracting approach, without a contract consolidation. For example, officials representing the contractors at Pantex and Y-12 both said their companies had begun implementing some of these changes at their respective sites and had already seen efficiencies and savings. We view these actions on the part of contractors as positive and a step in the right direction toward more effectively and efficiently managing NNSA contracts. However, NNSA has not identified, in a systematic manner, how it plans to implement these 18 improved management practices at all of its sites. Without that implementation information, it is unclear whether NNSA is taking every opportunity to improve its contract management practices. In a 2010 testimony before the Senate Armed Services Committee, NNSA's Administrator stated that, while many of the details still need to be worked out, the consolidated M&O contracting strategy can save taxpayers more than $895 million over the next decade. However, the sensitivity of NNSA's key assumptions, which were formulated in 2008 and 2009; the lack of detail surrounding the contract's scope of work; and the recognized problems in comparative, historical DOE and NNSA cost data all make realizing these savings uncertain. A key step in NNSA's cost savings calculation, estimating future costs under both status quo and contract consolidation scenarios, relied on assumptions that may no longer be valid. For example, according to NNSA officials and documents we reviewed: NNSA's analysis assumed that its baseline future funding and staffing levels at the sites would remain flat. This assumption is in sharp contrast to the commitment made by the administration in 2010 to request increased funding to modernize the nuclear security enterprise and to the funding needs identified in the fiscal year 2012 Stockpile Stewardship and Management Plan. NNSA's analysis assumed that commercial nuclear industry data would serve as a valid basis for comparison when forecasting staffing and funding levels at its sites under a consolidated contract. However, several NNSA and contractor officials questioned the use of these commercial data as a benchmark because they may not accurately reflect the work that occurs at NNSA sites. For example, in the case of security costs, an NNSA official said that the security needs and activities at Y-12 and Pantex, which handle nuclear weapons components and nuclear weapons, respectively, differ significantly from security needs at commercial nuclear facilities. NNSA's analysis assumed that the contracts at the Y-12 and Pantex sites would not be extended upon the expiration of their terms at the end of 2010 and would instead be either recompeted or consolidated. However, because of delays in issuing an RFP for the consolidated M&O contract, NNSA was forced to extend the terms of the contracts for both of these sites for an additional 18 months in 2010, which will likely affect estimates of both the expected future costs and the anticipated savings at those sites. In addition, since announcing its intention in March 2010 to consolidate M&O contracts at Y-12 and Pantex, NNSA did not announce the preliminary scope of work (through a draft RFP) to be included in its consolidated M&O contract until July 21, 2011—at the same time we were concluding our work.
This timing limited our ability to review the calculation of estimated savings, since it was unclear whether NNSA's draft RFP would align with the assumptions used in its analysis. According to NNSA officials, the recently released draft RFP for the consolidated M&O contract outlines more complete details about NNSA's proposed contracting strategy. NNSA officials told us that the agency anticipates that industry feedback on the draft RFP, due by September 19, 2011, will be important in structuring the final RFP. Final RFPs include information such as the government's requirement, anticipated terms and conditions of the contract, information required to be in the offeror's proposal, and the factors that will be used to evaluate the proposal and their relative importance. Unlike a draft RFP, which is one way the government promotes early exchanges of information with industry, a final RFP is intended to result in a contracting action. Furthermore, historical cost data were not readily available for NNSA to use in its cost analysis, requiring NNSA to create its own historical "financial common language" for its sites. More specifically, a key step in NNSA's process to estimate savings, developing a comparative baseline of historical site costs, was difficult and inexact because DOE and NNSA contractors use different methods for tracking costs, and DOE's functional support cost data are of limited use in comparing sites. NNSA sought to develop a clearer picture of potential cost savings across its sites, working with NNSA and contractor officials at its sites to resolve discrepancies and anomalies in the historical cost data. Using these "normalized" data and other assumptions, NNSA arrived at its cost savings estimate of $895 million in nominal dollars, which it called "most likely." DOE's OCA, which was established within the CFO's office in 2008, also examined historical functional support costs but excluded some of the most questionable data and arrived at a cost savings estimate of $750 million over 10 years. Any cost savings estimate should be viewed as illustrative rather than precise because of the quality of the data. This is consistent with our Cost Estimating and Assessment Guide, which notes that specific "point" estimates are more uncertain at the beginning of a program because less is known about its detailed requirements and the opportunity for change is greater. In discussions with GAO, NNSA officials agreed that actual savings will be determined more accurately with the release of the draft RFP, which will better define the scope of the work, and, ultimately, by the execution of the contract. In addition to concerns over cost savings, a number of NNSA and contractor officials have raised issues about a consolidated M&O contract potentially disrupting the work at sites. These issues include (1) uncertainty about actual staff reductions, (2) opposition from local constituents, (3) security force issues, (4) the need for a federal oversight plan, and (5) the potential for reducing the number of contractors willing and able to participate in the competition. A number of NNSA site officials we spoke with said that they were skeptical that such large staff reductions were possible under a consolidated M&O contract.
In fact, NNSA reported in its 2012 Stockpile Stewardship and Management Plan (an annual report to Congress on the status of NNSA's efforts to manage and modernize the nuclear weapons enterprise) that contractor staffing levels at the sites are currently too low and that further reductions are not plausible. The plan characterizes current contractor workforce levels as lacking robustness and depth and states that there is little or no redundancy in the contractor workforce. The plan noted, for example, that production activities at the sites were already operating at minimum staff levels, with some sites having recently eliminated staff, and that any further reductions would threaten the success of the sites' missions. Some site officials also said that other indirect functions, such as security and oversight, would not yield any efficiencies under a consolidated contract because those functions would still require the same number of staff at each specific site regardless of the management structure. For other indirect services, such as information technology and human resources, which make up a small portion of the total FTEs, the opinions of site officials were mixed, with some acknowledging the possibility of some reductions and others skeptical of any reductions in FTEs. NNSA announced in 2010 that the incoming contractor of the consolidated M&O contract would have the flexibility to restructure the workforce, which has led to employee concerns at both sites that may present challenges to NNSA. According to one NNSA official, although such provisions are included in other DOE contracts, NNSA typically has not included a workforce flexibility provision in past contract restructurings; instead, it has traditionally accepted the same terms as the previous contractor with regard to human resources issues. Restructuring the workforce now may be difficult because advocates representing current employees, including unions, have voiced opposition to any actions that negatively affect workers. As a result, opposition from some constituents and their representatives could complicate any attempt to consolidate the contracts if that consolidation includes staff reductions. For example, in response to these concerns, two members of Tennessee's congressional delegation recently sent a letter to the Secretary of Energy asking him not to consolidate the contracts at these sites, citing, among other reasons, concerns about the need to maintain a focused and skilled workforce. Even the prospect of a consolidation may already be having negative impacts on staffing. According to contractor officials at one site, some currently vacant support positions that could be eliminated under a consolidated M&O contract, such as a general counsel position, have been difficult to fill. NNSA's analysis notes that employee concerns such as these could affect important site operations, although, according to site office officials, no such effects have been reported to date. In addition, federal site office officials noted two concerns about how a single M&O contract would affect contractor guard forces, which at NNSA and DOE sites are known as protective forces. These forces are a key component of security at sites with special nuclear material, which poses a high security risk. Y-12 and Pantex have more than 1,000 protective force personnel combined.
First, as we recently reported, these protective forces operate under different contracts and contractors, have different pay and benefit structures, and are represented by different collective bargaining agreements. As such, site office officials told us that combining the two protective forces under a single contract could be difficult. Second, the current M&O contractor at Pantex employs that site's protective force, while the protective force at Y-12 is employed under a direct protective force contract (i.e., a non-M&O contract). In addition to the Y-12 site, the same contractor provides, under a separate contract, protective forces for other important, nearby operations on DOE's Oak Ridge Reservation, such as a major environmental cleanup of hazardous materials. It is unclear how protective forces will be provided for DOE's Oak Ridge Reservation under a consolidated M&O contract. Furthermore, because of the increased complexity under a consolidated contract, some NNSA officials said that federal oversight may need to be enhanced. NNSA's analysis showed that effective federal oversight is crucial to realizing cost savings and performance in both current and future contracts and that its employees must be better equipped to manage the contractors under any type of contract. The analysis also recommended that NNSA better train its federal site officials to ensure accountability of its contractors. Federal officials NNSA interviewed as part of conducting its analysis also expressed the need to have federal oversight changes in place before the new contracts go into effect. However, NNSA's plans to improve federal oversight of these contracts are still in the early stages of development. NNSA recently awarded a contract to study the structure, roles, and responsibilities of federal site office oversight, including oversight of the proposed M&O contract consolidation; this study is expected to be completed in December 2012. Until NNSA has the results of its federal site office study, including information on federal workforce needs, it cannot finalize plans and begin to prepare the federal site offices for the transition to the new contracts. In its response to our draft report, NNSA said that it will develop a site office structure prior to contract award. According to NNSA's analysis and some NNSA officials and contractors, the number of contractors willing and able to participate in the competition is likely to decrease (compared with competitions for separate contracts) because the large scope of diverse and complicated work being consolidated leaves fewer contractors with the interest or capability to execute the contract successfully. As part of this review, we found that previous NNSA contract competitions during the last 10 years attracted an average of three contractors or contracting teams. According to some NNSA officials and contractors, it is quite possible that there will be only a single offeror for a consolidated contract, although this offeror would likely consist of a consortium of companies with the specialized technical, management, and administrative expertise to perform the work required by the large contract scope. An NNSA official suggested that such a consortium could preserve the benefits of competition by involving the strongest firms.
After reviewing NNSA's proposal, however, DOE's OCA reported that a decrease in the number of competitors interested in competing for this contract could cause costs to increase over the long term because NNSA may be forced to choose from only one or two contractors. Recently, the Office of Management and Budget also warned that competitions that yield only one offer deprive agencies of the ability to consider alternative solutions in a reasoned and structured manner. NNSA has identified several potential benefits associated with awarding a single, enterprise-wide construction contract, but a number of issues have also been raised by NNSA and others. NNSA's analysis identified some potential benefits, including a new, dedicated, nuclear security enterprise-wide focus on managing major construction projects to meet schedules, achieve cost savings, and implement uniform business practices in executing major projects. However, NNSA's projected savings from a consolidated construction contract—approximately $24 million per year, or $120 million in nominal dollars over a 5-year period—are uncertain, especially since it appears unlikely that some of NNSA's major construction projects will be part of the contract. In addition, NNSA's analysis did not include a formal assessment of the risks involved in this effort, as is recommended by federal standards for internal control. NNSA and others have also identified two potential concerns associated with the new contracting strategy: (1) the need to closely integrate the work of the existing M&O contractors and the new construction contractor, which could necessitate increased federal oversight, and (2) reduced industry interest in the contract if major projects are not included. NNSA's analysis identified several potential benefits that could result from awarding a single construction contract. The potential benefits include the following: allowing the M&O contractors to focus their resources on their core mission of managing and operating sites while U.S. engineering and construction management contractors focus on construction; having a dedicated, nuclear security enterprise-wide focus on managing major construction projects to control costs and meet schedules; implementing uniform business practices in executing major projects across the nuclear security enterprise; and realizing cost savings of about $120 million over a 5-year period, primarily because the eight M&O contractors would be able to reduce construction personnel. NNSA's projected savings from a consolidated construction contract are uncertain. NNSA estimates the projected savings from awarding such a contract at approximately $120 million in nominal dollars over a 5-year period, which is approximately 2 to 3 percent of the projected total construction costs. The cost savings are achieved primarily through the assumption that future M&O contractors will have less need to maintain a large cadre of construction personnel. However, the actual cost savings resulting from implementing the consolidated construction contract strategy that NNSA developed are uncertain for three primary reasons. Specifically: NNSA does not have an accurate total cost baseline for its ongoing and planned construction projects. For example, we reported in February 2011 that NNSA had identified 15 ongoing capital improvement projects as necessary to ensure the future viability of the Stockpile Stewardship Program but did not have estimated total costs or completion dates for all projects.
As we also reported in November 2010, NNSA has a history of inaccurately estimating the cost of major construction projects, including recent inaccurate estimates for facilities included in the potential cost savings estimate. For example, NNSA’s 2007 estimate for its Uranium Processing Facility (UPF) at Y-12 indicated the facility would cost from $1.4 billion to $3.5 billion in nominal dollars to construct—more than double its 2004 estimate of $600 million to $1.1 billion. In 2010, NNSA again adjusted its estimate for the UPF, estimating that the facility will cost from $4.2 billion to $6.5 billion in nominal dollars to construct—roughly double its 2007 estimate. Without an accurate total cost baseline of its ongoing and planned construction projects, it will be difficult for NNSA to accurately estimate savings. The consolidated construction contract may not include some of NNSA’s major construction projects. According to one NNSA official, NNSA’s projected savings from a consolidated construction contract assume that all construction projects costing over $10 million, excluding the Mixed Oxide Fuel Fabrication Facility, which is well under way at SRS, will be included in the contract. NNSA’s analysis assumed that, once the consolidated construction contract is in place, about half of the M&O contractors’ construction personnel will no longer be needed. As with the draft RFP for the consolidated M&O contract, NNSA has delayed the release of the draft RFP for the consolidated construction contract but, according to agency officials, plans to release it later in 2011. These officials told us that the agency anticipates that industry feedback on that RFP will be important in structuring the final RFP. However, an NNSA official associated with the contracting effort recently stated that the contract probably will not include the most expensive and significant construction projects planned for the next 10 years. More specifically, senior NNSA officials told us that it is unlikely that the construction contract will include UPF and the Chemistry and Metallurgy Research Replacement facility (CMRR) at LANL or some other major facilities because including them would disrupt ongoing design and construction carried out by M&O contractors. Collectively, these two facilities represent about 85 percent of NNSA’s total planned construction projects through fiscal year 2016. Other NNSA construction projects are also unlikely to be included in the consolidated contract. For example, the Pit Disassembly and Conversion Facility planned for SRS may not be included in the scope of the consolidated contract, according to NNSA officials, because of this facility’s high cost and lack of a stable cost estimate. According to an agency official, NNSA’s cost savings estimate was relatively cursory, given the lack of an accurate total cost baseline for its ongoing and planned construction projects and given that the focus of the proposal was to improve project management. We found that NNSA, in this part of its acquisition strategy, did not employ best practices identified in our Cost Estimating and Assessment Guide for developing reliable cost estimates, such as conducting a sensitivity analysis. 
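A sensitivity analysis of the kind the Cost Estimating and Assessment Guide recommends varies key assumptions one at a time to show how an estimate responds. The sketch below is purely illustrative and is not NNSA’s model: the baseline figures (roughly $24 million per year in savings, an assumed 50 percent reduction in M&O construction personnel, and the assumed inclusion of all projects over $10 million) come from this report, while the linear scaling and the alternative assumption values tested are hypothetical simplifications.

    # Illustrative sensitivity analysis of the projected 5-year savings.
    # Baseline values are drawn from the report; the scaling model and the
    # alternative assumption values are hypothetical.

    BASELINE_ANNUAL_SAVINGS = 24_000_000  # dollars per year
    YEARS = 5

    def projected_savings(personnel_reduction=0.50, scope_included=1.0):
        # Scale baseline savings by the assumed fraction of M&O construction
        # personnel no longer needed (baseline: about half) and by the share
        # of planned construction scope placed under the consolidated
        # contract (baseline: all projects over $10 million).
        scale = (personnel_reduction / 0.50) * scope_included
        return BASELINE_ANNUAL_SAVINGS * YEARS * scale

    # Vary the personnel assumption while holding scope at its baseline.
    for reduction in (0.25, 0.50, 0.75):
        print(f"personnel reduction {reduction:.0%}: "
              f"${projected_savings(personnel_reduction=reduction):,.0f}")

    # Excluding UPF and CMRR removes roughly 85 percent of planned
    # construction through fiscal year 2016, leaving about 15 percent.
    for scope in (1.0, 0.15):
        print(f"scope included {scope:.0%}: "
              f"${projected_savings(scope_included=scope):,.0f}")

Even this crude model shows why the estimate is sensitive to the contract’s final scope: if only about 15 percent of planned construction is included, proportional savings would fall from $120 million to roughly $18 million over 5 years. 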
In addition, we note that NNSA’s analysis of a consolidated construction contract was far less extensive than its analysis of the consolidated M&O contracts, even though a consolidated construction contract could be worth over $8 billion over the next decade and represents a fundamental change for the nuclear security enterprise. As part of the contract analysis process, in April 2009, NNSA completed a review of construction management alternatives and developed a recommendation to issue an RFP for a consolidated construction contract to include all major construction projects, general projects, and facility infrastructure and revitalization projects. However, this review was largely based on expert judgment and did not include an in-depth analysis of the potential risks of awarding a single construction contract to one company for construction across the nuclear security enterprise. More specifically, NNSA did not conduct a formal assessment of the risks involved in this effort, such as an analysis of the risks arising from the division of roles and responsibilities between the M&O contractor and the construction contractor. One of the federal standards for internal control—risk assessment—states that management should assess the risks the entity faces, both entity-wide and at the activity level, from external and internal sources, and that once risks have been identified, management should decide what actions to take to mitigate them. Risk identification methods may include, among other things, forecasting and strategic planning and consideration of findings from audits and other assessments. According to one NNSA official, NNSA did not develop potential mitigation strategies, even though awarding a single M&O contract for its multiple sites and a single construction contract worth billions of dollars for construction projects across the nuclear security enterprise represents a fundamental change in the way NNSA conducts and manages projects. This is inconsistent with NNSA’s much more detailed analysis of its consolidated M&O contracting proposal and with best practices identified in our Cost Estimating and Assessment Guide, which is a compilation of cost-estimating best practices drawn from across industry and government. NNSA officials and contractors have also identified two issues associated with the new contracting strategy. First, according to NNSA officials and contractors, a chief potential challenge with a consolidated contract is the need to closely integrate the work of the M&O and construction contractors. NNSA will have to develop ways to ensure that current M&O contractors and the winning construction contractor(s) successfully coordinate their respective missions to prevent the disruption of important activities, such as weapons refurbishments, and to allow construction projects, some of which will be located near or in sensitive ongoing site operations, to be completed on schedule and within cost. For example, LANL plans to undertake a number of construction projects near or within its major plutonium operations in the next decade. As a result, both NNSA officials and contractors told us there will be a need for increased federal oversight and coordination to manage and integrate the M&O and construction contractors’ activities associated with these projects. To this end, NNSA is developing a course for its site offices on how to conduct oversight under the new contracting strategy. 
To facilitate coordination, NNSA management has also committed to requesting budget increases to hire temporary employees at the site offices to help integrate and manage the M&O and construction contractors’ activities. However, as discussed earlier, NNSA will not complete a study of federal site office structure and roles and responsibilities until December 2012. Until NNSA has the results of its federal site office study, including federal workforce needs under the new contracting strategy, it cannot finalize plans and begin to prepare the federal site offices for the transition to the new strategy. Second, NNSA’s analysis concluded that excluding major construction projects may reduce the number and quality of competitors willing and able to bid because the contract would be less profitable. A recent NNSA report on contractor perspectives on a consolidated construction contract found that including UPF in the contract is key to drawing strong interest from the best construction firms and that the scope of the contract would determine the level of competition for it. This is because the top U.S. engineering and construction companies may be interested in the contract only if it includes the higher-profit large construction projects. This report, which captures feedback from construction contractors concerning the contract strategy for UPF, noted that the competitive landscape for a construction contract competition that does not include UPF could suffer, with the top engineering and construction firms possibly not participating. The report further notes that excluding UPF from the consolidated construction contract would send a signal of “business as usual” to the contracting community and would not represent a significant commitment by NNSA to improving the management of large construction projects. As the U.S. nuclear stockpile is being reduced, NNSA is to receive additional funds to modernize and operate the nuclear security enterprise. The funds will be used, in part, to refurbish most of the weapon types currently in the stockpile and to construct important new production facilities to support these refurbishments. NNSA envisions an integrated, interdependent nuclear security enterprise characterized by, among other things, fewer, more uniform contracts with multisite incentives and more uniform business practices. Since contractors execute the vast majority of the agency’s mission, it is reasonable for NNSA to focus its attention on the types, structure, and management of its contracts. Thus, to improve its operations and business practices, NNSA proposed, in 2010, consolidating the M&O contracts for two significant nuclear production sites—Y-12 and Pantex—and awarding a single contract for complex-wide construction. NNSA’s analysis supporting these proposals noted that simply changing contract types and structures will produce little effect unless NNSA better manages its contracts. NNSA’s analysis also identified 18 improved management practices—some of which could be accomplished now through existing contracts—such as improving enterprise-wide collection and analysis of costs, that could lead to greater efficiencies regardless of the contracting strategy employed. In fact, officials representing the contractors at Pantex and Y-12 both said their companies had begun implementing some of these changes at their respective sites and had seen efficiencies and savings. 
In our view, these actions on the part of contractors are positive and a step in the right direction toward more effectively and efficiently managing NNSA contracts. However, NNSA has not identified in a systematic manner how it plans to implement these 18 improved management practices at all of its sites. Without such an approach or plan, it is unclear whether NNSA is taking every opportunity to improve management practices. NNSA has committed to pursuing its two-part acquisition strategy, but until NNSA undertakes certain actions, the strategy will not be completely defined, and its benefits will remain uncertain. These actions include incorporating industry feedback on its recently released draft RFP for the proposed Y-12 and Pantex M&O contract; releasing a draft RFP for the enterprise-wide construction proposal; updating its analysis using industry feedback, current budget projections, and project cost estimates; and developing an integrated federal site office structure applicable to both proposals to effectively manage and oversee their implementation with clearly identified roles and responsibilities. For example, until NNSA releases a draft RFP for the enterprise-wide construction proposal, it cannot assess industry interest and cannot begin to prepare the final RFP. Furthermore, NNSA did not conduct a formal assessment, consistent with federal standards for internal control, of the risks involved in the consolidation of its construction contracts, such as potential conflicts between the M&O contractor and the construction contractor. Without more information, it will remain difficult to accurately assess whether NNSA will realize its goals of more efficient and effective operations through the implementation of the proposed acquisition strategy. Consistent with cost-estimating best practices, such information should specify the costs, risks, and benefits expected enterprise-wide and at each site for both proposed consolidated contracts. In addition, NNSA will not complete a study of federal site office structure and roles and responsibilities until December 2012. Without the results of this study, NNSA cannot finalize plans and begin to prepare the federal site offices for the transition to the new contracts. We recommend that the Secretary of Energy take the following four actions. In order to manage NNSA’s contracts as effectively and efficiently as possible, the Secretary of Energy should direct the Administrator of NNSA to take the following action: Develop a plan for implementing the 18 improved management practices identified by its analysis, as appropriate, to improve its current contract management practices. If NNSA continues to pursue its two-part acquisition strategy, the Secretary of Energy should direct the Administrator of NNSA to take the following actions to better define and inform the agency’s strategy: Issue a draft RFP for the enterprise-wide construction proposal. Using updated information gathered through the draft RFPs and recent budget projections and cost estimates, analyze the consolidated M&O proposal and the enterprise-wide construction proposal. Consistent with federal standards for internal control and cost-estimating best practices, this analysis should assess the costs, risks, and benefits expected enterprise-wide and at each site. NNSA should use this analysis as it prepares its final RFPs for each proposal. 
Using the results of the federal site office study, develop an integrated federal site office structure applicable to both proposals to prepare the site offices before the transition to the new contracts. We provided NNSA with a draft of this report for its review and comment. NNSA provided written comments to the draft report—in which it generally agreed with our findings and recommendations—and technical comments, which we have incorporated as appropriate. NNSA commented that it “does not agree that similar cost efficiencies could be obtained without a contract consolidation” and that we should adjust statements in the report related to these efficiencies. In response, we removed the word “similar.” However, consistent with our recommendation in this report, NNSA agreed to develop a plan for implementing the 18 recommendations outlined in its analysis to improve current contract management practices. This, in our view, indicates NNSA’s agreement that the efficiencies gained in doing so would enhance its ability to carry out the mission at NNSA’s various sites, regardless of a contract consolidation. This is also consistent with NNSA’s own analysis, which stated that actions can be taken under the current contracts to improve the effectiveness and efficiency of operations at the individual sites. In its comments, NNSA also stated that it may not issue a draft RFP for the enterprise-wide construction proposal but may, instead, issue another form of solicitation, such as a final RFP or sealed bid, by the end of September 2011. However, given the delays associated with the issuance of its draft RFP for the consolidated M&O contract and given the benefits of issuing a draft RFP outlined in our report, we continue to recommend, if NNSA pursues this part of its acquisition strategy, that the agency issue a draft RFP for the enterprise-wide construction contract. Consistent with the Federal Acquisition Regulation, a draft RFP will help provide information on NNSA’s requirements and industry capabilities and may enhance NNSA’s ability to obtain quality supplies and services at reasonable prices and increase efficiency in proposal preparation, proposal evaluation, negotiation, and contract award. As we also recommended, information gathered through the draft RFP, when combined with recent budget projections and cost estimates, should be used by NNSA to assess, in ways consistent with federal standards for internal control and cost-estimating best practices, the costs, risks, and benefits of NNSA’s proposal expected enterprise-wide and at each NNSA site. The full text of NNSA’s comments is reproduced as appendix I in this report. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of NNSA, the Director of the Office of Management and Budget, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. 
In addition to the individual named above, Jonathan Gill, Assistant Director; Jonathan Kucskar; Jeff Larson; Mehrzad Nadji; Alison O’Neill; Tim Persons; Peter Ruedel; Ron Schwenn; Vasiliki Theodoropoulos; and Alyssa Weir made key contributions to this report.

The National Nuclear Security Administration (NNSA)--a semiautonomous agency within the Department of Energy (DOE)--proposed in March 2010 a new acquisition strategy that includes consolidating the management and operating (M&O) contracts for two of its eight sites--the Y-12 National Security Complex (Y-12) in Tennessee and the Pantex Plant in Texas--and consolidating all construction projects for all of its sites under a single, enterprise-wide contract. NNSA anticipates that this strategy will reduce costs, enhance mission performance, and improve construction management. NNSA's sites are overseen by colocated federal site offices. GAO was asked to assess NNSA's preliminary proposals for (1) a consolidated M&O contract for Y-12 and Pantex and (2) an enterprise-wide construction contract. GAO reviewed analyses supporting NNSA's acquisition strategy; examined agency directives and guidance; and interviewed DOE, NNSA, and contractor officials. Based on the analysis supporting its proposed acquisition strategy, NNSA expects that the proposed consolidation of the M&O work at its Y-12 and Pantex Plants will increase efficiencies and save $895 million in nominal dollars, primarily through efficiency gains and other improvements in support services (e.g., integrated budget and finance systems and more uniform training and human resources practices), which could result in the elimination of about 1,000 support service jobs over the next 10 years. NNSA selected these sites because both have M&O contracts with terms that expire in 2012, as well as similar nuclear production operations. Anticipated savings from this proposed consolidation, however, are uncertain because of the assumptions NNSA used when calculating these savings, the limited details available about the actual work that will be consolidated, and the adequacy of historical data used in the analysis. NNSA officials said that savings will be more accurately determined as industry provides feedback on the recently released draft request for proposal. In addition to cost savings, a number of NNSA and contractor officials have raised other issues with the consolidated M&O contract proposal, including uncertainty about the number of actual staff reductions that can be achieved and the need for a federal oversight plan for the new consolidated contract. In addition, NNSA's analysis suggests that efficiencies may also be achieved under its existing contracts through improved management practices. However, NNSA has not developed a plan for implementing these improved management practices at all of its sites. NNSA also anticipates several potential benefits, including cost savings, associated with awarding a single, enterprise-wide construction contract. It is uncertain, however, whether these benefits will be realized because of a number of issues. 
For example, NNSA's projected savings from a consolidated construction contract--approximately $120 million in nominal dollars over a 5-year period--are uncertain because NNSA lacks an accurate total cost baseline of its ongoing and planned construction projects and because it is likely that the construction contract will exclude major projects, such as the Uranium Processing Facility and Chemistry and Metallurgy Research Replacement facility, out of concern that this consolidated contract would disrupt ongoing design and construction efforts. Collectively, these two facilities represent about 85 percent of NNSA's total planned construction projects through fiscal year 2016. In addition, NNSA has not conducted, consistent with federal standards for internal control and cost-estimating best practices, an assessment of the risks associated with awarding an enterprise-wide construction contract, including the costs, risks, and benefits expected enterprise-wide and at each site for both proposed consolidated contracts. NNSA officials and contractors said that NNSA may need increased federal oversight to integrate the work of existing M&O and consolidated construction contractors. GAO recommends, among other things, that NNSA develop a plan for implementing the improved management practices identified by its analysis and assess the costs, risks, and benefits of the consolidated construction contract to better define and inform its acquisition strategy and to take appropriate future actions. NNSA generally agreed with GAO's findings and recommendations. 
For fiscal year 2015, VA estimated it received $59.2 billion in appropriations, including collections, to fund health care services for veterans, manage and administer VA’s health care system, and operate and maintain the VA health care system’s capital infrastructure. VA estimated that in fiscal year 2015 it provided health care services—including inpatient services, outpatient services, and prescription drugs—to 6.7 million eligible patients. For calendar year 2015, the Medicare Trustees estimated that CMS paid MA plans about $155 billion to provide coverage for 16.4 million Medicare beneficiaries. MA beneficiaries can enroll in one of several different plan types, including health maintenance organizations (HMO), private fee-for-service (PFFS) plans, preferred provider organizations (PPO), and regional PPOs. Medicare pays MA plans a capitated per member per month (PMPM) amount. This amount is based in part on a plan’s bid, which is its projection of the revenue it requires to provide a beneficiary with services that are covered under Medicare FFS, and a benchmark, which CMS generally calculates from average per capita Medicare FFS spending in the plan’s service area and other factors. If a plan’s bid is higher than the benchmark, Medicare pays the plan the amount of the benchmark, and the plan must charge beneficiaries a premium to collect the amount by which the bid exceeds the benchmark. If the plan’s bid is lower than the benchmark, Medicare pays the plan the amount of the bid and makes an additional payment to the plan called a rebate. Plans may use this rebate to fund benefits not covered under Medicare FFS. CMS uses risk scores to adjust PMPM payments to MA plans to account for beneficiaries’ health status and other factors, a process known as risk adjustment. For beneficiaries enrolled in MA, risk scores are generally determined on the basis of diagnosis codes submitted for each beneficiary, among other factors, and are adjusted annually to account for changes in diagnoses from the previous calendar year. In addition, risk scores for beneficiaries who experience long-term stays of more than 90 days are calculated differently to account for the differences in expected health expenditures. While risk scores are based on diagnoses from the previous year, changes to the risk score to account for long-term hospital stays of more than 90 days are reflected in the calendar year when the stay occurred. The Patient Protection and Affordable Care Act (PPACA) changed how benchmarks are calculated so that they will be more closely aligned with Medicare FFS spending. Specifically, the benchmark changes, which are to be phased in from 2012 through 2017, will result in benchmarks tied to a percentage of per capita Medicare FFS spending in each county. In general, for those counties in the highest Medicare FFS spending quartile, benchmarks will be equal to 95 percent of county per capita Medicare FFS spending, and for those counties in the lowest Medicare FFS spending quartile, benchmarks will be equal to 115 percent of per capita Medicare FFS spending. Prior to 2012, benchmarks in all counties were at least as high as per capita Medicare FFS spending but were often much higher. For example, while counties generally had benchmarks derived from per capita county Medicare FFS spending, those benchmarks were increased annually by a minimum update equal to the national growth rate percentage in Medicare FFS spending. 
In cases where the growth rate used to update the benchmark was greater than the rate at which per capita Medicare FFS spending grew within a county, the update would result in a benchmark that was higher than the average per capita county Medicare FFS spending rate. In addition, some urban and rural counties had benchmarks that were “floor” rates, which were set above per capita county Medicare FFS spending rates to encourage insurers to offer plans in the areas. According to a CMS study reported in the 2010 MA Advance Notice, approximately 96 percent of counties had benchmarks that were set based on a minimum update or were floor rates. Especially in counties with a relatively high proportion of veterans, average per capita Medicare FFS spending may be low if many veterans receive health care services from VA instead of Medicare providers. Because benchmarks are calculated based in part on Medicare FFS spending, MA payments may be lower in such counties and may not reflect Medicare’s expected cost of caring for nonveterans. CMS is required to estimate, on a per capita basis, the amount of additional Medicare FFS payments that would have been made in a county if Medicare-eligible veterans had not received services from VA. If needed, CMS is also required to make a corresponding MA payment adjustment. To address these requirements, CMS reported, in the 2010 MA Advance Notice, the results of its study analyzing the cost impact on 2009 Medicare FFS county rates of removing veterans eligible to receive services from VA. CMS reported that, on average, removing veterans from the calculation of counties’ per capita Medicare FFS spending rate had minimal impact on per capita spending and that the differences in expenditures between all Medicare beneficiaries and nonveterans were more attributable to normal, random variation than to distinctly different spending for the two populations. Based on CMS’s study results, the agency concluded that no adjustment for VA spending on Medicare-covered services was necessary for 2010 through 2016 MA payments. In 2016, CMS updated its 2009 study using more recent data and determined that an adjustment would be necessary for 2017. VA provided about $2.4 billion in Medicare-covered inpatient and outpatient services to the 833,684 MA-enrolled veterans in fiscal year 2010. In total, VA provided approximately 61,000 inpatient services and 8.2 million outpatient services to veterans enrolled in MA plans. During that same time period, CMS paid MA plans $8.3 billion to provide all Medicare-covered services to veterans enrolled in an MA plan. VA’s provision of services to MA-enrolled veterans resulted in overall payments to MA plans that were likely lower than they otherwise would have been if veterans had obtained all of their Medicare-covered services through Medicare FFS providers and MA plans. Specifically, because VA provides services to MA-enrolled veterans, the three components that determine payments to MA plans—benchmarks, bids, and risk scores—are likely lower than they otherwise would be, which results in lower overall payments to MA plans. Benchmarks—Because benchmarks are generally calculated in part from per capita county Medicare FFS spending rates, any VA spending on Medicare-covered services for veterans enrolled in Medicare FFS would be excluded from the benchmark calculation. As a result, the benchmark would be lower and, in turn, payments to MA plans would also be lower. 
This would be particularly true following the implementation of the PPACA revisions to the benchmark calculation—to be phased in from 2012 through 2017—as those revisions further strengthened the link between the benchmark and average per capita county Medicare FFS spending rates. Bids—MA payments also may be lower to the extent that MA plans set bids based on historical experience. MA plan bids may reflect the fact that in previous years enrolled veterans received some Medicare-covered services from VA instead of the MA plan. If so, MA plan bids would be lower and, in turn, MA payments would also be lower. Risk scores—VA’s provision of Medicare-covered services may result in lower risk scores because, like benchmarks, they are calibrated based on Medicare FFS spending for beneficiaries with specific diagnoses identified by Medicare. As a result, any VA spending on Medicare-covered services for veterans enrolled in Medicare FFS that is related to these diagnoses would be excluded when the model is calibrated. In addition, MA plans would generally not have access to diagnoses made by VA. Therefore, when VA identifies and treats a diagnosis not identified by the veteran’s MA plan, it would not be incorporated into the veteran’s risk score. Because PMPM payments to MA plans are risk-adjusted, a lower risk score would result in lower payments to MA plans. Although VA spending on Medicare-covered services likely results in lower CMS payments to MA plans, the extent to which these payments reflect the expected utilization of services by the MA population remains uncertain. Specifically, payment amounts may still be too high or could even be too low, depending on the utilization of VA services by veterans enrolled in MA plans and veterans enrolled in Medicare FFS. As noted earlier, both benchmarks and risk scores are generally calibrated based on veterans and nonveterans enrolled in Medicare FFS. However, veterans enrolled in MA plans may differ in the proportion of services they receive from VA compared to veterans enrolled in Medicare FFS, which would affect the appropriateness of payments to MA plans. For example, payments to MA plans may be too high if veterans enrolled in MA receive a greater proportion of their services from VA relative to veterans enrolled in Medicare FFS. Under this scenario, the benchmark would reflect the higher use of Medicare services by Medicare FFS beneficiaries, who receive fewer of their services from VA than do veterans enrolled in MA. As a result, the benchmark may be too high and, in turn, payments to MA plans may be too high. This effect of a higher benchmark may be at least partially offset by a risk score that is too high. In contrast, payments to MA plans may be too low if veterans enrolled in MA receive a lesser proportion of their services from VA relative to veterans enrolled in Medicare FFS. Under this scenario, the benchmarks may be too low and may result in MA plans being underpaid, although the effect may be partially offset by risk scores that are too low. To assess whether there are service utilization differences between the MA and Medicare FFS veteran populations that result in payments to MA plans that are too high or too low, data on the services veterans receive from Medicare FFS, MA, and VA would be needed. Data on veterans’ use of services through Medicare FFS and VA health care are available from CMS and VA, respectively. 
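To make the payment mechanics described above concrete, the following sketch computes a monthly MA payment from a bid, a benchmark, and a risk score. It is a deliberately simplified illustration, not CMS’s actual formula: the dollar amounts are hypothetical, and the rebate is modeled as the full benchmark-bid difference even though in practice the plan receives only a portion of that difference.

    # Simplified sketch of a monthly payment to an MA plan. Amounts are
    # hypothetical; the rebate share is a simplification of CMS's rules.

    def monthly_payment(bid, benchmark, risk_score, rebate_share=1.0):
        """Return (risk-adjusted payment to plan, beneficiary premium)."""
        if bid >= benchmark:
            # Plan is paid the benchmark; the beneficiary premium collects
            # the amount by which the bid exceeds the benchmark.
            base, premium = benchmark, bid - benchmark
        else:
            # Plan is paid its bid plus a rebate that can fund benefits
            # not covered under Medicare FFS.
            base, premium = bid + (benchmark - bid) * rebate_share, 0.0
        # Payments are risk-adjusted for health status and other factors.
        return base * risk_score, premium

    # VA-provided care tends to lower both the benchmark (through lower
    # county FFS spending) and the risk score (through diagnoses the MA
    # plan never observes), so the payment is lower on both counts.
    print(monthly_payment(bid=750.0, benchmark=800.0, risk_score=1.00))
    print(monthly_payment(bid=750.0, benchmark=780.0, risk_score=0.95))

Filling in realistic values for the MA veteran population would require data on the services those veterans actually receive from VA, Medicare FFS, and the MA plans themselves. 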
However, CMS does not currently have validated data that could be used to determine veterans’ use of services through MA. CMS began collecting data from MA plans on diagnoses and services provided to beneficiaries in January 2012. We reported in July 2014 that CMS had taken some, but not all, appropriate actions to ensure that these data—known as MA encounter data—are complete and accurate. At that time, we recommended that CMS complete all the steps necessary to validate the data, including performing statistical analyses, reviewing medical records, and providing MA organizations with summary reports on CMS’s findings. CMS agreed with the recommendation but, as of August 2015, had not completed all steps needed to validate the encounter data. CMS determined that no adjustment to 2010 through 2016 MA payments was needed to account for the provision of Medicare-covered services by VA, but it used a methodology that had certain shortcomings that could have affected MA payments. CMS is required to estimate, on a per capita basis, the amount of additional payments that would have been made in a county if Medicare-eligible veterans had not received services from VA and, if needed, to make a corresponding adjustment to MA payments. If CMS determined that an MA payment adjustment was necessary, it would make the adjustment by using per capita county Medicare FFS spending rates modified to account for the effect of VA spending on Medicare-covered services. Per capita county Medicare FFS spending rates serve as the basis of the benchmarks used in determining MA payment rates. To determine whether an adjustment was needed, CMS obtained data from VA showing veterans who are enrolled in VA health care and Medicare FFS (that is, enrollment data). CMS then estimated the effect of VA spending on Medicare FFS spending by calculating average per capita county Medicare FFS spending for nonveterans and comparing it to the average per capita county Medicare FFS spending for all Medicare FFS beneficiaries, after adjusting for beneficiaries’ risk. However, CMS’s methodology did not account for two factors that could have important effects on the results: (1) services provided by and diagnoses made by VA but not identified by Medicare and (2) changes to the benchmark calculation under PPACA. First, because CMS used only Medicare FFS utilization and diagnosis data in its study, the agency’s methodology did not account for services provided by and diagnoses made by VA, which could result in inaccurate estimates of how VA spending on services for Medicare FFS-enrolled veterans affects per capita county Medicare FFS spending. Only VA’s utilization and diagnosis data can account for services provided by and diagnoses made by VA. Without this information, CMS’s estimate of how VA spending affects per capita county Medicare FFS spending rates may be inaccurate. Specifically, estimates of per capita county Medicare FFS spending for all beneficiaries, including veterans, may be too low because services provided by VA would not be accounted for in Medicare FFS spending. Excluding those services could have the effect of deflating veterans’ risk-adjusted Medicare FFS spending and therefore total per capita county Medicare FFS spending. 
Conversely, estimates of per capita county Medicare FFS spending for all beneficiaries, including veterans, may be too high because excluding diagnoses identified only by VA could result in Medicare risk scores that are too low, which would have the effect of inflating veterans’ risk-adjusted Medicare FFS spending and therefore total per capita county Medicare FFS spending. Thus, depending on the number and mix of services provided by and the diagnoses made by VA, risk-adjusted Medicare FFS spending for veterans may be either higher or lower than it would be if CMS accounted for VA-provided services and diagnoses. Second, because CMS’s study was done in 2009, it did not account for changes to the benchmark calculation that occurred under PPACA and that are to be phased in from 2012 through 2017. CMS noted in 2009 that only 45 of the 3,127 counties nationwide would have had per capita county Medicare FFS spending rate increases after accounting for VA spending. According to CMS, the number of affected counties was low in part because many counties had payment rate minimums, which often resulted in benchmarks that were higher than per capita county Medicare FFS spending. However, as noted earlier in this report, PPACA revised the benchmark calculation to more closely align benchmarks with average per capita county Medicare FFS spending rates. As these revised benchmark calculations are implemented, counties will no longer have benchmarks set based on minimum updates or floor rates. Because CMS did not update its 2009 study when determining whether an adjustment was necessary through 2016, the agency lacked accurate information on the number of additional counties in which VA spending on Medicare-covered services would have made a difference in per capita county Medicare FFS spending rates. When CMS updated its 2009 study to determine whether an MA payment adjustment was needed for 2017, it used the same methodology, albeit with more recent data. Doing so allowed CMS to account for the revised benchmark calculations implemented under PPACA. However, CMS cannot address the other limitation we identified without additional data. Specifically, CMS cannot account for services provided by and diagnoses made by VA. Officials said that they did not intend to incorporate VA utilization and diagnosis data into their analysis because they did not currently have such data and because incorporating these data would introduce additional uncertainty into the analysis. For example, CMS officials noted that there would be challenges associated with estimating how much Medicare would have spent if the covered services had been obtained from Medicare providers instead of VA. We agree that CMS would face challenges incorporating VA data into its analysis, but if an adjustment is needed and not made or if the adjustment made is too low, the PMPM payment may be too high for veterans and too low for nonveterans. Depending on the mix of veterans and nonveterans enrolled by individual MA plans, this could result in some plans being paid too much and others too little. Both CMS and VA officials told us that the agencies have a data use agreement in place that allows them to share some data, but this agreement does not include data on services VA provides to Medicare beneficiaries. According to VA, as of December 2015, CMS had not requested its utilization and diagnosis data. 
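The two directions of error described above can be illustrated with a hypothetical county, where risk-adjusted per capita spending is spending divided by the risk score; all of the numbers below are invented for illustration.

    # Hypothetical illustration of the two biases described above.

    def risk_adjusted(spending, risk_score):
        # Risk-adjusted per capita spending.
        return spending / risk_score

    # A Medicare FFS veteran as CMS observes him: some services and some
    # diagnoses occur only at VA, so Medicare data reflect neither.
    medicare_spending, medicare_risk = 8_000.0, 1.00
    va_only_spending, va_only_risk = 2_000.0, 0.10  # invented values

    observed = risk_adjusted(medicare_spending, medicare_risk)
    with_va_services = risk_adjusted(medicare_spending + va_only_spending,
                                     medicare_risk)
    with_va_diagnoses = risk_adjusted(medicare_spending,
                                      medicare_risk + va_only_risk)

    print(f"Medicare data only:  {observed:,.0f}")           # 8,000
    print(f"adding VA services:  {with_va_services:,.0f}")   # 10,000
    print(f"adding VA diagnoses: {with_va_diagnoses:,.0f}")  # about 7,273

Relative to the fuller picture, the Medicare-only estimate is deflated when VA services are ignored and inflated when VA diagnoses are ignored, which is why the net direction of the bias depends on the number and mix of VA-provided services and diagnoses. 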
Federal standards for internal control call for management to have the operational data it needs to meet agency goals, to effectively and efficiently use resources, and to help ensure compliance with laws and regulations. In this case, without VA data on diagnoses and utilization, CMS may be increasing the risk that it is not effectively meeting the requirement to adjust payments to MA plans, as appropriate, to account for VA spending on services for Medicare beneficiaries. If CMS revises its study methodology and determines that an adjustment to the benchmark to account for VA spending is needed, it may need to make additional MA payment adjustments to ensure that payments are equitable for individual MA plans. A benchmark adjustment would increase payments for nonveterans and would address the possibility that payments to MA plans with a high proportion of nonveterans would be too low. However, if CMS makes a benchmark adjustment, it would also increase MA payments for veterans. While the resulting higher payment to MA plans for nonveterans may be appropriate, higher payments for veterans may not be appropriate, because veterans may be receiving some services from VA. In that case, payments to MA plans that enroll veterans would be too high, with the degree of overpayment increasing as the proportion of veterans enrolled by plans increases. To ensure that payments to MA plans are equitable regardless of differences in the demographic characteristics of the plans’ enrollees, CMS is authorized to adjust payments to MA plans based on such risk factors as it determines to be appropriate. Therefore, if CMS determines that an adjustment to the benchmark to account for VA spending is needed and the adjustment results in payments to MA plans that are too high for veterans, additional adjustments to payments to MA plans could be necessary. Given that veterans enrolled in both an MA plan and the VA health care system can receive Medicare-covered services from either source, it is important to consider how the provision of services by VA affects payments to MA plans. In fiscal year 2010, VA provided $2.4 billion worth of inpatient and outpatient services to MA-enrolled veterans, which likely resulted in lower overall payments to MA plans. However, the appropriateness of these lower payments is uncertain, given potential differences in the proportion of services veterans enrolled in MA plans and Medicare FFS receive from VA. An estimate of the differences between the two populations of veterans would enable CMS to determine if additional actions are needed to improve the accuracy of PMPM payments. To this end, we recommended in July 2014 that CMS validate the MA encounter data, which would be needed to determine if there are differences in utilization of services between veterans in MA and Medicare FFS. In addition, it is important to ensure that VA spending on Medicare-covered services does not result in inequitable payments to individual MA plans for veterans and nonveterans. While CMS is required to adjust MA payments to account for VA spending on Medicare-covered services, as appropriate, the agency determined that no adjustment to the benchmark, which is based in part on per capita county Medicare FFS spending, was necessary for years 2010 through 2016. CMS updated the study it used to make this determination in 2016 and determined that an adjustment was necessary for 2017. However, both CMS’s 2009 study and its 2016 study were limited because the agency did not have VA utilization and diagnosis data. 
Adjusting the study’s methodology to incorporate these data could change the study’s findings and result in CMS making a larger adjustment to the benchmark in future years. Such a benchmark adjustment could improve the accuracy of payments for nonveterans. However, a benchmark adjustment could also result in or exacerbate payments to MA plans that are too high for veterans, so additional MA payment adjustments could become necessary. We recommend that the Secretary of Health and Human Services direct the Administrator of CMS to take the following two actions: Assess the feasibility of updating the agency’s study on the effect of VA-provided Medicare-covered services on per capita county Medicare FFS spending rates by obtaining VA utilization and diagnosis data for veterans enrolled in Medicare FFS under its existing data use agreement or by other means as necessary. If CMS makes an adjustment to the benchmark to account for VA spending on Medicare-covered services, the agency should assess whether an additional adjustment to MA payments is needed to ensure that payments to MA plans are equitable for veterans and nonveterans. We provided a draft of this product to VA and the Department of Health and Human Services (HHS). HHS provided written comments on the draft, which are reprinted in appendix II. Both VA and HHS provided technical comments, which we incorporated as appropriate. In its comments, HHS concurred with one of our two recommendations. HHS agreed with our recommendation that if CMS makes an adjustment to the benchmark to account for VA spending on Medicare-covered services, it should assess whether an additional adjustment to MA payments is needed to ensure that payments to MA plans are equitable for veterans and nonveterans. HHS acknowledged that CMS is required to estimate, on an annual basis, the amount of additional Medicare FFS payments that would have been made in a county if Medicare-eligible veterans had not received services from VA and, if necessary, to make a corresponding MA payment adjustment. In the 2017 MA Advance Notice, CMS provided the results of its updated analysis, which used the same methodology as its 2010 analysis, but with more recent data. Based on its findings, CMS plans to make an adjustment to 2017 MA payment rates to account for VA spending on Medicare-covered services. In its comments, HHS stated that CMS will assess whether an additional adjustment to MA plan payments is needed to ensure that payments to MA plans are equitable for veterans and nonveterans. We encourage CMS to complete its assessment prior to finalizing its 2017 payments to ensure that payments to MA plans will be equitable when the adjustment to account for VA spending on Medicare-covered services is made. HHS did not concur with our recommendation that CMS should assess the feasibility of updating the agency’s study on the effect of VA-provided Medicare-covered services on per capita county Medicare FFS spending rates by obtaining VA utilization and diagnosis data for veterans enrolled in Medicare FFS. HHS stated that CMS uses Medicare FFS spending rates, which exclude services provided by VA facilities, when setting the benchmark. In addition, HHS stated that incorporating VA utilization and diagnosis data into CMS’s analysis may not materially improve the analysis and the resulting adjustment. HHS indicated that it will continue to review the need for incorporating additional data or for methodology changes in the future. 
As we note in the report, only VA’s utilization and diagnosis data can account for services provided by and diagnoses made by VA. Depending on the number and mix of services provided by and the diagnoses made by VA, risk-adjusted Medicare FFS spending for veterans may be either higher or lower than it would be if CMS accounted for VA-provided services and diagnoses. Therefore, relying exclusively on Medicare FFS spending to estimate the effect of VA spending on Medicare FFS-enrolled veterans could result in an inaccurate estimate of how VA spending on services for Medicare FFS-enrolled veterans affects per capita county Medicare FFS spending. While there may be challenges associated with incorporating VA utilization and diagnosis data into CMS’s analysis, we maintain that CMS should work to do so, given the implications that not incorporating the data may have on the accuracy of payments to MA plans. We continue to believe that an important first step would be for CMS to assess the feasibility of incorporating VA utilization and diagnosis data in a way that can overcome the challenges identified by CMS and potentially lead to a more accurate adjustment. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of CMS, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This appendix describes the scope and methodology used to (1) estimate the amount that the Department of Veterans Affairs (VA) spends to provide Medicare-covered services to veterans enrolled in Medicare Advantage (MA) plans and how VA spending on these services affects Centers for Medicare & Medicaid Services (CMS) payments to MA plans; and (2) evaluate the extent to which CMS has the data it needs to determine an appropriate adjustment, if any, to MA payments to account for VA’s provision of Medicare-covered services to MA-enrolled veterans. To estimate the amount that VA spends to provide Medicare-covered services to veterans enrolled in MA plans, we first identified veterans with at least 1 month of overlapping enrollment in an MA plan and in VA health care in fiscal year 2010. VA provided us with an enrollment file that included veterans enrolled in VA health care for at least 1 month in fiscal year 2010 and whom VA had identified as having at least 1 month of Medicare private plan enrollment. To determine months of MA enrollment in fiscal year 2010, we matched the VA enrollment file to Medicare’s calendar year 2009 and 2010 Denominator Files based on whether beneficiaries had the same Social Security number and the same date of birth, sex, or both. We excluded those beneficiaries who did not have at least 1 month of overlapping MA and VA health care enrollment. In addition, we excluded veterans in the VA enrollment file who did not have a VA enrollment start date, were listed as having died prior to fiscal year 2010, or were not enrolled in one of the four most common MA plan types. 
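A minimal sketch of this matching logic, written in Python with pandas, appears below; the column names and the sample records are hypothetical, and the actual enrollment and Denominator Files contain far more fields and records.

    import pandas as pd

    # Hypothetical stand-ins for the VA enrollment file and the Medicare
    # Denominator Files; the real files contain many more fields.
    va = pd.DataFrame({"ssn": ["111", "222", "333"],
                       "dob": ["1940-01-01", "1942-05-05", "1939-09-09"],
                       "sex": ["M", "F", "M"]})
    medicare = pd.DataFrame({"ssn": ["111", "222", "444"],
                             "dob": ["1940-01-01", "1942-05-06", "1950-02-02"],
                             "sex": ["M", "F", "F"]})

    # Match on Social Security number, then keep pairs in which the date
    # of birth, the sex, or both also agree, mirroring the criteria above.
    merged = va.merge(medicare, on="ssn", suffixes=("_va", "_med"))
    matched = merged[(merged["dob_va"] == merged["dob_med"]) |
                     (merged["sex_va"] == merged["sex_med"])]
    print(matched["ssn"].tolist())  # ['111', '222']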
After all exclusions, we identified 833,684 veterans with at least 1 month of overlapping enrollment in an MA plan and VA health care in fiscal year 2010. We identified all inpatient and outpatient services provided by VA to those veterans in our population during fiscal year 2010. VA can provide inpatient and outpatient services directly at one of its medical facilities, or it can contract for care, known as VA care in the community; we received inpatient and outpatient utilization files for both types of VA-provided care. We excluded prescription drug services from our analysis, as payments to MA plans for coverage of Part D services are determined differently than are payments for other Medicare-covered services. We also excluded services that were received during a month when the veteran was not enrolled in both VA health care and an MA plan. We considered an inpatient stay, which can last multiple days, to have occurred during a month of overlapping enrollment if 1 or more days of the stay occurred during a month in which the veteran was enrolled in both VA health care and an MA plan. In some instances, hospital stays had an admission date prior to fiscal year 2010 or a discharge date after it, and in those cases, we included only the portion of the stay that occurred during fiscal year 2010. We excluded those inpatient and outpatient services that were provided by VA but were not covered by Medicare. For inpatient services directly provided by VA, we used the category of care assigned to each service by VA to exclude service categories not covered by Medicare, such as intermediate and domiciliary care. In addition, we excluded services provided by VA that went beyond Medicare benefit limits. Because MA plans may have different benefit limits than Medicare fee-for-service (FFS), we analyzed the benefits offered by a sample of 45 MA plans for 2014, focusing on services covered by Medicare FFS that have benefit limits. We identified the most common benefit limits for those services and used them as our benefit limits for VA services. In cases where some or all MA plans had service categories with lifetime reserve days (e.g., inpatient days beyond the 90 days Medicare covers per benefit period, up to an additional 60 days per lifetime), we assumed that beneficiaries had 25 percent of their lifetime reserve days remaining. For inpatient services provided through VA care in the community, we excluded hospice services; services with cancelled payments; and services with a classification of dental, contract halfway house, pharmacy, reimbursement, or travel. For outpatient services directly provided by VA, we excluded services that were not included in the Medicare physician fee schedule; ambulance fee schedule; clinical lab fee schedule; durable medical equipment, prosthetics/orthotics, and supplies fee schedule; anesthesiology fee schedule; or ambulatory surgical center fee schedule. We also excluded services that had a Medicare physician fee schedule status code indicating they were a deleted code, were a noncovered service, had restricted coverage, or were excluded from the physician fee schedule by regulation. For outpatient services provided through VA care in the community, we made the same exclusions as for outpatient services directly provided by VA and also excluded hospice care services and services with cancelled payments. 
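As a small illustration of the lifetime reserve day assumption just described, the sketch below caps covered inpatient days at 90 days per benefit period plus 25 percent of the 60 lifetime reserve days; the stay lengths, and the treatment of the cap as a single per-stay limit, are hypothetical simplifications.

    # Covered-day cap implied by the assumptions described above: 90 days
    # per benefit period plus 60 lifetime reserve days, of which 25 percent
    # (15 days) were assumed to remain.

    BENEFIT_PERIOD_DAYS = 90
    LIFETIME_RESERVE_DAYS = 60
    RESERVE_REMAINING = 0.25

    def covered_days(stay_days):
        cap = BENEFIT_PERIOD_DAYS + LIFETIME_RESERVE_DAYS * RESERVE_REMAINING
        return min(stay_days, cap)

    print(covered_days(120))  # 105.0: 90 regular days plus 15 reserve days
    print(covered_days(60))   # 60: under the cap, so fully covered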
We calculated total VA spending and CMS payments to MA plans for beneficiaries for months in which they were enrolled in both VA health care and an MA plan in fiscal year 2010 and evaluated how, if at all, VA spending on these services affects CMS payments to MA plans. To calculate VA’s estimated spending, we assigned a cost to all Medicare-covered services directly provided by VA, using VA’s average cost data; for services provided through VA care in the community, we used the amount that VA disbursed to the service provider. We calculated total MA spending for veterans enrolled in MA and VA using actual CMS payments to MA plans for our population in fiscal year 2010. To evaluate how VA spending on Medicare-covered services affects CMS payments to MA plans, we reviewed CMS documentation and interviewed CMS officials. To evaluate the extent to which CMS has the data it needs to determine an appropriate adjustment, we reviewed CMS documentation and interviewed CMS officials. As part of this effort, we also evaluated CMS’s methodology for a study it used as the basis of its decision not to adjust county per capita Medicare FFS spending rates for VA spending on Medicare-covered services. Our evaluation was based on a review of CMS documentation and an interview with CMS officials. To assess the reliability of the data we used in our analyses, we reviewed related documentation, interviewed knowledgeable officials from CMS and VA, and performed appropriate electronic data checks. Based on this assessment, we determined that the data were sufficiently reliable for our objectives. We conducted this performance audit from July 2013 to April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Gregory Giusto, Assistant Director; Christine Brudevold; Christine Davis; Jacquelyn N. Hamilton; Dan Lee; Elizabeth T. Morrison; Christina C. Serna; and Luis Serna made key contributions to this report.

Veterans enrolled in Medicare can also enroll in the VA health care system and may receive Medicare-covered services from either their Medicare source of coverage or VA. Payments to MA plans are based in part on Medicare FFS spending and may be lower than they otherwise would be if veterans enrolled in Medicare FFS receive some of their services from VA. Because this could result in payments that are too low for some MA plans, CMS is required to adjust payments to MA plans to account for VA spending, as appropriate. CMS determined an adjustment was needed for 2017, but not for 2010 through 2016. GAO was asked to examine how VA's provision of Medicare-covered services to Medicare beneficiaries affects payments to MA plans. GAO (1) estimated VA spending on Medicare-covered services and how VA spending affects payments to MA plans and (2) evaluated whether CMS has the data it needs to adjust payments to MA plans, as appropriate. GAO used CMS and VA data to develop an estimate of VA spending on Medicare-covered services. GAO reviewed CMS documentation and interviewed CMS and VA officials. 
In fiscal year 2010, the Department of Veterans Affairs (VA) health care system provided $2.4 billion in inpatient and outpatient services to the 833,684 veterans enrolled in Medicare Advantage (MA), a private plan alternative to Medicare fee-for-service (FFS). While the Centers for Medicare & Medicaid Services (CMS), an agency within the Department of Health and Human Services (HHS), generally pays Medicare FFS providers separately for each service provided, MA plans receive a monthly payment from CMS to provide all services covered under Medicare FFS. These monthly payments are based in part on a bidding target, known as a benchmark, and risk scores, which are used to adjust the payment amount to account for beneficiary demographic characteristics and health conditions. Both the benchmark and risk scores are calibrated based on Medicare FFS spending. Therefore, VA's provision of Medicare-covered services to veterans enrolled in Medicare FFS likely resulted in lower Medicare FFS spending and, in turn, lower overall payments to MA plans. However, the extent to which these payments reflect the expected utilization of services by the MA population remains uncertain. Specifically, payment amounts may still be too high or could even be too low, depending on the utilization of VA services by veterans enrolled in MA plans and veterans enrolled in Medicare FFS. If, for example, veterans enrolled in MA receive a greater proportion of their services from VA relative to veterans enrolled in Medicare FFS, then the benchmark may be too high. Conversely, payments may be too low if MA-enrolled veterans tend to receive fewer Medicare-covered services from VA relative to veterans enrolled in Medicare FFS. Assessing these possible differences would require data on the services veterans receive from MA. CMS began collecting these data in 2012 but, as of August 2015, had yet to take all the steps necessary to validate the accuracy of the data, as GAO has previously recommended. CMS also lacks data on VA diagnoses and utilization that could improve its methodology for determining whether an adjustment to the benchmark is needed to account for VA's provision of Medicare-covered services to veterans enrolled in Medicare FFS. Federal standards for internal control call for management to have the operational data it needs to meet agency goals, to effectively and efficiently use resources, and to help ensure compliance with laws and regulations. While CMS determined that no adjustment was necessary for 2010 through 2016 based on a 2009 study it performed, CMS's methodology did not account for services provided by and diagnoses made by VA, which can only be identified using VA's data. CMS officials updated the agency's study in 2016 using the same methodology, but with more recent data. CMS officials told GAO that they did not plan to incorporate VA utilization and diagnosis data into their analysis because (1) they do not currently have such data and (2) incorporating these data would introduce additional uncertainty into the analysis. However, if an adjustment is needed but not made or if an adjustment is too low due to limitations with CMS's methodology, it could result in some plans being paid too much and others too little. If CMS does revise its methodology and determines that an adjustment to the benchmark is necessary, it may need to make additional adjustments to MA plan payments, as discussed in this report. 
CMS should (1) assess the feasibility of revising its methodology for determining if an adjustment to the benchmark is needed by obtaining diagnoses and utilization data from VA and (2) make any additional adjustments to MA plan payments as appropriate. HHS disagreed with the first recommendation, but agreed with the second. GAO maintains that VA data may improve CMS's analysis.
Credit unions can be federally or state-chartered, which determines their primary regulator for safety and soundness and also their options for deposit insurance. Federally chartered credit unions are regulated by NCUA and must be federally insured by the National Credit Union Share Insurance Fund, which is administered by NCUA and provides up to $250,000 of insurance per depositor for each account ownership type. State-chartered credit unions are regulated by credit union supervisors in their respective state. These credit unions can be federally insured (and thus also supervised) by NCUA or, in some states, can choose to be privately insured. As of February 2017, ASI was the only company providing private primary deposit insurance. ASI provides up to $250,000 of insurance per account (rather than per depositor for each ownership type, as with NCUA). Deposit insurance covers deposit products such as checking and savings accounts, money market deposit accounts, and certificates of deposit. It does not cover other financial products, such as investments in stocks, bonds, or mutual funds. The vast majority of credit unions are federally insured. As seen in figure 1, in 2015, there were more than 6,000 federally insured credit unions with more than $1 trillion in insured deposits, and 125 privately insured credit unions with $13 billion in insured deposits. The mix of asset sizes is largely similar for federally and privately insured credit unions, and the majority of both have assets of less than $100 million. Between 2011 and 2015, the number of federally and privately insured credit unions declined by about 15 percent, due largely to mergers and liquidations. Some credit unions have chosen to convert between private and federal deposit insurance; appendix II contains information about the reasons that some credit unions switch insurers. ASI is a private, not-for-profit company, headquartered in Ohio. The company is governed by Ohio law and licensed by the Ohio Department of Insurance, and its primary regulators are the Ohio Departments of Insurance and Commerce, although regulators in the other eight states in which ASI operates also have an oversight role. ASI has provided deposit insurance since 1974, and the company is owned by the credit unions for which it provides deposit insurance. The company does not normally charge premiums, which are common in the insurance industry, but instead requires its credit unions to maintain a capital contribution with the company, adjusted annually, equal to 1.3 percent of the credit union's total insured deposits. In addition, ASI has the authority to charge special premium assessments under certain conditions with regulator approval, as it did in 2009–2013. ASI is overseen by a board of directors that is made up of six chief executives from the credit unions it insures, as well as one ASI management representative. According to ASI management, ASI's board of directors meets quarterly to review and monitor the company's financial statements, investment activities, risk management practices, information technology issues, and sales and marketing activities. An independent auditor annually audits and renders an opinion on ASI's consolidated financial statements prepared in accordance with generally accepted accounting principles.
Additionally, ASI retains an independent actuarial firm to conduct a capital adequacy study (at least every 3 years), annually review and help estimate loss reserves, and render an annual actuarial opinion on the adequacy of its loss reserves. Federal law requires that any depository institution that does not have federal deposit insurance clearly and conspicuously disclose that the institution is not federally insured. CFPB and the Federal Trade Commission (FTC) are the federal entities responsible for enforcing these requirements. In December 2011, CFPB issued an interim final rule restating the implementing regulation, which had been promulgated by FTC. This regulation, known as Regulation I, contains the disclosure requirements for credit unions that do not have federal deposit insurance. CFPB published a final rule in April 2016, which adopted its 2011 interim final rule without changes. Regulation I requires disclosure that an institution does not have federal deposit insurance (1) at locations where deposits are normally received (stations or windows), subject to enumerated exceptions; (2) on the institution's main Internet page (website); (3) in all advertising, subject to enumerated exceptions; and (4) in periodic statements and account records. Regulation I generally requires depository institutions to obtain a written acknowledgment from depositors that the institution does not have federal deposit insurance. The FAST Act amended the Federal Home Loan Bank Act to permit privately insured credit unions to apply for membership in a Federal Home Loan Bank (FHLBank) and, if approved, obtain the benefits of membership, including access to loans (known as advances). The FHLBank System is a government-sponsored enterprise composed of 11 regional banks. Federally insured credit unions have been allowed to apply for membership since 1989; other members of the FHLBank System include commercial banks, thrifts, and insurance companies. The FHLBank of Cincinnati approved ASI as a member in June 2011. The Federal Housing Finance Agency (FHFA) regulates the FHLBanks and issued a proposed rule in September 2016 to implement provisions of the FAST Act. By law, certain types of prospective FHLBank members must have at least 10 percent of their assets in residential mortgage loans to be eligible. As of December 31, 2015, FHFA estimated that 78 of the 125 privately insured credit unions met this eligibility criterion. As of December 31, 2016, the FHLBanks had approved 16 privately insured credit unions for membership. The Ohio Department of Insurance's most recent examination of ASI, which covered 2008–2012, did not identify any deficiencies in ASI's financial condition and determined that ASI's reserves for losses were consistent with Ohio's legal requirements and were adequate and appropriate. According to Ohio Department of Insurance staff, the department has not identified any issues and was not aware of any problems with ASI's loss reserves in at least the past 10 years. They noted that ASI is classified as a nonpriority insurer by the department, which means that the company is considered low-risk and does not require enhanced oversight. This determination was based on factors such as ASI's Insurance Regulatory Information System (IRIS) ratios and management competency. As a result of this classification, the department conducts full-scope examinations of ASI every 5 years, rather than on the annual or 3-year cycles used for insurers deemed riskier.
Ohio Department of Insurance staff told us that as part of their full-scope examinations every 5 years, the examination team reviews ASI's audited financial statements, analyzes estimates for loss reserves, and evaluates any risks to the company by reviewing legal issues, corporate governance, and management. In particular, the department's actuary performs an analysis of ASI's information to derive the department's own estimate for ASI's loss reserves. The department compares its derived estimate to ASI's held reserves, as well as to the range of estimates reported by ASI's third-party actuarial firm. Staff said that if the department were to identify significant differences in the estimates, it would request additional information from ASI to understand the reasons for the differences. Additionally, the department uses the IRIS ratios to aid in evaluating the adequacy of ASI's loss reserves. According to representatives from the National Association of Insurance Commissioners (NAIC), the procedures the department uses for assessing ASI's capital, including its loss reserves estimates, are consistent with NAIC's guidelines for conducting such assessments. Ohio Department of Insurance staff also told us that on an annual basis they review the statement of actuarial opinion on ASI's loss reserve estimates (as rendered by the third-party actuarial firm and discussed later in this report) and ASI's audited financial statements, and also compute financial ratios. According to the staff, as part of this review, analysts review ASI's capital position and monitor ASI's asset quality to ensure they have not deteriorated significantly in a given period. Under Ohio law, ASI is required to maintain at least $5 million in capital, and as of December 31, 2015, ASI's capital was roughly $219 million. In addition, Ohio Department of Insurance staff said they consider any risks that could affect ASI's financial condition. For example, they said they evaluate risk in terms of growth, underwriting, how the company invested its assets, and any legal concerns. The department also analyzes ASI's risk-based capital ratio; staff said that a company's risk-based capital ratio must be at least 200 percent of its calculated authorized control level risk-based capital. From 2009 through 2015, according to examination records from the Ohio Department of Insurance, ASI's risk-based capital ratio was well above this standard. Ohio Department of Insurance staff told us that ASI management is very transparent about disclosing risk that new credit unions may pose to the company. The staff said that they engage in quarterly discussions with ASI management about the company's quarterly financial statements and the credit unions for which ASI is considering providing deposit insurance coverage. Ohio Department of Insurance staff said that their concern with ASI, as with any insurer, generally has been the risk posed by macroeconomic issues. For example, they stated that fluctuations or declines in the economy could have a significant negative impact on a company like ASI. The staff further noted that while the frequency of claims for losses for ASI would be low, the severity of such losses could potentially be high, which could pose a risk to the company.
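To make the 200 percent risk-based capital standard concrete, here is a minimal sketch of the ratio check the staff described. It is illustrative only: in practice the authorized control level figure comes from NAIC's detailed risk-based capital formula, and the dollar amounts below are hypothetical.

    def rbc_ratio(total_adjusted_capital: float,
                  authorized_control_level_rbc: float) -> float:
        """Risk-based capital ratio, expressed as a percentage of the
        authorized control level risk-based capital."""
        return 100.0 * total_adjusted_capital / authorized_control_level_rbc

    # Hypothetical figures: $219 million in capital against an assumed
    # authorized control level of $20 million.
    ratio = rbc_ratio(219_000_000, 20_000_000)
    print(f"RBC ratio: {ratio:.0f}%")            # RBC ratio: 1095%
    print("meets 200% standard:", ratio >= 200)  # meets 200% standard: True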
According to ASI management, roughly 2 percent of its privately insured credit unions failed during or since the 2007–2009 financial crisis (as compared to roughly 2 percent of federally insured credit unions, according to NCUA). In 2009, ASI reported that almost all of its loss expense was related to just two of its insured credit unions, both in Nevada. One of these credit unions merged with another. The second troubled credit union had approximately $1 billion in total assets when it received assistance from ASI. Ohio Department of Insurance staff told us that during and just after the financial crisis, they monitored ASI more frequently and met monthly with ASI management to discuss the company's exposures and potential losses, but the department never determined there was a need to conduct an additional full-scope examination. In addition to the Ohio Department of Insurance's oversight, the Ohio Department of Commerce annually performs a risk-based safety and soundness examination of ASI in collaboration with the eight other state credit union supervisors that regulate privately insured credit unions. However, the Ohio Department of Commerce could not share with us the results of these examinations because, as interpreted by the department, it is prohibited by law from providing details about its examination findings to third parties other than those specified in the regulations. According to Ohio Department of Commerce staff, their annual safety and soundness examination of ASI focuses on risk areas similar to those reviewed during the examination of a credit union. As a part of this process, the Ohio Department of Commerce reviews ASI's audited financial statements, statement of actuarial opinion, and reports from ASI's internal system used to monitor insured credit unions. Ohio Department of Commerce staff told us that on a quarterly basis, examiners review quarterly financial statements and monitor any troubled credit unions that ASI insures. In addition to participating in the Ohio Department of Commerce's annual examination of ASI, the eight other state credit union supervisors told us that they monitor ASI's financial condition on an annual or quarterly basis. This process generally involves a review of ASI's annual audited or quarterly unaudited financial statements and its actuarial reports. None of the eight state supervisors with whom we spoke raised concerns about ASI's financial condition at the time of our review. However, one state credit union supervisor expressed concern that during volatile economic times, ASI might not be able to cover losses once it had exhausted its capital because ASI is not backed by the full faith and credit of the U.S. government and has no access to state guaranty funds. According to the National Conference of Insurance Guaranty Funds—whose funds provide protection for various property and casualty lines of insurance written by its member insurers—private deposit insurers are not covered. Additionally, representatives from the state credit union supervisors told us that none of the states in which ASI operates, including Ohio, had a state guaranty fund to assist in covering losses or credit union member deposits if ASI ran into financial difficulties. However, in the event of potential impairment of ASI's funding, Ohio law allows ASI to charge a special assessment, with regulator approval, against the credit unions it insures. Moreover, FHFA reviews information about ASI as part of its oversight of the FHLBanks.
As noted earlier, ASI is a member of the FHLBank of Cincinnati, and bank representatives told us that they monitor ASI's financial condition by reviewing ASI's annual audited financial statements, statutory quarterly financial filings, and reports on ASI's loss reserves. FHLBanks protect against credit risk on advances by requiring members to pledge collateral. Representatives from the FHLBank of Cincinnati told us ASI, like all FHLBank members (including privately insured credit unions), must pledge collateral to receive advances. According to FHFA staff, while FHFA may review ASI information as part of its supervision of FHLBanks, FHFA has no supervisory authority over ASI and no plans to independently assess the company's financial condition. The FAST Act does require ASI to provide FHFA a copy of its annual audit. The audit must be conducted by an independent auditor and must include an assessment by the auditor that ASI follows generally accepted accounting principles and has set aside sufficient reserves for losses. This FAST Act requirement allows FHFA to review the independent auditor's opinion to confirm that ASI has met these requirements. FHFA staff told us FHFA planned to use the ASI audited financial statements to prepare for its next annual examination of the FHLBank of Cincinnati. ASI has several processes in place to mitigate risk and help prevent and control losses to the company. ASI management told us that applicant credit unions undergo an insurability assessment that includes a review of the credit union's financial data, corporate governance, and CAMEL rating, and an evaluation of its operating policies and procedures. Additionally, the company continuously monitors the financial condition of the credit unions the company insures. ASI management said that each quarter they compare their credit unions with federally insured credit unions in terms of capital adequacy, earnings, and liquidity. The company conducts an examination of about 70 percent of its credit unions annually, and examines the rest on a 2–3 year cycle. ASI management noted that they conduct most of their examinations jointly with state credit union supervisors. For credit unions with at least $100 million in assets, ASI has a process of enhanced monitoring, which includes quarterly reviews, as well as on-site reviews annually or semiannually. As needed, ASI can issue a corrective action, such as advancing funds to an insured credit union on a short-term basis to aid in the credit union's liquidity needs. ASI retains an independent actuarial firm to conduct analyses for the company. The actuarial firm conducts (1) a study of the adequacy of ASI's capital every 3 years, which looks at the company as a whole and its ability to pay present and future claims for losses experienced by the credit unions it insures under different economic scenarios, and (2) an annual study of ASI's loss experience to help estimate loss reserves and render an annual statement of actuarial opinion on the adequacy of its loss reserves. The four most recent capital adequacy studies, which covered calendar years 2009–2015, indicated that ASI's ability to pay claims was strong. The 2010 capital adequacy study—conducted near the end of the financial crisis—indicated ASI's ability to pay claims was strong, but it also reported that ASI's ability to pay claims had decreased.
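The capital adequacy studies described above rest on models of whether capital can absorb simulated losses. Purely to illustrate the general idea of such a calculation, the following is a toy Monte Carlo sketch with invented failure and severity assumptions; it is not the actuarial firm's model.

    import random

    def survival_probability(capital: float, exposures: list[float],
                             fail_prob: float, severity: float,
                             trials: int = 20_000) -> float:
        """Toy estimate of the probability that capital covers one year of
        simulated failures: each insured institution independently fails
        with probability fail_prob, costing severity times its exposure."""
        random.seed(1)  # deterministic for illustration
        survived = 0
        for _ in range(trials):
            losses = sum(severity * e for e in exposures
                         if random.random() < fail_prob)
            survived += losses <= capital
        return survived / trials

    # Invented portfolio of insured-deposit exposures, in $ millions:
    # a few large institutions and many small ones.
    exposures = [300.0, 250.0, 120.0] + [30.0] * 100
    print(survival_probability(capital=218.0, exposures=exposures,
                               fail_prob=0.01, severity=0.25))

A real capital adequacy model would instead condition failure rates and severities on economic scenarios (expansion, recession, depression) and on institution-specific risk, as the studies discussed next describe.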
For the most recent capital adequacy study, the actuarial firm's analysis found that ASI's ability to pay claims was strong under each of three economic scenarios (expansion, recession, and depression). For example, the actuarial firm estimated the probabilities that ASI could withstand a 1-year recession and a 5-year recession at 99.7 percent and 97.3 percent, respectively. According to staff from the actuarial firm, the capital adequacy study serves as a financial model to assist ASI management in its decision making. They noted that the quality of ASI's management is as important as the study's findings because even a company with adequate capital could fail as a result of mismanagement or fraud, which cannot be modeled. According to staff from the actuarial firm, ASI could face difficulty paying claims for losses if one or more of its largest credit unions were to suffer severe losses. The firm reported that as of December 31, 2015, ASI had $218 million in assets (cash and investments) readily available to pay claims, but as of year-end 2015, 14 of its credit unions each had more than that amount in total insured deposits. However, the actuarial staff told us they factored this risk into their analysis and that the larger the credit union (by asset size), the smaller the probability of a severe loss (expressed as a percentage of the credit union's total assets). Additionally, the actuarial firm analyzed the capital adequacy of ASI's wholly owned subsidiary, Excess Share Insurance Corporation, which can affect ASI's financial condition because ASI offers it various funding sources and a guarantee. The actuarial firm's 2016 study showed the subsidiary's ability to pay claims under the three economic scenarios was strong. ASI management told us they believe the risk posed by the subsidiary to be small and that multiple adverse events would have to occur simultaneously for it to impair ASI's financial condition. To transfer some of this risk, the subsidiary carries a reinsurance policy for its excess insurance line of business. The actuarial studies also noted that ASI has other sources of funding to help pay claims, including special assessments, lines of credit, and increases to the capital contribution rate it charges. For example, during and after the 2007–2009 financial crisis, ASI (1) charged its insured credit unions a special premium assessment each year in 2009–2013; (2) borrowed $22 million from its line of credit to pay initial claims in 2009 (which, according to ASI management, was repaid in full within 6 months); and (3) increased the credit unions' capital contribution rate in 2010, from a rate ranging between 1 percent and 1.3 percent to a rate of 1.3 percent of total insured deposits, which ASI management told us enhanced the company's capital adequacy. Each of the actuarial firm's annual loss reserve studies conducted during 2011–2015 found that ASI's reserves for losses were reasonable and consistent with amounts computed based on actuarial standards of practice, and met the requirements of Ohio insurance laws. ASI maintains a reserve for losses to cover its estimated unpaid liability for reported and unreported loss claims. To assist management with its determination of loss reserves, the actuarial firm annually analyzes ASI's loss reserve experience and reviews the assumptions ASI uses to determine its reserves for losses.
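The capital contribution arithmetic mentioned above is straightforward. A minimal sketch, assuming the contribution is simply 1.3 percent of total insured deposits trued up once a year; the deposit figures are hypothetical, and ASI's actual adjustment process may differ.

    CONTRIBUTION_RATE = 0.013  # 1.3 percent of total insured deposits

    def annual_contribution_adjustment(insured_deposits: float,
                                       contribution_on_deposit: float) -> float:
        """Amount a credit union must add to (or may reclaim from) its
        capital contribution at the annual true-up."""
        required = insured_deposits * CONTRIBUTION_RATE
        return required - contribution_on_deposit

    # Hypothetical credit union whose insured deposits grew from $80
    # million to $90 million: the required contribution rises from
    # $1.04 million to $1.17 million, a $130,000 true-up.
    print(annual_contribution_adjustment(90_000_000, 80_000_000 * CONTRIBUTION_RATE))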
The reserve studies identified some potential risks—for example, the possibility that some of ASI's credit unions could cancel their deposit insurance coverage and withdraw their capital contributions, which would reduce ASI's capital (but also reduce its exposure to potential losses). In its 2016 loss reserve study, the actuarial firm stated that it did not believe that significant risks and uncertainties were present that could result in material adverse deviation of ASI's loss reserves. The firm based its conclusion on the presence of certain favorable factors that offset the risks and uncertainties identified in previous years. These factors included the low ratio of the company's held reserves to its capital, and the fact that ASI's held reserves were at the high end of the actuarially determined range of reserves estimated to be reasonable. However, the actuarial firm staff stated that the absence of such risks and uncertainties did not imply that factors could not be identified in the future that could have a significant influence on ASI's reserves. ASI's risk profile depends in large part on the financial condition of the privately insured credit unions that it insures. We reviewed the CAMEL ratings (which regulators use to rate a credit union's performance and risk profile) of privately insured credit unions and compared them to those of federally insured credit unions. We found that, in the aggregate, privately and federally insured credit unions had similar CAMEL ratings during 2006–2015. For example, as seen in figure 2, roughly the same percentages of privately and federally insured credit unions were rated satisfactory (CAMEL ratings of 1 or 2). For both groups, the percentage of troubled credit unions (CAMEL ratings of 4 or 5) peaked in 2011 and then declined. These similarities remained roughly the same (for both satisfactory and troubled credit unions) when we reviewed the percentage of assets in credit unions by CAMEL rating rather than the percentage of individual credit unions. For further review, we also selected one indicator in each of five categories––capital adequacy, asset quality, loss coverage, profitability, and liquidity––that regulators commonly use to assess the financial health of credit unions. The median values for all of these indicators were similar for privately and federally insured credit unions for 2011–2015. The sizes of privately and federally insured credit unions also were roughly similar. In 2015, the majority of insured credit unions had less than $100 million in total assets (see table 1). For privately and federally insured credit unions, respectively, the median total assets were roughly $34 million and $27 million, and the median numbers of members were roughly 4,300 and 3,200. However, our analysis shows that privately insured credit unions have higher geographic and deposit concentration than federally insured credit unions, which can present risks. Specifically, privately insured credit unions are much less geographically diverse than federally insured credit unions because they operate in only nine states. Forty-two percent of ASI-insured credit unions are in Ohio, and an additional 30 percent are in Illinois (18 percent) and Indiana (12 percent). This geographic concentration may create risks for ASI because economic downturns are sometimes concentrated in particular regions of the country.
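The concentration measures discussed here and in the following paragraph reduce to simple share calculations. A minimal sketch, using invented deposit figures:

    def top_n_share(insured_deposits: list[float], n: int) -> float:
        """Share of total insured deposits held by the n largest institutions."""
        ranked = sorted(insured_deposits, reverse=True)
        return sum(ranked[:n]) / sum(ranked)

    # Invented insured-deposit totals, in $ millions, for an insurer's
    # credit unions: a few large institutions and many small ones.
    deposits = [1200.0, 800.0, 300.0, 250.0, 200.0, 150.0,
                120.0, 100.0, 90.0, 80.0] + [25.0] * 100
    print(f"top 2 share:  {top_n_share(deposits, 2):.0%}")   # 35%
    print(f"top 10 share: {top_n_share(deposits, 10):.0%}")  # 57%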
NCUA staff noted that previous private deposit insurers have failed mostly as a result of severe regional economic shocks (or in some cases a single major fraud). In addition, the total insured deposits of privately insured credit unions are concentrated in a much smaller number of credit unions than those of federally insured credit unions. In 2015, ASI's 2 largest credit unions (by total assets) represented 15 percent of its total insured deposits, and its 10 largest represented 54 percent of its insured deposits. In comparison, NCUA's 10 largest insured credit unions (by total assets) made up 15 percent of total insured deposits in 2015. This concentration of insured deposits may be viewed as a risk to ASI because, as discussed previously, ASI could face difficulty paying claims for losses if one or more of its largest credit unions were to suffer severe losses. Privately insured credit unions we reviewed largely complied with requirements to disclose that they are not federally insured. However, a lack of specificity in Regulation I provisions that relate to disclosure location (drive-through windows), format (signage dimensions and font size), and advertising (printed materials) may have contributed to some of the variation we saw in compliance with disclosure rules. Disclosure signage at teller and drive-through windows. The 47 privately insured credit unions we visited were largely in compliance with CFPB's requirement for disclosures at each station or window where deposits are normally received. For example, 45 of the 47 credit unions displayed a disclosure at teller windows. Of the two that did not display signs at teller windows (both of which were small employer-based credit unions), one had a disclosure on the front door and the other had a disclosure on a bulletin board outside the credit union, but still within the employer's building. However, 7 of the 17 credit unions we visited that had drive-through windows did not have disclosures at the window (see fig. 3). While Regulation I states disclosures are needed at each station or window where deposits are normally received, it does not specifically cite drive-through windows. In contrast, the regulation specifically excludes certain other places of deposit from requiring the disclosure. For example, it states that disclosure is not needed at automated teller machines or point-of-sale terminals. CFPB staff told us that, in their view, a plain reading of Regulation I would include a drive-through window as a "station or window where deposits are normally received," and thus require disclosure. We also observed that the dimensions and font sizes of the disclosure signage varied among credit unions, with some having signage too small to be easily read or not placed conspicuously. At 28 of the 47 credit unions we visited, the signs measured smaller than 3 by 7 inches. The sign we commonly observed measured 2¼ inches by 4 inches, which is larger than a business card but smaller than an index card (see fig. 4). Additionally, in more than half the credit unions we visited, we found the font size of the disclosures was too small to be easily read when standing at the teller window. Further, at 7 of the 47 credit unions, disclosures were placed where they were not easily noticed. For example, one was placed on a windowsill across the room, another at a teller station covered with other materials, and another at the bottom of an 8 by 10 inch sign containing a large amount of other information about the credit union's policies.
CFPB does not provide official signage to privately insured credit unions, and Regulation I does not specify signage dimensions or font size requirements. Instead, Regulation I states the disclosures must be "clear and conspicuous and presented in a simple and easy-to-understand format, type size, and manner" but does not provide definitions, parameters, or illustrative examples of what would constitute simple and easy to understand. By comparison, NCUA provides official signs to federally insured credit unions to display at each station or window where insured account funds or deposits are normally received. NCUA's regulation notes credit unions should not alter the font size of the official sign when used for this purpose. The sign itself, which measures 3 by 7 inches, can be ordered and downloaded from NCUA's website. Disclosures on websites. We also reviewed 102 privately insured credit union websites and found that almost all of these websites complied with CFPB's requirement to disclose on their main Internet page that the institution is not federally insured. Three credit unions did not have the disclosure on their main Internet page (each of the three had the disclosure on a different page of its website). However, on many websites (28 of 99), the disclosures were not easily noticed or read. For example, the small amount of space the disclosure occupied, or its placement next to or between colorful or larger graphics, made the disclosure difficult to notice. Additionally, we observed that more than half the websites (60 of 99) used a font size for the disclosure that was smaller than that used for the other text on the same webpage (see fig. 5). CFPB's Regulation I states that the website disclosures, like all other required advertising and premises disclosures, should be "clear and conspicuous and presented in a simple and easy to understand format, type size, and manner," but CFPB does not define these terms or specify font size requirements for websites. In comparison, NCUA's regulation for federally insured credit unions specifies that the disclosure must be in a size and print that is clearly legible and no smaller than the smallest font size used elsewhere. Disclosures in advertising (printed materials). On our visits to privately insured credit unions, we obtained printed materials that could be considered advertisements (such as brochures, promotional flyers, and newsletters); 8 of the 36 credit unions from which we obtained samples of printed materials had at least one item that did not contain a disclosure. Regulation I states "all advertisements" except those specifically enumerated must disclose a lack of federal deposit insurance, but the regulation does not define what constitutes an advertisement. CFPB staff told us the agency does not have any guidance or commentary on what constitutes "all advertisements." In comparison, NCUA's regulation for federally insured credit unions defines advertising and also provides examples. In NCUA's regulation, an advertisement is "a commercial message, in any medium, that is designed to attract public attention or patronage to a product or business." Furthermore, NCUA's regulation specifies that advertising includes print, electronic, or broadcast media, displays and signs, stationery, and other promotional material. Disclosures in periodic statements, account records, and signature cards.
Representatives of all nine state credit union supervisors with whom we spoke told us that privately insured credit unions were generally compliant with the requirements to (1) disclose on periodic statements and certain other account records a lack of federal deposit insurance, and (2) obtain written acknowledgment from depositors on this lack of federal insurance, as is generally required. The state credit union supervisors said they checked a sample of periodic statements, account records, and signature cards for new accounts as part of their routine examinations of privately insured credit unions. Reviews of compliance. Overall, compliance levels with disclosure requirements have improved since our 2003 review of privately insured credit unions, which included an assessment of their compliance with federal disclosure rules. In 2003, we found that 36 of 57 credit unions had the required disclosures on premises. Similarly, in 2003, 39 of 78 websites and 93 of 227 printed materials we reviewed had the required disclosures. CFPB has not had any findings, observations, or evaluations regarding privately insured credit unions’ disclosures. CFPB staff told us the agency has not received any complaints related to private deposit insurance. CFPB staff said they have reviewed privately insured credit unions’ websites at a very informal level and the websites seemed to be complying with Regulation I. As previously noted, CFPB shares enforcement authority for Regulation I with FTC. CFPB staff told us that state credit union supervisors and attorneys general also have the authority to enforce Regulation I, as necessary. The state credit union supervisors in the nine states with privately insured credit unions similarly told us that compliance with disclosure requirements has not been a problem in recent years. They said that their routine examinations of state-chartered credit unions check for disclosures on premises, on websites, in advertising materials, and, as noted earlier, by reviewing selected periodic statements, account records, and signature cards. They said that if examiners observe noncompliance with disclosure requirements, they cite it as an examination finding and expect the credit union to promptly correct the issue and display the proper signage or disclosure. While we generally found that compliance levels were high, Regulation I may be interpreted and enforced differently by different credit unions and state regulators. Without clarity on whether or not drive-through windows are required to have disclosures, some credit unions may continue to not display them at these windows. Additionally, without more clarity or guidance around dimensions and font sizes for disclosures, the disclosures may be too small to be easily read or noticed. Further, there may continue to be confusion about what constitutes “advertising” and whether certain printed materials are required to include disclosures. As a result, the state credit union supervisors and the credit unions themselves may face challenges consistently monitoring and complying with Regulation I. In turn, credit union members may not always be consistently and adequately informed that deposits are not federally insured. Deposit insurance helps protect depositors from losing their money in the event a financial institution fails. 
By law, any institution that does not have federal deposit insurance must clearly and conspicuously inform consumers that the institution is not federally insured, and the privately insured credit unions we reviewed largely complied with disclosure requirements. However, the instances we observed of disclosures that were missing, too small to be easily read, or inconspicuously placed suggest that the lack of specificity in some provisions of Regulation I has led to inconsistencies in interpretation. By clarifying Regulation I, CFPB would facilitate monitoring by state credit union supervisors, ease credit union compliance, and better ensure that consumers are informed that their deposits are not federally insured. We are making three recommendations to help state credit union supervisors and privately insured credit unions better interpret Regulation I and inform consumers when an institution is not federally insured. CFPB should issue guidance to (1) clarify whether drive-through windows require disclosures; (2) describe what constitutes clear and conspicuous disclosure, including minimum signage dimensions and font size for disclosures; and (3) explain and provide examples of which communications are advertising. We provided CFPB, FHFA, and NCUA with a draft of this report for review and comment. In its written comments, reproduced in appendix III, CFPB agreed with our recommendations. CFPB noted that the agency recognizes that providing guidance clarifying Regulation I may improve privately insured credit unions' understanding of and compliance with the federal disclosure requirements. Additionally, CFPB stated that the agency intends to explore options that will most effectively provide guidance regarding Regulation I, such as issuing a bulletin that could be published in the Federal Register or posted on the agency's website. CFPB, FHFA, and NCUA also provided technical comments, which we incorporated as appropriate. We also provided selected relevant portions of the draft to ASI, its third-party actuarial firm, the Ohio Departments of Insurance and Commerce, and the other eight state credit union supervisory authorities for their technical review, and we incorporated their comments as appropriate. We are sending copies of this report to the appropriate congressional committees, agencies, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report (1) discusses regulatory and other assessments of American Share Insurance (ASI), the sole private deposit insurer, and (2) examines the level of compliance with disclosure requirements by credit unions that do not have federal deposit insurance. The Fixing America's Surface Transportation Act (FAST Act) includes a provision for us to review private deposit insurers and privately insured credit unions' disclosure compliance in the United States. Our scope includes the nine states that permit credit unions to use private deposit insurance and have credit unions that have chosen to do so: Alabama, California, Idaho, Illinois, Indiana, Maryland, Nevada, Ohio, and Texas.
Some credit unions in Puerto Rico are insured by a quasi-governmental entity––the Public Corporation for the Supervision and Insurance of Cooperatives––and these credit unions are not included in the scope of this report. In addition, this report does not compare ASI's reserves and capital adequacy to those of the National Credit Union Administration's (NCUA) National Credit Union Share Insurance Fund because the two entities have different legal requirements and risk profiles, use different models to help estimate reserves, and use different assumptions and methods to help determine capital adequacy. For this reason, we limited the scope of the report to regulatory and other assessments of ASI, rather than an analysis or comparison of the two entities. To gather information about ASI, we identified the company's regulators and legal requirements and reviewed laws and regulations pertaining to the company. We reviewed Ohio law and implementing regulations, which establish the powers and authorities governing credit union guaranty corporations, such as ASI. We interviewed the company's primary regulators (the Ohio Departments of Insurance and Commerce), as well as representatives from the state credit union supervisors in the other eight states in which ASI operates. To determine how the Ohio Department of Insurance assessed ASI's financial condition, we reviewed documentation such as the most recent examination report covering calendar years 2008–2012 and financial analyses of ASI covering calendar years 2013–2015. We also compared the department's process with the guidelines recommended by the National Association of Insurance Commissioners (NAIC) and confirmed with NAIC staff that the department's procedures were consistent with NAIC guidelines. Finally, we interviewed staff at the Federal Housing Finance Agency (FHFA), which oversees the Federal Home Loan Bank (FHLBank) System, and the FHLBank of Cincinnati (of which ASI is a member) to determine their oversight role with regard to ASI and to obtain information about the number and status of privately insured credit unions applying for membership to the FHLBanks. We reviewed ASI's annual reports and audited financial statements for 2008–2015 and other documentation, such as the company's investment policy, its examination and insurance policy, and its application form and process for credit unions seeking private deposit insurance. We interviewed ASI management about the company's history, governance structure, regulatory and financial reporting requirements, underwriting policies, and capital and reserves requirements. We also reviewed reports from the third-party actuarial firm that ASI retains, including the firm's analyses of ASI's capital adequacy for 2009–2015 (conducted every 3 years), annual analyses of ASI's unpaid loss and loss adjustment expense for 2011–2015, and annual statements of actuarial opinion for 2011–2015, as well as the firm's analysis of capital adequacy for ASI's wholly owned subsidiary for 2015. We interviewed the actuarial firm's staff about their analyses related to these studies and obtained information about the assumptions and methods used. For the most recent capital adequacy study, our internal actuarial staff reviewed the actuarial firm's modeling approach and certain key methods and assumptions, including those related to the three economic scenarios used in the study.
Additionally, we inquired about the firm's internal peer review process and steps taken to ensure the completeness and accuracy of the models used to assess ASI's capital adequacy and reserves for losses. We did not conduct our own independent assessment of ASI's capital adequacy and reserves for losses and therefore cannot render our own actuarial determination or opinion. To review information about the credit unions that ASI insures, we reviewed CAMEL ratings for privately and federally insured credit unions for 2006–2015. We also used financial data from SNL Financial (2011–2015) to analyze selected financial indicators for privately and federally insured credit unions. We selected these financial indicators for further review because regulators had previously identified them as metrics for assessing a credit union's financial health and they had been used in prior reports examining credit unions. For our previous work, we had obtained information from NCUA on the indicators it typically uses to assess credit unions' financial health, and we selected one indicator in each of the following five categories: capital adequacy, asset quality, loss coverage, profitability, and liquidity. We analyzed data from SNL Financial in September 2016 for year-end 2011–2015. We presented medians rather than means because means can be skewed by extremely high or low values. We assessed the reliability of the CAMEL ratings for privately and federally insured credit unions, as well as the SNL Financial data for the five financial indicators, by requesting information about the underlying data, how they were collected, and related data reliability testing. We found the data to be sufficiently reliable for the purposes of our review. To determine compliance with disclosure requirements, we identified disclosure requirements for credit unions that do not have federal deposit insurance by reviewing the Federal Deposit Insurance Act disclosure provisions and the Bureau of Consumer Financial Protection's (CFPB) corresponding Regulation I. To review on-site disclosure requirements (at stations or windows), we selected a nonprobability sample of 53 privately insured credit unions and conducted in-person, unannounced site visits at 47 of them (visits to 41 unique credit unions, plus 6 visits to additional locations of those credit unions). We were unable to enter the other six credit unions, usually because they were closed when we attempted our visit. The sample was selected to ensure diversity across a number of criteria related to possible differences in compliance. We selected credit unions for their geographic diversity (credit unions in different regions of the country and different states) and to achieve a mix of credit unions of different asset sizes, main retail and branch locations, and urban and nonurban areas. We also took proximity to GAO offices into account as a secondary criterion. Because there may be variation in how state regulators and examiners check for compliance with disclosure requirements, we conducted site visits in four different states. We selected these states to obtain a mix of states in terms of numbers of privately insured credit unions––two with many (Ohio and Illinois), one with a moderate number (California), and one with few (Maryland)––and for geographic diversity. For the geographic distribution (by number and percentage) of all the privately insured credit unions across the nine states that have them, see table 2.
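Regarding the choice, noted above, to present medians rather than means, a short illustration of why a single large institution can distort the mean; the asset figures are invented.

    from statistics import mean, median

    # Invented credit union total assets, in $ millions: most institutions
    # are small, one is very large.
    assets = [12, 18, 25, 34, 41, 55, 72, 950]

    print(f"mean:   {mean(assets):.1f}")    # 150.9 -- pulled up by the outlier
    print(f"median: {median(assets):.1f}")  # 37.5  -- closer to the typical institution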
We selected privately insured credit unions of varying sizes, as defined by total assets, choosing credit unions for site visits that were roughly representative of the overall population of credit unions. For instance, almost 75 percent of privately insured credit unions had total assets of less than $100 million, and therefore the majority of credit unions we selected for our site visits did as well. We conducted site visits at both main retail and branch locations. We selected locations to include both urban and nonurban areas. We roughly defined "urban" as a downtown area where consumers are more likely to walk to the credit union and "nonurban" as an area where they are more likely to drive. Credit unions in nonurban areas were more likely to have drive-through teller windows. Staff conducted site visits between June and August 2016. On each visit, staff followed a protocol to help ensure consistency and completed a data collection instrument to record their observations. The protocol included checking for signs at teller and drive-through windows and observing the sizes and clarity of signs, among other items. When possible, we obtained photographic evidence to document examples of disclosure signage. Two GAO analysts recorded their observations at each site, and any discrepancies were reconciled through discussion and photographic evidence where available. We aggregated the site visit data and present summary-level information in this report. To determine compliance with disclosure requirements for the credit unions' main Internet page, we reviewed the websites of all 102 privately insured credit unions that had a website during the time of our review. Analysts followed a protocol to help ensure consistency of observations about the clarity, placement, and font size of the disclosures observed and completed a data collection instrument for each credit union. A second analyst independently reviewed each credit union's website to verify the accuracy of information collected by the first analyst. Any discrepancies between the two analysts were identified, discussed, and resolved by referring to the source websites. To determine compliance with disclosure requirements for advertising (printed materials), we obtained samples of printed materials (such as brochures, promotional flyers, and newsletters) from 36 of the 41 unique credit unions we visited where such printed materials were readily available. We assessed whether these printed materials had proper disclosures, taking into account the specified exclusions regarding advertising noted in Regulation I. To determine compliance with the requirements to (1) provide disclosures in periodic (monthly) statements and account records and (2) obtain written acknowledgment from depositors that the institution does not have federal deposit insurance, we relied on testimonial evidence from the nine state credit union supervisors we interviewed because we determined that their compliance review in this area was adequate for our purposes—for example, each state reviews a sample of new accounts as part of the routine examination each credit union receives. Because CFPB is the federal entity responsible for issuing disclosure regulations, we interviewed CFPB staff about the agency's oversight and findings related to compliance with these disclosure requirements. We compared CFPB's disclosure requirements for credit unions that do not have federal insurance with those of NCUA for federally insured credit unions.
Because privately insured credit unions are state-chartered, we interviewed the respective state credit union supervisors in each of the nine states about their annual examinations of privately insured credit unions, including their review of compliance with requirements to disclose a lack of federal insurance. We also interviewed representatives from the Credit Union National Association, the National Association of State Credit Union Supervisors, and the Ohio Credit Union League to ask whether they were aware of any issues or concerns related to compliance with disclosure requirements for privately insured credit unions. To determine reasons why credit unions chose private or federal deposit insurance and to obtain views on the benefits and risks of each, we interviewed representatives from 10 credit unions that had switched to or from private insurance in recent years. We identified these credit unions by reviewing NCUA's Insurance Activity Reports (from January 2008 to July 2016), which identify deposit insurance conversions, and then confirmed these conversions with NCUA and ASI. We interviewed representatives from five of the eight credit unions that converted from federal insurance provided by NCUA to private deposit insurance provided by ASI within the past 5 years. Additionally, we interviewed representatives from the five credit unions that most recently converted from private (ASI) to federal (NCUA) deposit insurance; these conversions took place in 2008–2009. We chose these selection criteria because representatives of recently converted credit unions should be able to explain why their credit union chose to switch between federal and private deposit insurance and should have the most up-to-date information. We also interviewed representatives from credit union trade associations, NCUA, and ASI about the reasons credit unions choose private versus federal deposit insurance. We conducted this performance audit from February 2016 to March 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Credit unions in the United States can be federally or state-chartered, which determines their primary regulator for safety and soundness and also their options for deposit insurance. Federally chartered credit unions are regulated by the National Credit Union Administration (NCUA) and must be federally insured by NCUA's National Credit Union Share Insurance Fund, which provides up to $250,000 of insurance per depositor for each account ownership type. State-chartered credit unions are regulated by credit union supervisors in their respective state and also can be federally insured by NCUA or, in some states, choose a private insurer. American Share Insurance (ASI) is the sole provider of private deposit insurance to credit unions and provides coverage up to $250,000 per account. Credit unions sometimes change deposit insurers—for example, converting between federal and private deposit insurance.
According to information provided by NCUA, eight credit unions converted from federal to private deposit insurance in 2012–2016, and five converted from private to federal deposit insurance in 2008–2009 (the most recent such conversions). Representatives of some credit unions that converted from federal to private deposit insurance cited the following reasons:

Greater coverage for members. ASI insures $250,000 per account, whereas NCUA insures $250,000 per depositor for each account ownership type. Thus, ASI provides more coverage for members with more than $250,000 in a particular deposit type because they can structure their deposits into multiple accounts of $250,000 or less.

Reduced federal oversight. Representatives of some state-chartered credit unions described state regulation and oversight, including the examination process, as less burdensome than federal regulation and oversight.

Cost savings. Credit union representatives said that private deposit insurance was less expensive than federal deposit insurance following the 2007–2009 financial crisis because ASI's special premium assessments were lower than the premiums of the National Credit Union Share Insurance Fund, and state regulatory fees are less than those of NCUA.

Comparable business models. A credit union representative noted that the credit union business model aligns well with ASI—both are not-for-profit organizations that exist to serve their members, and member credit unions sit on ASI's board of directors.

Representatives of some credit unions that converted from private to federal deposit insurance cited the following reasons:

Full faith and credit of U.S. government. Deposits are backed by the full faith and credit of the U.S. government, which provides depositors with greater confidence and security.

Concern about private insurer during financial crisis. A representative of one credit union told us the main reason it switched to NCUA insurance was concern that ASI might not survive the 2007–2009 financial crisis.

Access to additional funding source. One credit union representative told us the credit union switched to federal deposit insurance in 2008 so that it could join a Federal Home Loan Bank, which provided access to an additional funding source.

In addition to the contact named above, Jason Bromberg (Assistant Director), Beth Faraguna (Analyst in Charge), Caitlin Cusati, Paul Foderaro, Alice Hur, Risto Laboski, Yola Lewis, Ned Malone, Scott McNulty, Marc Molino, Barbara Roesmann, Jessica Sandler, Frank Todisco, and Shana Wallace made key contributions to the report.

The Federal Deposit Insurance Act requires privately insured credit unions to disclose to consumers that they do not have federal deposit insurance, and CFPB has implemented regulations on these requirements. The Fixing America's Surface Transportation Act includes a provision for GAO to review private deposit insurers and privately insured credit unions' compliance with disclosures. This report (1) discusses regulatory and other assessments of ASI, the sole private insurer, and (2) examines the level of compliance with disclosure requirements for privately insured credit unions. GAO reviewed documentation from and interviewed federal and state regulators, ASI management, and ASI's third-party actuarial firm. GAO reviewed certain key methods and assumptions used by the actuarial firm. GAO also analyzed regulatory ratings (2006–2015) and selected financial data (2011–2015) on privately and federally insured credit unions.
In addition, GAO reviewed 102 websites for all privately insured credit unions that had websites, conducted unannounced site visits at 47 credit unions (selected based largely on asset size and geography), and reviewed printed materials from 36 of the credit unions it visited that had materials readily available. About 2 percent of credit unions (125) have private deposit insurance, which is provided by one company—American Share Insurance (ASI). Regulatory and other assessments have suggested that ASI's reserves have been adequate and that the company has had a strong ability to cover present and future losses for the credit unions it insures. The most recent examination of ASI by its primary regulator (the Ohio Department of Insurance) determined that ASI's reserves for losses were adequate and appropriate and consistent with legal requirements. An independent actuarial firm hired by ASI reported that ASI had a strong ability to cover losses under different economic scenarios. The Ohio regulator and the actuarial firm both noted risk factors that could affect ASI's financial condition, including changes in macroeconomic conditions or major losses by the largest credit unions it insures. In the event of financial difficulties, Ohio law allows ASI to tap into additional sources of funding, including lines of credit and special assessments from its insured credit unions. Privately insured credit unions largely complied with the Bureau of Consumer Financial Protection (CFPB) requirements to disclose that they do not have federal deposit insurance. For instance, 45 of 47 credit unions GAO visited displayed required disclosures at teller windows (see fig.), and 99 of 102 websites GAO reviewed included the disclosure on their main Internet page, as required. However, 7 of 17 credit unions with drive-through windows that GAO visited did not have disclosure signs at these windows. Additionally, printed materials (such as brochures and flyers) GAO reviewed from 8 of 36 credit unions did not include disclosures. The regulations require all advertising to include a disclosure but do not define what constitutes advertising. In some cases, disclosure signs or their text were too small to be easily read or were not placed conspicuously. CFPB's regulations on disclosures for privately insured credit unions do not specify signage dimensions or font size. Without clear disclosure requirements, state credit union supervisors and credit unions may not interpret the disclosure requirements consistently, and some consumers may not be informed that their deposits are not federally insured. GAO recommends that CFPB issue guidance for privately insured credit unions to clarify whether drive-through windows require disclosure, describe what constitutes clear and conspicuous disclosure (including minimum signage dimensions and font size), and explain and provide examples of which communications are advertising. CFPB agreed with these recommendations.
PPACA includes provisions that are designed to make health insurance more accessible and affordable for millions of Americans. These include provisions for establishing health insurance exchanges in each state, and enhancing processes for the annual review of health insurance premiums. To facilitate these activities, PPACA created new responsibilities for states and the federal government, and provided financial resources to states in the form of federal grant funding. PPACA mandated the establishment of exchanges—new health insurance marketplaces in each state through which qualified individuals and small businesses can compare, select, and purchase standardized health coverage from among participating issuers of health coverage. These exchanges must begin enrolling consumers by October 1, 2013, into coverage that begins January 1, 2014. Core exchange functions include determining eligibility and enrolling individuals, plan management (certifying qualified health plans), consumer assistance and outreach, and developing the necessary information technology (IT) infrastructure to support the exchange. These exchanges may be established and operated by a state itself as a “state-based exchange.” Such states must also establish a governing board and standards of conduct. Where a state is unable or unwilling to establish and operate an exchange, PPACA directs HHS to establish an exchange—referred to by HHS as a “federally facilitated exchange.” States in which a federally facilitated exchange will operate may also enter into arrangements with HHS to assist it with certain of the exchange’s plan management or consumer assistance functions. HHS refers to such exchanges as “partnership exchanges.” As of March 2013, HHS indicates that 18 states will establish their own state-based exchanges and 33 will have a federally facilitated exchange, of which 7 states are planning to have a partnership exchange. To assist states in developing exchanges, PPACA authorized HHS to award grants to states for the planning and establishment of insurance exchanges. PPACA did not provide a specific amount of exchange grant funding, but rather, appropriated to HHS, out of any moneys in the Treasury not otherwise appropriated, an amount necessary to make grant awards. In doing so, it directed HHS to determine the total amount of funding that it will make available to each state for each fiscal year. PPACA authorized HHS to award grants to states through December 2014, and, on the basis of this authority, HHS established four separate programs for awarding exchange grants to states:

Planning Grants: Provided states with resources to conduct the initial research and planning needed to build an exchange and determine how it will be operated and governed. Once awarded, the grant funds were available for 1 year, and a state could only receive one grant.

Early Innovator Grants: Provided funding to a state or group of states that were early leaders in building their exchanges to design and implement the IT infrastructure needed to operate the exchanges. All exchange IT components, including software and data models, developed with these grants could be adopted and modified by other states to fit their specific needs. Once awarded, the grant funds were available for 2 years, and a state or group of states could only receive one grant.

Establishment Grants (Level 1): Provide funding to states pursuing any exchange model.
Funding is designed to help states undertake additional exchange establishment activities, such as making legislative/regulatory changes, establishing IT systems, and consulting with key stakeholders. Once awarded, the grant funds are available for 1 year, and a state may apply for multiple grants.

Establishment Grants (Level 2): Provide funding to states that have legal authority to implement an exchange and are further along in exchange development and pursuing a state-based exchange. Funding is designed to help states develop all exchange activities, including consumer and stakeholder engagement and support, eligibility and enrollment, plan management, and technology. Once awarded, the grant funds are available for up to 3 years, and a state can only receive one grant.

Health insurance premium rates are generally established on the basis of actuarial estimates of the cost of providing coverage over a period to enrollees in a private health insurance plan. Insurance issuers generally submit these rates to states as a formula that describes how to calculate a premium for each person or family covered on the basis of factors such as age, gender, and geographic location. Individual states are primarily responsible for ensuring that the rates within their state are reasonable—that is, adequate, not excessive, reasonable in relation to benefits provided, and not unfairly discriminatory—and they do so by establishing standards and defining state insurance departments’ authority to help enforce them. Most states require carriers to submit rate filings to state insurance departments for review prior to implementation of new rates or rate changes, although the authority of the departments to approve or disapprove the filings can vary by state. Some state insurance departments have the authority to approve or disapprove rate filings before they go into effect, while others do not have any such authority. Oversight of premium rates charged by insurance issuers historically has been primarily a state responsibility; however, PPACA established a role for HHS by requiring the Secretary of Health and Human Services to work with states to establish a process for the annual review of unreasonable premium increases in the individual and small group insurance markets. HHS has since issued regulations that established a threshold for determining whether rate increases proposed by insurance issuers require review and required insurance issuers to report information to HHS on proposed rate increases. The regulations also establish criteria and a process by which HHS will determine whether a state has an effective rate review program and thus meets HHS’s standards for conducting the rate reviews. Under the regulations, an effective rate review program must, among other things, utilize sufficient data and documentation concerning rate increases to conduct an examination of the reasonableness of the proposed increases, and make a determination of the reasonableness of the rate increase under a standard set forth in state statute or regulation. If HHS determines that a state does not have an effective rate review program, then HHS will conduct the rate reviews. As of April 2013, HHS has determined that all but nine states have an effective rate review program for both the individual and small group insurance market. To assist states in reviewing premium rates, PPACA also established a 5-year premium rate review grant program beginning in 2010.
PPACA appropriated $250 million for HHS to award grants to states from fiscal years 2010 through 2014. On the basis of this authority, HHS established two separate rate review grant programs:

Cycle I: Provided states with assistance to enhance their rate review processes—for example, by ensuring that increases in health insurance premiums and rate filings are thoroughly evaluated and, to the extent permitted by law, approved or disapproved through a comprehensive rate review process. Once awarded, grant funds were available for 1 year, and a state could only receive one grant.

Cycle II: Further assists states in improving and enhancing their rate review and reporting processes, and in meeting requirements of an effective rate review program. Once awarded, grant funds are available for up to 3 years depending on the date they are awarded, and a state may be able to receive more than one grant. (See app. I for further details on these grant types as well as the four types of establishment grants.)

Federal competitive grants generally follow a life cycle that includes four stages and several activities within each stage, as seen in figure 1. The grant process begins with the preaward stage, when the public is notified of the grant opportunity through a funding announcement, and potential grantees must submit applications for agency review. In the award stage, the agency identifies successful applicants and awards funding. The implementation stage includes grantees drawing down funds, agency monitoring, and grantee reporting, which may include financial and performance information. The closeout stage includes preparation of final reports, financial reconciliation, and any required accounting for property. Audits may occur multiple times during the life cycle of the grant and after closeout. In CCIIO, officials, known as state or project officers, are assigned to specific grants, and are responsible for managing and overseeing the life cycle of grants. This includes reviewing grant applications and evaluating whether the projects funded by the grants are on schedule and meeting goals. For exchange grants, 17 CCIIO employees work as state officers, and for rate review grants, 2 CCIIO employees work as project officers. These state or project officers also work with grants management officials from CMS’s Office of Acquisition and Grants Management (OAGM) to oversee the financial and regulatory aspects of the grants. HHS’s process to award PPACA exchange and rate review grants to states involves soliciting, screening, and evaluating applications and making official grant awards. The steps include the announcement of grant opportunities; states’ preparation and submission of applications; application eligibility determinations; objective reviewers’ evaluation of applications; HHS officials’ evaluation of applications and corresponding follow-up, or budget negotiations, with states; final grant recommendations to HHS leadership; and final award decisions and issuance of official awards. CCIIO project officers, in collaboration with other HHS officials, review statutory requirements as well as federal regulations to develop a funding opportunity announcement (FOA) to solicit applications for each exchange and rate review grant type. The FOA contains key items a state needs to review and understand prior to submitting an application.
These include the program eligibility criteria, the amount of funding available for award, the types of activities that may be funded under the grants, the instructions for completing applications, and the process and criteria for evaluating applications. Once completed, the FOA is posted on the HHS website and Grants.gov, a website run by the federal government through which states and other entities can find and apply for federal grants. After posting the FOA, CCIIO project officers may conduct a conference call to provide guidance to interested states on items such as the grant review criteria, instructions on preparing project budget proposals, and other application procedures. Information on this call is provided in the FOA, and a transcript and recording of the call may be posted afterward on the HHS website. States must prepare and submit application materials to HHS through Grants.gov, as outlined in the FOA. The application must include the amount of federal grant funding being requested, as well as other materials including various federal forms; letters of support from the governor or other applicable state entities, or both; a project narrative; a work plan that contains milestones and time frames; a proposed budget that provides line-item costs for various categories of activities to be performed using grant funding; and an organizational chart of key state personnel. Upon receiving applications, CCIIO project officers and OAGM officials conduct an initial eligibility check for all grant applications by screening them on the basis of specific eligibility criteria described in the FOA and ensuring that they contain all required documents as described above. The eligibility criteria vary depending on the type of exchange or rate review grant being awarded. Table 1 below outlines the key eligibility criteria for each type of exchange and rate review grant. As the table shows, the eligibility criteria for Level 2 exchange Establishment grants and Cycle II rate review grants require greater commitments from states as compared to the criteria for other grants—for example, to receive Level 2 Establishment grants, states must commit to establishing a state-based exchange and complete specified steps associated with doing so, such as obtaining the necessary legal authority to establish and operate the exchange. To receive Cycle II grants, states must commit to developing effective rate review programs that meet HHS requirements. Once applications are deemed eligible, a panel of independent subject-matter experts meets to discuss the applications’ strengths and weaknesses and evaluate whether they meet grant program requirements. These reviewers are recruited by CCIIO project officers, who ensure that the reviewers are unaffiliated with the exchange and rate review programs but have experience in a wide range of relevant fields and together possess the subject-matter expertise needed to review the applications. For example, according to CCIIO officials, panels for exchange grant application cycles always contain reviewers with IT-related expertise, due to the significant role IT plays in exchange establishment. CCIIO project officers assign three reviewers to each application, but the total number of reviewers within a panel depends on the number of submitted applications and may therefore vary between application cycles. Once reviewers are selected, project officers provide them with instructions on the process and guidance on the relevant FOA and statute.
In addition, OAGM officials indicated that they advise the reviewers on the proper procedures to follow in conducting their review. To then evaluate applications, the objective reviewers use various methods depending on grant type:

For most exchange grants, reviewers rely on a scoring system outlined in the applicable FOA to assess the strengths and weaknesses of various sections of an application. These application sections are tied to specified review criteria and point ranges, with a maximum total score of 100 points. For example, the project narrative portion of a state’s Establishment grant application can be awarded up to 55 points, depending on factors such as the extent to which the state clearly describes how its progress toward exchange establishment to date has informed its current grant proposal (see table 2 below). The reviewers document their proposed total score for each assigned application as well as their assessments of the applications’ strengths and weaknesses.

For rate review grants, reviewers do not utilize a scoring system. Rather, to assess the strengths and weaknesses of applications and determine whether they meet requirements, objective reviewers use a CCIIO-provided checklist that lists the requirements of the grant program. The reviewers discuss the applications and make recommendations as to whether the applications are strong enough to be funded or contain weaknesses that must be addressed prior to awarding the grant.

CCIIO project officers and OAGM officials sit in on the panel meetings but do not participate; rather, their role is to document decisions made during the meetings. For example, where applicable, the officials prepare a rank-order list, or a list of applicants ranked by objective review score. The officials also prepare summaries of each application’s strengths and weaknesses as discussed during the review panel (including objective review scores, where applicable), as well as summaries of recommendations stemming from the reviewers’ analysis of rate review grant applications. These summaries serve as official documentation of the objective review process. Questions or concerns flagged during the budget review, as well as those identified during the objective review, can also help inform HHS officials’ future oversight of states’ use of grant funds. Budget negotiations on exchange grant applications have resulted in funding changes, including an increase of about $9.7 million. Budget negotiations on rate review grant applications submitted by August 2012 have resulted in a net increase of about $8.8 million, with changes ranging from an increase of about $3,000 to an increase of about $2.5 million. After budget review and negotiations for exchange and rate review grant applications have concluded, project officers conduct a final analysis of the results of previous reviews and prepare funding recommendation memos for HHS leadership. These memos contain summaries of the awarding process thus far, including the number of submitted applications as well as the original budget request, revised budget request (where applicable), and the final recommended award amount for each applicant. For example, the August 2011 funding memo describing decisions on exchange Establishment grant applications due in June 2011 indicated that 14 states applied for Level 1 grants. All applications were deemed eligible and thus were recommended to receive awards, with recommended funding amounts ranging from about $4.2 million to about $39.4 million (due to differences in the states’ proposed activities and budgets).
The funding memos may also include scores from the objective review, where applicable, as well as high-level results from any budget negotiations conducted with states. In addition, according to HHS officials, before final awards are issued, CCIIO project officers typically recommend special terms and conditions that grantees must meet prior to receiving their full funding amount. These recommended terms and conditions are also included in the funding memo. For example, funding for contractors performing IT-related activities is initially restricted from all exchange Establishment grants until certain conditions, such as providing an itemized budget and justification for each contract, are met and sufficiently documented. In addition, Establishment grant applicants submitting applications that receive scores less than 70 during the objective review may have their entire funding amount restricted until the applicants meet certain requirements as specified by HHS, such as providing an updated work plan or budget proposal that more sufficiently meets requirements outlined in the FOA. For instance, the August 2011 Establishment grant funding memo indicated that 3 out of the 14 states’ applications received scores of less than 70, and that 2 of these states were able to address most of their application weaknesses during budget negotiations. The remaining state was recommended to have its entire funding amount restricted pending submission of updated application materials within 60 days of receiving the award notice. HHS leadership reviews the funding memo and provides the final sign-off on decisions regarding exchange and rate review grant applications, but, according to CCIIO officials, generally does not deviate from recommendations in the memo. The agency issues an official notice of grant award to each applicant, outlining the funding amount, project period, budget period, applicable terms and conditions, and administrative requirements such as financial and progress reports that grantees must submit on a regular basis throughout the course of their grant. As of March 27, 2013, HHS had awarded nearly $3.7 billion in exchange grants to states, much of which will be used to fund activities related to developing IT systems for states’ exchanges. HHS also awarded about $159 million in rate review grants to states; to date, these funds have been used for five key activities related to enhancing states’ rate review processes, including enhancing the transparency of issuers’ rate review filings. Between September 2010, when exchange grants were first awarded, and March 27, 2013, HHS awarded 132 exchange grants totaling nearly $3.7 billion to 50 states. These awards included Exchange Planning, Early Innovator, and Level 1 and 2 Establishment grants. To date, the majority of funding (about $3.4 billion, or 92 percent) has been awarded in the form of Level 1 and Level 2 Establishment grants, while Exchange Planning grants make up approximately 1 percent of total exchange grant funding (see table 3). HHS oversees states’ use of grant funds by reviewing and analyzing state-reported information and conducting some limited verification of state data. HHS has several mechanisms to address identified concerns or noncompliance identified through routine monitoring and to respond to requests to amend grants’ terms.
CCIIO’s regular oversight process for exchange and rate review grants consists of a variety of mechanisms through which project officers regularly review information reported by grantees as well as communicate with grantees. Additionally, this oversight is supplemented by independent verification through internal analysis and periodic reviews. CCIIO’s regular oversight mechanisms are listed below in table 5. As a condition of receiving an exchange or rate review grant, CCIIO requires grantees to prepare and submit regular progress reports covering programmatic activities, progress in meeting program goals, and details about expenditures. CCIIO requires exchange program grantees to provide progress reports describing the current status of their activities in areas such as making legislative/regulatory changes, establishing IT systems, building organizational infrastructure and staffing resources, establishing an operational budget and management plan, and consulting with key stakeholders. Originally, states were required to submit these reports quarterly, but officials indicated that they changed to semiannual reporting to reduce the burden on states, since states were providing information and communicating with project officers frequently. As part of this progress report, CCIIO requires exchange program grantees to provide information on the amount of grant funds spent over the life of the grant across key budget categories. These categories include state personnel, travel, and contractors and consultants. CCIIO also requires grantees to identify the individual contracts they have awarded with grant funds. As with exchange grants, CCIIO requires rate review grantees to provide quarterly progress reports, which include data on their rate review activities, the grantees’ original goals, deviations or changes to original goals, accomplishments to date, significant activities undertaken and planned, and any relevant issues or setbacks that occurred over the prior 12 months. These reports also include expenditure information similar to that reported by exchange grantees. Further, CCIIO requires that, each quarter, both exchange and rate review grantees provide financial reports that detail financial activities, including the amount of cash transactions grantees made with grant funds during the quarter. In addition to requiring regular reports, CCIIO project officers have regular phone communication with grantees to discuss grantee reports and activities, clarify guidance, and provide technical assistance to grantees with challenges they encounter, and, according to officials, thereby maintain an awareness of grantees’ ongoing activities. According to the standard operating procedures for both programs, project officers call each grantee at least quarterly. According to CCIIO officials, these contacts are in practice much more frequent than the minimum for many grantees under both programs. For example, according to CCIIO officials, project officers generally communicate at least twice per week with exchange grant recipients. As part of their ongoing monitoring, CCIIO officials regularly review and summarize the programmatic and financial information obtained from grantees’ progress reports and monitoring calls. For exchange grants, CCIIO officials indicated that each week, project officers submit internal project office summaries about the status of the states’ exchange implementation efforts.
Additionally, each month, project officers also develop detailed narratives for exchange grants, which include information on how much grant funding the state has spent, the states’ progress, barriers they may face, and any action items to be taken over the next 30 days. For rate review grants, project officers also prepare weekly summaries of grantee activities, and quarterly they summarize each state’s progress in an Excel tracking sheet. This analysis includes information on how much the grantee has spent, grantee accomplishments, and issues requiring follow-up. Finally, on a quarterly basis program staff provide briefings to CCIIO leadership, and issues regarding grantees’ progress are discussed. According to CCIIO officials, project officers routinely oversee and assess exchange and rate review grantees’ financial activities by monitoring the amount and pace of the states’ drawdown of grants. Each week, OAGM staff provides project officers with reports from OAGM’s financial system on the amount of funding each grantee has withdrawn from the grant account, according to officials. Project officers use the reports to look for unusual events such as large drawdowns or no drawdowns. Withdrawals are reported at the overall grant level, not by specific expenditures or general categories of expenditures. According to CCIIO officials, if review of these reports highlights potential issues, they will follow up with grantees and determine whether further action is warranted. In addition to analysis of grantee drawdowns, CCIIO has other mechanisms that can provide independent assessments of grantees’ use of funds. CCIIO officials also indicated that they conduct site visits—on-site assessments of exchange and rate review grantees’ activities—and that this provides a measure of independent verification of grantee activities. CCIIO officials indicated that exchange grant site visits are in their early stages and have been utilized to provide technical assistance to help grantees establish their exchanges. For example, CCIIO officials indicated that as of April 11, 2013, they have conducted 26 technical assistance site visits at exchange grantees. CCIIO’s draft procedures for conducting site visits indicate that they will ultimately conduct a site visit to each state at least once prior to certification of a state’s exchange. The procedures indicate that the visit should be conducted by a team consisting of a project officer and key CCIIO and CMS staff, and the site visit team will review operational aspects of the proposed exchange including its enrollment process and financial management. Within 4 weeks of completing the site visit, the project officer should prepare a written report that will include a summary of the site visit, and recommendations to the state, if applicable. For rate review grantees, CCIIO’s procedures do not address the frequency of site visits, but indicate site visits are used to further engage grantees, monitor programmatic progress toward established milestones, track fiscal performance, ensure compliance with programmatic and statutory requirements, and mitigate programmatic risks. The procedures indicate that selecting grantees for site visits will be determined by a number of factors including the stage of programmatic implementation and complexity of the grantee’s rate review proposal, and the need for more hands-on auditing or budget review, or both. CCIIO officials indicated that as of April 11, 2013, CCIIO has conducted two site visits to rate review grantees.
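The drawdown screening described above amounts to a small set of rules applied to weekly withdrawal reports. The sketch below shows one way such a screen could work; it is illustrative only, and the thresholds, data layout, and function name are assumptions rather than CCIIO's actual procedure.

```python
# Illustrative sketch of a drawdown screen like the one described above.
# Thresholds and data layout are hypothetical, not CCIIO's actual rules.

def flag_unusual_drawdowns(weekly_drawdowns, award_amount,
                           large_share=0.25, idle_weeks=12):
    """Flag large single-week drawdowns and long stretches with none.

    weekly_drawdowns: amounts withdrawn each week, oldest first.
    award_amount: total grant award.
    """
    flags = []
    for week, amount in enumerate(weekly_drawdowns, start=1):
        if amount > large_share * award_amount:
            flags.append(f"week {week}: large drawdown of ${amount:,.0f}")
    recent = weekly_drawdowns[-idle_weeks:]
    if len(recent) == idle_weeks and all(amount == 0 for amount in recent):
        flags.append(f"no drawdowns in the last {idle_weeks} weeks")
    return flags

# Hypothetical grant: $10 million award, one large early withdrawal, then idle.
history = [3_000_000] + [0] * 15
for flag in flag_unusual_drawdowns(history, 10_000_000):
    print(flag)
```

Because withdrawals are reported only at the overall grant level, a screen like this can prompt follow-up with a grantee but cannot by itself show what the funds were spent on.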
CCIIO requires each state receiving exchange grants to undergo three Establishment Reviews over the course of its grant period. CCIIO uses these reviews to assess states’ activities and provide systematic feedback on their progress towards development of an exchange. The reviews are conducted at certain readiness benchmarks, rather than specific times.

Planning. The first review is the planning review, which generally occurs in the first quarter after the grant is awarded. The state must demonstrate preliminary progress towards establishing an exchange, and it receives feedback. The review results in a list of tasks for the state to complete before the design review.

Design. The second review is the design review, which occurs after states have selected their key contractors for exchange establishment, typically about 6 to 9 months after the planning review. States are expected to have established business requirements and developed detailed plans and procedures for key activities for their exchange.

Operational. The final operational review occurs after exchange development and implementation is complete, to test the exchange and demonstrate that it is ready to begin operation.

After each of these reviews, CCIIO’s procedures require project officers to prepare a postestablishment report describing deliverables by both CCIIO and the grantee. These reports are designed to provide a summary of the progress the state has made in meeting the necessary requirements related to establishing an exchange. The report also serves as a guide in identifying action items and next steps to ensure adherence to mandatory timelines. According to CCIIO officials, in calendar year 2012 they completed planning reviews for 24 grantees and design reviews for 28 grantees, covering 31 of the 38 states with Level 1 Exchange Establishment grants. As of March 2013, they had not completed any operational reviews, but planned to complete them between August and September 2013. Finally, all recipients of these grants are required to obtain an A-133 Audit. According to CCIIO guidance, CCIIO reviews the audit for each grantee. According to CCIIO officials, they plan to use the A-133 Audit results as part of their Operational Establishment Reviews for exchange grants and as part of future annual reviews for rate review grants. The guidance also calls for grantees to address any significant findings from the audit and to develop plans for mitigating future problems. CCIIO officials indicated that if CCIIO’s regular oversight process identifies instances when a grantee may not be complying with requirements of the grant, CCIIO uses a five-tier response to address them, in which CCIIO advances its response to the next tier if its earlier responses did not address the issue (see fig. 5). In the first tier, CCIIO officials indicated they discuss compliance issues with the exchange or rate review grantee and request a mitigation strategy, which CCIIO documents in the project records. The second tier calls for production of management assessment items by the state, such as a documented business plan to address the compliance issue in a specific time frame. In the third tier, CCIIO imposes conditions on the grant award, which identifies the reason for the condition and limits the grantee’s access to funds until the grantee provides requested documentation. In the fourth tier, CCIIO restricts the grantee’s access to funds until it reviews and approves the grantee’s corrective action.
CCIIO’s final action in the fifth tier is to terminate the grant. According to OAGM officials, to date they have not had to impose conditions or restrictions on grants based on their regular oversight. Further, OAGM officials said that so far they had not identified any misuse of grant funds on the basis of established program criteria. If a state seeks to make certain changes to the terms of its exchange or rate review grant, CCIIO requires that the state obtain prior approval, and has established procedures to review the appropriateness of any such requests. CCIIO refers to these as postaward actions, and they include instances such as when a state wants to alter substantially the allocation of funds between major activities funded by a grant (called a budget revision), or extend the time frames for performing grant activities without changing the award amount (called a no-cost extension). Eight types of routine, grantee-initiated postaward actions are described in table 6 below. For example, if a state wants to reallocate more than 25 percent of an exchange grant among budget categories, CCIIO requires the state to work with the appropriate project officer to obtain approval by OAGM. Under its procedures, CCIIO requires the state to provide supporting documentation to justify the proposed rebudgeting. The project officer will review the request and make a recommendation to OAGM. If approved, OAGM will amend the grant agreement to reflect the revised budget. Officials indicated that the same general procedures apply to other types of postaward actions. Officials indicated that CCIIO also requires states in which the anticipated exchange type changes (e.g., from a state-based exchange to a federally facilitated exchange) to obtain approval for a change in the scope of services permitted under the state’s original grant or to terminate the grant. As of April 11, 2013, CCIIO officials indicated they are working with 11 states to determine the appropriate adjustments for their grants to reflect changes in the scope of services they will provide. For example, CCIIO officials indicated that Arizona was originally awarded funding to establish a state-based exchange, but the state subsequently decided to default to a federally facilitated exchange. CCIIO and the state are currently determining the extent to which the state will continue to participate in the grant program. We provided a draft of this report to HHS for its review and comment. HHS provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Exchange Planning grants. Purpose: Provided states with 1 year of funding to assist with background research and initial planning activities related to the potential implementation of a state-based exchange, including plans for stakeholder involvement, governance structure, technical infrastructure, and necessary policy actions related to the exchange. States could only receive one grant. Award amount: $0–$1 million; depends on states’ proposed activities and budget (along with the Department of Health and Human Services’ assessment of the proposal).

Early Innovator grants. Purpose: Provided 2 years of funding to a select number of states or groups of states that demonstrated leadership in establishing state-based exchanges, in particular by beginning development of cutting-edge, cost-effective, consumer-friendly IT for their exchanges. Awards were intended to allow the states to develop rigorous IT models and best practices that could be adopted and tailored by other states. States could only receive one grant. Award amount: Variable; depends on states’ proposed activities and budget (along with HHS’s assessment of the proposal).

Level 1 Establishment grants. Purpose: Provides up to 1 year of funding to support states’ continued progress in carrying out activities in connection with a state-based or federally facilitated exchange, including a partnership exchange. Funding is awarded to help states undertake specific establishment activities relevant to a state’s chosen exchange model. For example, states pursuing state-based exchanges may receive funding for activities within 12 categories, including legal authority and governance, consumer and stakeholder engagement and support, and plan management. States preparing to support federally facilitated exchanges are eligible to use grant funding for a subset of these activities, as outlined in the funding opportunity announcement. States may receive multiple grants. Award amount: Variable; depends on states’ proposed activities and budget (along with HHS’s assessment of the proposal).

Level 2 Establishment grants. Purpose: Provides up to 3 years of funding to states that are further in their exchange establishment process and are specifically establishing state-based exchanges. Funding is awarded to help states undertake all exchange activities. To be eligible, states must have met certain milestones, including (1) obtaining the necessary legal authority to establish and operate the exchange; (2) establishing a governance structure for the exchange; and (3) submitting (to HHS) an initial plan for funding the long-term operational costs of the exchange. States may only receive one grant. Award amount: Variable; depends on states’ proposed activities and budget (along with HHS’s assessment of the proposal).

Cycle I rate review grants. Purpose: Provided 1 year of funding to states or U.S. territories to help develop or enhance their rate review processes as well as their processes for reporting their rate increase patterns to HHS. States/territories could only receive one grant.

Cycle II rate review grants. Purpose: Provides up to 3 years of funding (depending on the date of award) to further assist states or U.S. territories with developing or enhancing their rate review and reporting processes, with the specific purpose of helping states meet HHS’s criteria for effective rate review programs. To be eligible, states that at the time of application do not have effective rate review programs in their individual or small group health insurance markets, or both, must commit to using grant funds to develop effective programs within 12 months of receiving the grant. States that at the time of application meet the effective rate review requirements must commit to using grant funds to further enhance their rate review programs. States are eligible for a second Cycle II grant if they have drawn down at least 60 percent of their previous Cycle II grant by August 1, 2013, and if HHS determines that sufficient funding is available after all eligible applications are considered for an initial Cycle II award. Additionally, California is eligible for two Cycle II grants because it has two regulatory agencies that are each primarily responsible for regulating a portion of the state’s private health insurance market. Award amount: Total award amounts are made up of the following subawards: Baseline award: $3 million (for grants awarded in 2011) or $2 million (for grants awarded after 2011). Workload award: variable; depends on states’ population and the number of health insurance issuers with 5 percent or more market share in the state. Performance award: approximately $600,000 (for grants awarded in 2011) or $400,000 (for grants awarded after 2011); given to states that have the legal authority to disapprove unreasonable rate increases in their individual or small group markets.

In addition to the contact named above, Randy DiRosa, Assistant Director; Priyanka Sethi Bansal; David Lichtenfeld; Laurie Pachter; and Stephen Ulrich made key contributions to this report.

PPACA required the establishment of health insurance exchanges and a process for the annual review of unreasonable increases in insurance premiums charged by issuers of health coverage in each state. To assist states in establishing exchanges and in enhancing their ability to review issuers' premium rate increases, the law established new grant programs under which HHS is authorized to award grants to states through 2014. The law appropriated an unspecified amount of funds for exchange grants, and appropriated $250 million to HHS for rate review grants. GAO was asked to provide information on HHS's processes to award and oversee these grants. In this report, GAO describes (1) the process HHS uses to award exchange and rate review grants to states; (2) the amounts of grants and key activities states funded through the grants; and (3) HHS's process for overseeing states' use of the grants. GAO reviewed laws, regulations, and HHS's procedures that established the processes for awarding the grants. GAO obtained and analyzed data on all exchange and rate review grants awarded from August 2010 through March 2013. GAO also reviewed HHS's procedures for overseeing the grants, and interviewed officials responsible for grants oversight. HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate. The Department of Health and Human Services (HHS) has a structured process for awarding Patient Protection and Affordable Care Act (PPACA) exchange and rate review grants to states. These grants are designed to help states establish exchanges—new health insurance marketplaces through which individuals and small businesses can obtain insurance—and review issuers' proposed rate increases. The grant award process consists of a series of steps during which the agency solicits, screens, and evaluates grant applications, and then makes funding awards. Once HHS deems that applications meet program eligibility criteria, applications go through various reviews, including a review by independent experts and HHS officials. On the basis of these reviews, HHS determines whether states' proposed activities are allowable, and if so, whether the associated requests for grant funding are reasonable.
Based on recommendations from the reviews, HHS determines whether to award grants to states, and if so, the amounts of any grants to be awarded. As of March 27, 2013, HHS had awarded about $3.8 billion in PPACA exchange and rate review grants that states have used or plan to use to develop exchanges and enhance rate review capabilities. This includes nearly $3.7 billion in exchange grants awarded to 49 states and the District of Columbia. Among states that have received exchange grants, the amount of funding provided to states ranges from $0.8 million (Wyoming) to about $911 million (California). Approximately half the states were awarded under $30 million in exchange grant funding, while 10 states were awarded over $100 million. As of February 2013, states had drawn down approximately $380 million of their exchange grant funds. GAO's review of a subset of exchange grantee financial reports indicated that nearly 80 percent of expenditures have been for contracts and consulting services, much of which states spent on key activities for developing exchange information technology systems. HHS also awarded about $159 million in rate review grants to 46 states and the District of Columbia, much of which has funded five key activities, including expanding the scope of rate review programs and enhancing the transparency of the rate review process. HHS's process for overseeing states' use of PPACA grant funds consists of several mechanisms. The agency regularly monitors states' grant activities through its review of program and financial information reported by states, as well as ongoing communication with grantees. HHS's process also includes mechanisms to periodically verify state-reported information, including its analysis of states' withdrawal of grant funds and site visits. To date, however, use of site visits has been limited. HHS has a number of mechanisms it can utilize, such as restricting a grantee's access to funds, if its monitoring identifies concerns or compliance issues, but agency officials indicated they have not identified any misuse of grant funds or compliance issues to date.
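The Cycle II subaward structure described in the appendix above is essentially additive arithmetic. The sketch below works through one hypothetical award; the workload amount is invented, and the baseline and performance figures simply follow the approximate dollar amounts cited in this report.

```python
# Illustrative computation of a Cycle II rate review award from its subawards.
# The workload amount is hypothetical; baseline and performance figures follow
# the approximate amounts cited in this report.

def cycle_ii_total(year_awarded, workload_award, can_disapprove_rates):
    baseline = 3_000_000 if year_awarded == 2011 else 2_000_000
    performance = 0
    if can_disapprove_rates:  # legal authority to disapprove unreasonable rates
        performance = 600_000 if year_awarded == 2011 else 400_000
    return baseline + workload_award + performance

# A hypothetical state awarded in 2011, with a $1.2 million workload award and
# authority to disapprove unreasonable rate increases in its markets:
print(cycle_ii_total(2011, 1_200_000, True))  # 4800000
```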
EAS, the nation’s primary alerting system, provides capacity for the United States to issue alerts and warnings to the public in response to emergencies. FEMA is responsible for administering EAS at the national level, while FCC manages EAS participation by media-related communications service providers. FCC provides technical standards and support for EAS, rules for its operation, and enforcement within the over-the-air broadcast, cable, and satellite broadcasting industries. Presidential, or national-level, EAS alerts use a hierarchical distribution system to relay important emergency messages. As the entry point for national-level EAS messages, FEMA is responsible for distributing presidential EAS alerts to National Primary stations, often referred to as Primary Entry Point (PEP) stations. Broadcasts of these national-level alerts are relayed by the PEP stations across the country to radio and television stations that rebroadcast the message to other broadcast stations and cable systems until all EAS participants have been alerted. This retransmission of alerts from EAS participant to EAS participant is commonly referred to as a “daisy chain” distribution system. FCC rules require EAS participants to install FCC-certified EAS equipment. Radio and television broadcast stations, cable companies, wireless cable companies, direct broadcast satellite, and satellite radio generally must participate in the system and transmit alerts initiated by the President. State and local governments determine the content and transmission procedures of their alerts, in conjunction with local broadcast radio and television stations. These procedures are specified in state EAS plans filed with FCC. In 2007, FCC adopted a Further Notice of Proposed Rulemaking to explore EAS-related issues, such as how non-English speakers may best be served by national, state, and local EAS, and to reexamine the best way to make EAS accessible to persons with disabilities. Organizations that participate in EAS planning and administration include the Primary Entry Point Administrative Council (PEPAC), the Society of Broadcast Engineers, and associations such as the National Association of Broadcasters and individual state broadcasting associations. States and localities organize emergency communications committees whose members often include representatives from broadcasters or local television and radio stations. These committees agree on the chain of command and other procedures for activating EAS alerts. In June 2006, the President issued Executive Order 13407, entitled Public Alert and Warning System, effecting a policy that the U.S. have a comprehensive integrated alert and warning system, and detailing the responsibilities of the Secretary of Homeland Security in meeting this requirement. The order also specified the level of support expected from other departments and agencies in meeting the requirements for a more robust federal warning system. The Secretary of Homeland Security was ordered to “ensure an orderly and effective transition” from current capabilities to the system described by the executive order, and to report on the implementation of the system within 90 days of the order and on at least a yearly basis thereafter. The FEMA IPAWS program was initiated in 2004, and the development and implementation of IPAWS has become the programmatic mechanism to carry out the executive order.
IPAWS is defined by FEMA as a “system of systems,” which is intended to eventually integrate existing and new alert systems, including EAS. That is, EAS is expected to be superseded as the nation’s primary alert function by IPAWS, with EAS acting as one of its component parts and as one of IPAWS’s mechanisms to disseminate alerts. Another intended partner system is NOAA’s National Weather Radio (NWR). NWR broadcasts National Weather Service forecasts and all-hazard warnings. Non-weather emergency messages are broadcast over NWR at the request of federal, state, and local officials in time-critical situations when public safety is involved. The Warning, Alert, and Response Network Act of 2006 (WARN Act) required FCC to adopt relevant technical standards, protocols, procedures, and other technical requirements to enable commercial mobile service providers (wireless providers) to issue emergency alerts. The act established an advisory panel called the Commercial Mobile Service Alert Advisory Committee (CMSAAC), to recommend the technical specifications and protocols that will govern wireless providers that participate in emergency alerting. The CMSAAC was chaired by then-FCC Chairman Kevin J. Martin and included 42 other members, representing stakeholders in all levels of government and the private sector. FCC adopted most of the recommendations made by the committee regarding the required capabilities of wireless providers to transmit alerts, as well as the proposal to develop a Commercial Mobile Alert System (CMAS). Figure 1 displays the conceptual architecture of IPAWS, with EAS, NWR, and CMAS as mechanisms for disseminating alerts. The Common Alerting Protocol (CAP) is an open, non-proprietary digital message format being used as a standard for new, digitized alert networks using multiple technologies. CAP is compatible with multiple applications and telecommunication methods, and has been developed for use by emergency management officials in sending all types of alert messages. CAP can be used as a single input to activate multiple warning systems, and is capable of geographic targeting and multilingual messaging. FEMA—required by the executive order to adopt alert standards and protocols—intends to adopt CAP and to publish its IPAWS CAP v1.1-EAS Profile (CAP Profile) standard. In an FCC report and order released in July 2007, FCC promulgated new rules, including a requirement for all mandatory EAS participants to accept messages using CAP, no later than 180 days after FEMA adopts the CAP standard. EAS remains the primary national-level public alert system and serves as a valuable public alert and warning tool. It remains available as a mechanism for the President to issue national warnings, and it allows state and local governments to generate weather warnings, America’s Missing: Broadcast Emergency Response (AMBER) Alerts, and other public emergency communications. Nonetheless, as we previously reported, EAS exhibits longstanding weaknesses that continue to limit its effectiveness. In particular, the reliability of the national-level relay system—which would be critical if the President were to issue a national-level alert— remains questionable due to a lack of redundancy among key broadcasters, gaps in coverage, insufficient testing of the relay system, and inadequate training of personnel. Further, EAS alerts have limited coverage, dissemination means, and geographical specificity. 
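To make the CAP format described above concrete, the sketch below assembles a minimal CAP-style alert message. It is illustrative only: the element names follow the publicly documented OASIS CAP v1.1 schema, but the identifier, sender, event details, and area are hypothetical, and the sketch does not implement FEMA's IPAWS CAP Profile.

```python
# Minimal sketch of a CAP-style alert (OASIS CAP v1.1 element names).
# The identifier, sender, event details, and area below are hypothetical.
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.1"
ET.register_namespace("", CAP_NS)

def qualified(tag):
    return f"{{{CAP_NS}}}{tag}"

def build_alert():
    alert = ET.Element(qualified("alert"))
    # Required top-level fields: who sent the alert, when, and how to handle it.
    for tag, text in [
        ("identifier", "EXAMPLE-2009-0001"),
        ("sender", "alerts@example.gov"),
        ("sent", "2009-06-01T12:00:00-05:00"),
        ("status", "Exercise"),   # marks this as a test, not a real emergency
        ("msgType", "Alert"),
        ("scope", "Public"),
    ]:
        ET.SubElement(alert, qualified(tag)).text = text
    # The <info> block carries the event description; the <area> block is what
    # enables the geographic targeting noted above.
    info = ET.SubElement(alert, qualified("info"))
    for tag, text in [
        ("language", "en-US"),
        ("category", "Met"),
        ("event", "Severe Thunderstorm Warning"),
        ("urgency", "Expected"),
        ("severity", "Severe"),
        ("certainty", "Likely"),
        ("headline", "Severe thunderstorm warning for Example County"),
    ]:
        ET.SubElement(info, qualified(tag)).text = text
    area = ET.SubElement(info, qualified("area"))
    ET.SubElement(area, qualified("areaDesc")).text = "Example County"
    return alert

print(ET.tostring(build_alert(), encoding="unicode"))
```

Because one machine-readable message like this can be consumed by many systems, a single CAP input could in principle activate EAS, NWR, and CMAS dissemination paths at once, which is the integration idea behind IPAWS.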
FEMA has several projects under way to address some of these weaknesses, but little progress has been made and EAS remains effectively unchanged since our last report, issued in March 2007. Although EAS was established to allow the President to communicate with the public, it primarily serves as a means of disseminating emergency alert and warning information at the state and local level. EAS has never been used to transmit a national-level alert, but instead has evolved into an important public alert and warning tool for state and local governments. State and local emergency operations managers can request activation of EAS for state and local public alert and warning needs. EAS participants transmit state and local alerts via radio and television or other media facilities, such as cable or satellite. These alerts include weather warnings, AMBER Alerts, or other emergency communications, such as evacuation notices. EAS participants may decide individually whether to transmit alerts that originate at the state or local level. Approximately 90 percent of all EAS messages are weather alerts generated by NOAA’s National Weather Service (NWS). NWS broadcasts forecasts, warnings, watches, and other non-weather hazard information, and supplies such information to broadcast and cable entry points designated in approved EAS state and local plans. EAS has longstanding weaknesses that have not been resolved since we reported on them in March 2007. These weaknesses continue to limit the effectiveness of EAS and include (1) a lack of redundancy, (2) gaps in coverage, (3) a lack of testing, and (4) inadequate training for EAS participants. In addition to these weaknesses, EAS is also hampered by how alerts are disseminated to the public.

Lack of redundancy. FEMA lacks alternative means of reaching EAS participants should its primary connection fail. Specifically, FEMA can distribute national-level alerts to 35 PEP stations (which serve as the entry points for Presidential alerts) and to 860 public radio stations across the country via EAS phone lines and satellite connectivity, respectively. However, FEMA lacks an alternative means of reaching these participants if those primary connections fail. Furthermore, if a primary connection to a PEP station failed, all of the other EAS participants that rely on that station via the daisy chain relay system would fail to receive alerts. This lack of redundancy could have serious consequences. For example, if a PEP station were disabled during a disaster in a major metropolitan area, an EAS alert would likely fail to reach a sizable portion of the population because FEMA potentially would not have access to other stations in that area.

Gaps in coverage. Gaps in PEP station broadcast coverage could hinder the successful dissemination of EAS alerts, as some broadcast stations might have difficulty in monitoring their assigned PEP station because the station is geographically distant. Some states, such as Maine, are not covered at all by the PEP system and would have to pick up a national-level message from an alternate source, such as public radio. This might not be a fully reliable option, however. Unlike PEP stations, public radio stations do not necessarily have extra fuel and generators on-site to help ensure continuous operations following a disaster. Some broadcasters we contacted expressed concern that other factors might impede their ability to receive alerts from PEP stations.
For example, some PEP broadcast signals are too weak to overcome geographical impediments, such as mountains; due to interference, broadcast signals generally do not travel as well during the day as at night and, therefore, have inconsistent EAS coverage; and high definition radio signals can overpower or distort PEP broadcast signals. Some states, such as Washington, have developed systems to augment the PEP network to ensure that EAS messages are disseminated throughout the state, but not every state has taken such action. As shown in figure 2, PEP daytime broadcast coverage leaves large geographic areas uncovered by EAS. FEMA officials noted that there is a significant difference between daytime and nighttime coverage. FEMA estimated that 82 percent and 75 percent of the population are covered by nighttime and daytime PEP signals, respectively.

Lack of testing. FEMA does not perform ongoing national-level tests of the daisy chain relay system to ensure that it would work as intended during a national-level alert. FCC requires stations to test their EAS equipment and FEMA is required to perform weekly tests of connections to the 35 PEP stations, but there is no requirement for a national-level test of the relay system. In January 2007, in response to our ongoing work at that time, FEMA conducted a national-level EAS test. According to FEMA, the test demonstrated an effective satellite connection to public radio stations. However, three PEP stations failed to receive and effectively rebroadcast the national-level test message due to hardware and software issues, which FEMA stated have since been resolved. FEMA has not held another test since 2007, although DHS agreed with the intent of our previous recommendation that FEMA develop and implement a plan to verify the dependability and effectiveness of the relay distribution system. DHS had also stated that FEMA would begin to conduct new quarterly “over-the-air” tests, but these have not taken place. In addition, FEMA has no plans for testing the relay distribution system. Consequently, there is no assurance the national-level relay would work should the President need to activate EAS to communicate with the American people. The recent failure of an accidental national-level alert suggests that problems remain in the relay system. In this incident, a national-level (Presidential) alert, intended as a test, was inadvertently initiated in Illinois. Although it was a false alarm, the broadcast airwaves were “seized,” as intended for a national-level alert. However, the alert failed to be properly disseminated by all EAS participants. In particular, cable companies, which should disseminate such an alert in an emergency situation, failed to receive it. According to FEMA officials and industry stakeholders, the failure was due to a malfunction of cable providers’ EAS equipment. While FEMA officials say this situation has since been rectified, no testing has been done to confirm that the equipment used by cable companies would work properly. Coupled with the results from the January 2007 test, these events raise concerns about the national-level relay system and further highlight the need for additional testing.

Inadequate training for EAS participants. Another longstanding weakness of EAS is inadequate training for EAS participants, both in using EAS equipment and in drafting EAS messages.
In 2007, we reported that several EAS stakeholders, including state and local officials, identified inadequate training as a limitation of EAS and cited a need for additional instruction in equipment use and message creation. Our current work indicates that such training is still lacking. For example, a state official told us that users and message originators need additional training to know how to properly craft and initiate a message, especially since emergency managers vary in their level of expertise. Similarly, a number of respondents to our state survey of emergency managers cited a need for training. For example, one state emergency management representative suggested that training courses be established for emergency managers, broadcasters, and cable providers. To address training inadequacies, we previously recommended that FEMA develop a plan to verify that EAS participants have the training and technical skills to issue effective EAS alerts. DHS agreed with the intent of the recommendation and noted that FEMA would improve training for EAS operators, as well as make the system more user-friendly. According to FEMA, it is currently analyzing and assessing EAS operator training needs, but it has not yet implemented any new training initiatives. In addition to the aforementioned weaknesses, EAS is also hampered by how alerts are disseminated to the public. Just as gaps exist in PEP station coverage and in EAS participants' distribution of alerts, large portions of the population remain uncovered because EAS relies on certain media, such as radio and television broadcasts, to provide alerts. Specifically, EAS's reliance on broadcast and other media currently excludes other communications devices, such as cell phones. In addition, it remains difficult for EAS to reach distinct segments of the population. For example, alerts are typically provided only in English, and alerting mechanisms provide unequal access for persons with disabilities. In particular, individuals with hearing and vision disabilities may be subject to inconsistent aural and visual information in EAS alerts. Further, effective public alerting via EAS is also hindered by its limited ability to target alert messages to specific geographic locations. For example, a local emergency manager told us that a message generated by his county would be automatically sent to 10 neighboring counties, potentially causing unnecessary alarm for alert recipients in surrounding areas. FEMA officials stated that projects are under way to address some of EAS's operational weaknesses. For example, to improve EAS coverage, FEMA is planning to expand the number of PEP stations from 35 to 69 by 2011. However, progress has been slow since the PEP expansion effort was initiated in 2006: of three PEP stations scheduled for addition in 2007, FEMA completed only one in 2008, and the other two were only partially completed. At the time of our review, FEMA had selected six additional area locations for new PEP stations and initiated negotiations with radio stations in three of the areas. FEMA cited several challenges in expanding the number of PEPs. Specifically, officials told us that the process is often slowed by negotiations with broadcast stations, soil sampling, and construction. In addition, PEPAC representatives said regulatory procedures, such as those requiring stations to house fuel on-site, lengthen the process. To add redundancy and improve the reliability of the relay system, FEMA is also developing the Digital Emergency Alert System (DEAS).
DEAS is expected to provide additional connections with EAS entry points—the PEP stations—and provide for the direct transmission of a voice, video, or text alert to stations using the public broadcast system satellite network. According to FEMA, DEAS was successfully piloted twice. However, despite concluding the pilots in 2007, FEMA had not begun implementing DEAS at the time of our review. FEMA planned to deploy DEAS in 2008 to 13 states and 1 territory, including those that participated in the second pilot, followed in 2009 by a deployment to 16 additional states prone to weather hazards and then to all states. However, DEAS deployment did not occur, and FEMA did not provide an explanation for this delay. Currently, FEMA plans to implement DEAS using a phased approach beginning in mid-2009. Other FEMA initiatives related to EAS include the integration of XM satellite transmission paths and the implementation of CAP. Specifically, FEMA plans to deliver national-level EAS messages from FEMA to PEP stations by establishing a satellite connection via XM Satellite Radio to complement its existing phone connection. FEMA targeted completion of XM satellite connectivity to key EAS sites by 2007; however, it is now scheduled for completion in 2009. While no reason was provided for the apparent delay, FEMA noted that it is currently working on certification and accreditation issues. Separately, FEMA is working to develop and implement the CAP Profile; however, FEMA has not defined how CAP will work within EAS, including how EAS participants will rebroadcast a CAP message. Further, CAP is not currently capable of carrying a live audio message. Instead, to satisfy the requirement to carry a presidential message, EAS participants will be required to link to a FEMA server to stream audio messages. Finally, broadcasters expressed concern that CAP is not ready to be used with EAS, leading them to question its utility, as well as the necessity of obtaining CAP-compliant equipment. FEMA began initiatives related to IPAWS in 2004, yet national-level alert capabilities have remained unchanged and new standards and technologies have not been adopted. IPAWS has operated without a consistent strategic vision and has been adversely affected by shifting program vision, a lack of continuity in planning and program direction, and poorly organized program information from which to make management decisions. Therefore, as state and local governments forge ahead with their own alert systems, IPAWS program implementation has stalled and many of its functional goals have yet to reach operational capacity. Additionally, FEMA's investment in the IPAWS pilot projects—seed initiatives intended to test alert technologies and form the foundation of IPAWS—has resulted in few lessons learned and few advances in alert and warning systems. Furthermore, FEMA does not report on IPAWS spending or progress in achieving goals, which limits transparency and accountability for program results. Although IPAWS has existed since 2004 with the original objective of modernizing and integrating public alert and emergency warning systems across federal, state, and local governments, national-level alert system capabilities remain unchanged and have yet to be integrated. In June 2006, Executive Order 13407 specified the responsibilities of DHS and FEMA with respect to a public alert and warning system, establishing 10 functions for the Secretary of Homeland Security.
Since the executive order, FEMA has launched or continued, under the IPAWS program, several projects intended to address the 10 functions specified in the order. Table 1 displays the functions of the executive order, FEMA's ongoing projects aimed at satisfying those functions, and the status and progress of the projects. While there are IPAWS projects under way designed to meet the requirements of the executive order, these projects have shown little progress. Many IPAWS initiatives have been ongoing for several years with little functional contribution to the improvement or modernization of public alert and warning. In fact, some of the projects cited by FEMA as initiatives satisfying the requirements of the executive order have been under development since the inception of IPAWS and have yet to be completed. For example, one intention of IPAWS is to integrate various alert systems into a "system of systems." There are both federally and locally operated alert systems, which are to be integrated under IPAWS. At the federal level, NOAA directs the NWS alerting network, and at the state level, 42 emergency management directors who responded to our survey reported existing or planned systems in their states. According to FEMA briefings and documents, the intention is for local, state, and tribal systems to be interoperable with IPAWS; however, states and localities operate their own distinct systems. In fact, of the 42 state survey respondents with alert systems, only 11 have systems that have been integrated with or are automatically triggered by the existing EAS. At present, the extent of efforts to integrate state and local systems is a nationwide inventory of systems; there are not yet any architectural or logistical plans for integrating these systems. In effect, as deployment of state and local alert systems continues, integration into the IPAWS system could become increasingly complicated and difficult. As another example, FEMA's efforts to have IPAWS deliver warnings through diverse media have been limited. As early as 2005, FEMA planned efforts to provide warning messages to subscribers via email and to telephones, text message devices, cell phones, pagers, and Internet desktops. These capabilities were tested under various IPAWS pilot projects, but the development and implementation of the methods were discontinued when the pilots were completed. At present, IPAWS efforts to expand alert dissemination through methods other than standard radio and television broadcast are limited to participation in CMAS, a cellular broadcast text alert initiative. FEMA has accepted the responsibility for collecting and disseminating alerts, but it is unclear, at this point, how CMAS will be integrated with IPAWS. FEMA has missed numerous timelines that it set for IPAWS initiatives. Various projects were originally intended to be completed to form the foundation of IPAWS but have experienced delays and are still not functional. Figure 3 shows some of the IPAWS projects whose original timelines have passed and that have yet to be completed. Aside from implementing an integrated public alert and warning system, FEMA has responsibility for providing training on and testing of alert and warning systems; public education on use of and access to the system; and consultation, coordination, and cooperation with public and private sector stakeholders. According to emergency management and industry stakeholders, FEMA has not sufficiently met this responsibility.
About half of the state survey respondents reported that FEMA had not provided them with a clear understanding of the IPAWS vision or program goals, and 66 percent were somewhat or very dissatisfied with FEMA's level of consultation and coordination. Ultimately, states are generally dissatisfied with FEMA's outreach and have an unclear understanding of what IPAWS actually is. Although survey respondents generally evaluated FEMA as having done little to no outreach, education, and coordination, FEMA has made recent progress in these efforts. FEMA officials have convened federal, industry, and practitioners' "working groups" to discuss the adoption of the CAP Profile, which they plan to expand to include broader discussions about public alert and warning. In interviews, public and private sector stakeholders have expressed frustration with the lack of communication and coordination with IPAWS in the past but have also noted recent improvements. FEMA's efforts to create an integrated and modernized alert and warning system have been affected by (1) shifting program vision, (2) a lack of continuity in planning and program direction, (3) a lack of collection or organization of program information from which to make management decisions, and (4) staff turnover. Shifting program vision. The IPAWS program vision has changed several times, slowing progress toward an integrated system. FEMA originally planned to build an infrastructure to deliver state and local alerts through multiple pathways. However, according to FEMA officials, in the second quarter of calendar year 2007, the vision changed to focus exclusively on dissemination of presidential messages and setting alert and warning technical standards. In early 2009, the vision of the program shifted again to focus on a comprehensive system that included infrastructure for state and local alerts. Figure 4 shows the evolution of the IPAWS vision. Lack of continuity in planning and program direction. FEMA's efforts to create an integrated and modernized alert and warning system have encountered difficulties in program planning and management. As we have reported, effective project planning involves establishing and maintaining plans; defining the project mission, scope, and activities; and determining the overall budget and schedule, key deliverables, and milestones for key deliverables. It also involves ensuring that the project team has the skills and knowledge needed to manage the project and obtaining stakeholder commitment to the project plan. Furthermore, agencies can use performance information to make various types of management decisions to improve programs and results. Although the executive order requires an implementation plan to be updated yearly, from early 2007 through June 2009 the IPAWS effort operated without a designated implementation plan and without specific processes for systems development and deployment. The current implementation plan, completed in June 2009, does not adequately satisfy the project management and planning practices essential for effective program execution. In contrast to the plan from 2006, this plan provides few program details. Additionally, the new plan includes only a vague overview of IPAWS initiatives, few definitive milestones toward reaching program goals, and little clarity about how IPAWS systems will be integrated.
Other planning documentation that exists—consisting mostly of briefing slides outlining IPAWS initiatives or broad, conceptual program requirements—indicates a lack of continuous overall strategic vision, with disparate projects not tied together by a cohesive plan. Lack of collection or organization of program information from which to make management decisions. We found that organized IPAWS program information that officials might use for decisionmaking and establishing project plans is also lacking. Throughout the course of our work, FEMA officials told us that many key IPAWS documents did not exist or were irretrievable. Moreover, a consultant at FEMA who is assessing IPAWS has found that there is no cogent organization system to locate program information, that information exists in multiple locations across FEMA office spaces, and that data searches on program information take an inordinate amount of time and effort. The consultant also found that more robust and realistic documented internal controls are necessary. We requested documentation of FEMA and DHS reporting requirements or performance measures for which the IPAWS program prepared documented progress updates. However, neither FEMA nor DHS regularly reports on IPAWS. FEMA was able to provide a performance information worksheet and spreadsheet, but this documentation provided only vague program parameters, without progress updates on reaching specific goals or milestones. FEMA has taken steps to assess IPAWS and has contracted with a consultant to perform a full program assessment and to implement internal controls and performance measures. However, the absence of accurate periodic reporting on IPAWS leaves valuable program information unavailable. Such information would help increase program transparency, establish greater program accountability, and assure a reasonable assessment of return on financial investments. Additionally, periodic reporting on IPAWS would provide FEMA's private sector partners and those in government at the federal, state, and local level with information necessary to help establish an integrated alert and warning system. Such reporting would also assist the Congress as it oversees issues related to public alert and warning. Staff turnover. Progress toward an integrated alert system has also been slowed by frequent changes in organizational leadership of the IPAWS program office and other staffing-related issues. During our review, IPAWS was operating under an acting director—its third director since the program began—and was searching for a permanent director. According to FEMA, a new director took charge of the program on August 3, 2009. Additionally, according to FEMA officials, high turnover of program staff has made it difficult to consistently manage the IPAWS program. FEMA's heavy use of contract employees has also raised concerns among stakeholders. In one state, emergency management officials participated in an IPAWS project that relied solely on contract staff, without knowing that FEMA was involved. Another state official said that IPAWS is dominated by outside contractors who do not fully understand alert and warning needs. At the program office itself, contract staff predominate: as of June 2009, 27 contractor staff worked in the office, while only 5 of the 11 available noncontract full-time equivalent FEMA positions were filled.
To demonstrate the integration and expansion of new alerting technologies, and to work toward the functionality described in the executive order, FEMA has implemented a series of IPAWS pilot projects, but they have ended inconclusively, with few documented lessons learned. The IPAWS pilots were first introduced in 2004, prior to Executive Order 13407, and focused on testing various alerting systems in different areas of the country with the intent that successfully piloted technologies could eventually be used in a fully integrated "system of systems." At various stages of our work, FEMA provided different accounts of the number and breadth of the pilot projects, as well as inconsistent documentation on the goals, costs, and results of the full IPAWS pilot programs. Specifically, FEMA documents and interviews revealed inconsistent information on the purpose of the pilot programs and how they supported broader IPAWS goals. According to FEMA officials, the pilot projects were intended to be discrete tests of alert capability, with clear goals and specified durations. However, there is a dearth of documentation describing the actual plans and results of the pilots. Although we requested reports documenting the plans, lessons learned, and technological or operational outcomes of the pilot projects, for most of them such formal documentation was never produced. Rather, the documentation FEMA provided on the pilots consists largely of general briefing slides with broad program descriptions. For the most part, FEMA equipment deployed during the pilots was left unused with pilot participants. According to FEMA officials, there is an inventory accounting for the equipment, and the equipment is intended to be repurposed in the future. As a result of the lack of project assessments, reporting, and documentation, it is unclear which aspects of the IPAWS pilot projects, if any, are being used or are planned for future use, or whether the projects informed actions or decisions with respect to IPAWS programming. Initial findings from an IPAWS program assessment performed by a FEMA consultant revealed that, in most cases, key project deliverables for which the government contracted could not be accounted for. FEMA's consultant was unable to locate or verify the status of deliverables for 18 of the 28 projects it identified. The consultant was able to verify only partial completion of 6 other projects, while the status of deliverables for the remaining 4 projects was incomplete, ongoing, not available, or unknown. Responses from our survey of state emergency management directors indicate that most of the 12 states that reported participating in the pilot projects reacted unfavorably when asked about the outcomes and lessons learned from the pilots. Lack of coordination, poor management, incomplete execution, and short project duration were cited, among other things, as lessons learned or outcomes from the pilots. Figure 5 identifies the states that reported participating in IPAWS pilot projects, including select open-ended feedback provided by states. Some states cited positive outcomes and were generally more optimistic about their participation. For example, one state was encouraged by the promise of new alerting technology being pilot tested and said that the pilot technologies proved effective and reliable and should be components of an overall strategy. Another emphasized the need for additional capabilities to alert and warn citizens during emergencies.
FEMA faces coordination issues in successfully implementing IPAWS. While there is broad consensus regarding the need for coordination among diverse stakeholders, many stakeholders we contacted generally lack specific knowledge of IPAWS and would like more opportunities to interact with FEMA on public alert and warning issues. FEMA also faces technical challenges related to integrating state and local systems; adopting standards; and developing geo-targeted, risk-based, and multilingual alerting capabilities, as well as alerting capabilities for people with disabilities. These elements are required aspects of the public alert and warning system and are crucial to IPAWS implementation, yet they remain largely unresolved. To effectively develop and implement IPAWS, FEMA depends on the efforts and expertise of diverse stakeholders, yet stakeholders cited coordination as the primary challenge facing IPAWS implementation. Specifically, FEMA relies on partners such as the DHS Directorate for Science and Technology, NOAA, the Organization for the Advancement of Structured Information Standards, and CTIA - The Wireless Association, an association of wireless telecommunications providers, which is involved in the development of CMAS. In addition, given that the IPAWS vision relies heavily upon state and local investment in CAP-based warning systems, organizations at these levels of government have a range of interests in IPAWS planning efforts. In fact, many respondents to our state survey seek opportunities to contribute to IPAWS planning and consider collaboration among all levels of government to be imperative to the delivery of public alerts and warnings. Executive Order 13407, in laying out FEMA's IPAWS role, recognized that stakeholder involvement is necessary for an effective alert system and required FEMA to "consult, coordinate, and cooperate with the private sector, including communications media organizations, and federal, state, territorial, tribal and local governmental authorities, including emergency response providers." Many stakeholders we contacted from various levels of government and the private sector expressed support for a collaborative, consensus-based forum that could increase the flow of information and best represent stakeholder interests. In March 2007, we recommended that FEMA establish a forum for stakeholders involved with emergency communications to discuss issues related to IPAWS, with representation from federal agencies, state and local governments, private industry, and the affected consumer community. DHS agreed with the intent of this recommendation, noting that FEMA would continue to work with stakeholders through meetings, conferences, and other forums. As recently as May 2008, FEMA said it intended to create a stakeholder subcommittee under an appropriate DHS departmental advisory committee in compliance with the Federal Advisory Committee Act (FACA). Also, FEMA informed us of plans to establish state advisory committees. However, FEMA subsequently told us that neither the federal advisory subcommittee nor the state advisory committees have been established and that there are no current plans to create such groups. While there is broad consensus regarding the need for coordination, FEMA's efforts to date have been insufficient, according to many stakeholders we contacted. For example, broadcaster associations and local government officials were unaware of IPAWS program goals and milestones.
Also, the majority of our state survey respondents received little to no information from FEMA and had little or no communication with the agency. Further, only one survey respondent indicated that FEMA's communication and coordination efforts had provided him with a clear understanding of the IPAWS vision and program goals; the majority of respondents had little or no understanding of the program. Local officials we contacted had little to no communication with FEMA, were generally unaware of the IPAWS program, and, overall, lacked an understanding of the CAP alert standard. FEMA officials acknowledged that they have, thus far, insufficiently engaged state-level stakeholders and have recently taken steps to increase their communication and collaboration efforts. Many of FEMA's recent initiatives are driven by its July 2008–September 2009 IPAWS Stakeholder Engagement Plan, which describes FEMA's strategy for establishing, developing, and maintaining partnerships with alert and warning stakeholders. Among other things, this plan details FEMA's intent to continue its participation in alert and warning and emergency management conferences; to engage relevant congressional committees; to build relationships with FEMA Regions, which can pass information to state and local government officials; and to build relationships with other organizations and media outlets. In addition, FEMA launched an updated Web site in March 2009, which allows users to submit questions regarding the IPAWS program. The Web site lacks detailed program information, however. In figure 6, we display the survey responses of state emergency management directors on the extent to which they believe FEMA officials adequately communicated or coordinated with state and local governments with respect to public alert and warning programs. States largely reported inadequate levels of training, testing, and alert exercises, as well as inadequate public education efforts. The survey results indicate that a set of stakeholders crucial to establishing a public alert and warning system do not believe FEMA is communicating and collaborating adequately. According to our survey results, overall, 31 states were either somewhat dissatisfied or very dissatisfied with the level of consultation, coordination, and communication shown by FEMA with respect to its public alert and warning programs. Many stakeholders we contacted and respondents to our state survey desire greater dialogue with FEMA and want FEMA to better coordinate its efforts with public and private partners. Some of these sentiments were echoed by federal partners, such as NOAA, which noted that coordination could be improved. The DHS Directorate for Science and Technology, which cited its relationship with FEMA as a primary challenge to developing an integrated alert system, told us that FEMA's frequent periods of transition made planning difficult. To improve federal coordination and assert its lead role in federal alert and warning programs, FEMA has established memorandums of agreement and understanding with federal partners and meets regularly with them. The IPAWS stakeholder engagement plan calls for FEMA to host annual alert and warning summits with its federal partners, starting as soon as the first quarter of fiscal year 2009, and FEMA has also formed three working groups to review and validate requirements for the CAP Profile.
FEMA said it plans to eventually expand the scope of these working groups beyond CAP to solicit feedback on the continued development and implementation of IPAWS. Although some of FEMA's communication and coordination efforts appear promising, their effectiveness is not yet clear. For example, one survey respondent was encouraged by his state's inclusion in FEMA's Practitioner's Working Group, yet nonetheless reported a lack of communication and coordination with FEMA. The scope and range of stakeholder involvement in each new effort are limited and, as such, FEMA remains without a mechanism to bring together all interested stakeholders. This raises the possibility that many stakeholders will remain uninformed of FEMA's efforts pertaining to IPAWS and leaves FEMA without a means to thoroughly collaborate on a range of alert and warning issues. FEMA faces an array of technical and other challenges in developing and implementing IPAWS that have not been fully addressed. These challenges include (1) integrating systems; (2) adopting CAP standards; and (3) developing and implementing geo-targeted and risk-based alerting, alerts for individuals with disabilities, and multilingual alerts. Integrating IPAWS with state and local systems. As states and local governments develop and deploy their own alert systems, a key challenge, according to stakeholders, is the integration of IPAWS with these disparate alert systems. In our survey, 42 states responded that they have implemented or plan to implement alternate methods to disseminate emergency alert information outside of EAS, such as email or text alerts. Current and planned capabilities vary widely, however. In addition, 20 state respondents indicated that their states do not plan to wait for information or guidance from the federal government before investing in emergency alert and warning systems. States that are moving ahead without federal guidance and investing in alert methods cited a responsibility to provide their citizens with emergency communications. The prevalence of alternate alerting methods at the state level that lack compatibility with federal investments could lead to increased diversity among systems, making future integration more difficult. To address challenges related to integrating IPAWS with state and local alert systems, FEMA is planning to inventory and evaluate federal, state, local, territorial, and tribal alert and warning capabilities by surveying approximately 3,500 emergency operations centers. This effort, initiated in late 2008, is required by the 2006 executive order and will be carried out over a 3-year period ending in 2012, according to FEMA. Adopting CAP standards. Integration of IPAWS and state and local systems hinges largely on whether these systems use the same alert standards. FEMA intends to adopt CAP as the standard by which information will be transported among alerting systems. Currently, only 10 respondents to our state survey are using CAP, indicating that the prospect for seamless integration of existing systems may be limited. However, 42 states responding to our survey plan to use CAP in future investments in emergency alert and warning equipment, suggesting that the use of CAP will expand. Several survey respondents cited funding as an obstacle to CAP usage and system integration, reporting that further investment would be required to make necessary system upgrades.
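To make the CAP discussion more concrete, the sketch below assembles a bare-bones CAP-style alert in Python. The element names and their order follow the OASIS CAP specification (version 1.2 is used here for illustration), but every value shown, including the identifier, sender address, event, and area, is hypothetical; this sketch does not represent FEMA's CAP Profile or any production IPAWS interface.

```python
# Illustrative sketch: a minimal CAP-style alert built with Python's standard
# library. All values are hypothetical; element names follow OASIS CAP 1.2.
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"

def build_cap_alert(identifier, sender, sent, event, area_desc, language="en-US"):
    """Assemble a bare-bones CAP <alert> document and return it as an XML string."""
    ET.register_namespace("", CAP_NS)  # emit CAP elements without a prefix
    alert = ET.Element(f"{{{CAP_NS}}}alert")
    # Required top-level fields identifying the message and its handling.
    for tag, value in [("identifier", identifier),
                       ("sender", sender),
                       ("sent", sent),
                       ("status", "Exercise"),  # a test message, not a live alert
                       ("msgType", "Alert"),
                       ("scope", "Public")]:
        ET.SubElement(alert, f"{{{CAP_NS}}}{tag}").text = value
    # One <info> block per language; additional blocks could carry translations.
    info = ET.SubElement(alert, f"{{{CAP_NS}}}info")
    for tag, value in [("language", language),
                       ("category", "Met"),
                       ("event", event),
                       ("urgency", "Immediate"),
                       ("severity", "Severe"),
                       ("certainty", "Observed")]:
        ET.SubElement(info, f"{{{CAP_NS}}}{tag}").text = value
    # The <area> element is what makes geo-targeting possible.
    area = ET.SubElement(info, f"{{{CAP_NS}}}area")
    ET.SubElement(area, f"{{{CAP_NS}}}areaDesc").text = area_desc
    return ET.tostring(alert, encoding="unicode")

print(build_cap_alert("example-0001", "alerts@example.gov",
                      "2009-06-01T12:00:00-05:00",
                      "Flash Flood Warning", "Example County"))
```

Because a single CAP alert can carry several <info> blocks and machine-readable <area> elements, a CAP-based system could in principle support the multilingual and geo-targeted alerting discussed elsewhere in this report, capabilities that the current EAS largely lacks.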
Additional challenges to integration exist at the local level, where systems are potentially diverse and face similar financial constraints. For example, one state survey respondent indicated that cities are purchasing notification systems, yet the state has set no standards for such systems. FEMA officials acknowledged that federal grant programs would likely be necessary to support IPAWS deployment and are currently exploring the use of grant programs to make funding available to states and local jurisdictions for the procurement of CAP-compliant equipment. Developing and implementing tailored alerting: geo-targeted and risk-based alerting, alerts for individuals with disabilities, and multilingual alerts. CAP messages can be disseminated with the multiple streams of information necessary to facilitate tailored alerting, and according to FEMA, adoption of the CAP Profile is the first step in developing the flexibility to provide such alerts. However, the current CAP Profile under consideration does not address multiple languages or special needs. Risk-based alerting—that is, the capability to tailor alerts based on a person's threat risk or level of danger—is a requirement of the executive order, yet FEMA does not have plans to address this functionality. FEMA noted that state and local practitioners have developed innovative warning systems and methods for alerting those with disabilities and non-English speakers. To address these challenges, in 2009 and 2010, FEMA intends to plan, engineer, and implement capabilities to extend alerts to these groups. Emergency communications are critical for crisis management and for protecting the public in situations of war, terrorist attack, or natural disaster; yet FEMA has made limited progress in implementing the comprehensive, integrated alert system called for by federal policy. Management turnover, inadequate planning, and a lack of stakeholder coordination have delayed implementation of IPAWS and left the nation dependent on an antiquated, unreliable national alert system. FEMA's delays, and the resulting absence of federal leadership, also appear to have made IPAWS implementation more difficult, as states have forged ahead and invested in their own alert and warning systems. The IPAWS program has been slowed and has suffered setbacks due to a lack of consistent program goals, clear performance measures, and program management information. In the absence of systematic performance measures, project milestones and schedules have been left undefined, and little progress has been made in achieving the objectives of Executive Order 13407, which called for a comprehensive approach to public alert and warning in the United States. For IPAWS to achieve the federal government's public alert and warning goals, it is essential that FEMA define the specific steps necessary to realize a modernized and integrated alert system. While the executive order requires an implementation plan to be updated yearly, separate periodic reporting on progress toward achieving an integrated public alert and warning system would further improve program transparency and accountability. Such reporting would match program goals to their respective timelines and provide government and private sector stakeholders with information necessary to help establish an integrated alert and warning system. EAS, one of the mainstays of public alerting and the only operational aspect of IPAWS, has remained largely unchanged since our previous review in 2007.
Although projects are under way to address longstanding weaknesses, the unreliability of alert distribution and dissemination to the public limits EAS's effectiveness. Specifically, a lack of training and national-level testing raises questions about whether the relay system would actually work during a national-level emergency. Previously, we recommended that FEMA work in conjunction with FCC to develop and implement a plan to verify (1) the dependability and effectiveness of the EAS relay distribution system, which is used to disseminate national-level alerts, and (2) that EAS participants have the training and technical skills to issue effective EAS alerts. Because sufficient action on EAS testing and training has not been taken and IPAWS is years away from full implementation, these recommendations remain applicable to help ensure that EAS is capable of operating as intended. Further, as IPAWS is developed and deployed, it is important that the dependability of those systems be verified and that IPAWS participants be adequately trained. Effectively implementing an integrated alert system will require collaboration among a broad spectrum of stakeholders, including those at the federal, state, and local levels; private industry; and the affected consumer community. Executive Order 13407 requires such collaboration. As states and localities invest in their own alert systems in advance of IPAWS deployment, it is critical that FEMA coordinate with stakeholders to help facilitate the integration of these alert systems. We previously recommended that FEMA establish a forum for the diverse stakeholders involved with emergency communications to discuss emerging and other issues related to the implementation of an integrated public alert and warning system. While FEMA has established stakeholder "working groups," a robust forum of diverse public alert stakeholders does not exist, and further action on stakeholder engagement is necessary. As technology continues to evolve and states implement their own systems, it is all the more important that a permanent collaborative body be put in place to support the development and implementation of IPAWS. To help ensure that the public alert and warning system is properly conceived, designed, and implemented, we recommend that the Secretary of Homeland Security direct the Administrator of FEMA to take the following actions: To improve program management and align IPAWS's vision with the requirements established in the executive order, implement processes for systems development and deployment, including (1) updating IPAWS strategic goals and milestones, implementation plans, and performance measures; (2) prioritizing projects in consultation with stakeholders; and (3) creating the necessary documentation on system design and specific release schedules for IPAWS. To improve program transparency and accountability, report periodically to the Congress and the Secretary of Homeland Security on progress toward achieving an integrated public alert and warning system. The report should include information on ongoing IPAWS projects, financial information on program expenditures, and status updates on achieving performance measures and reaching milestones.
To help ensure system dependability as IPAWS is developed and deployed, establish and implement a plan to verify (1) the dependability and effectiveness of systems used to disseminate alerts and (2) that IPAWS participants have the training and technical skills to make use of IPAWS infrastructure and to issue effective public alerts. We provided a draft of this report to DHS and FCC for their review and comment. In its comments, DHS focused on the report's recommendations and indicated that it agrees with all of our recommendations to improve public alert and warning. DHS provided examples of actions aimed at addressing our recommendations. In particular, FEMA said it is developing an IPAWS Strategic Plan and that existing plans have been modified to align IPAWS with the requirements of Executive Order 13407. FEMA believes IPAWS has documentation and processes for system design and that detailed requirements for specific IPAWS components have been coordinated extensively with federal, industry, and public stakeholders. FEMA noted that it will continue to brief Congress on IPAWS and provide project status updates to DHS on its progress in achieving milestones. FEMA also said that, as IPAWS is developed, it plans to include testing methods to ensure deployed systems are dependable and effective, and to coordinate with FCC on enforcing new testing procedures without placing an undue burden on industry. FEMA further said that it is working with its Emergency Management Institute to develop specific training for stakeholders. Although FEMA cited preexisting IPAWS actions as addressing documentation and processes on system design, our performance audit concluded that such actions do not amount to a specific definition of the steps necessary to realize a modernized and integrated alert system. Additionally, the actions detailed in DHS's response do not fully address the need for documented plans and a schedule for IPAWS's implementation. Regarding previous reporting on IPAWS projects, expenditures, and status in achieving performance measures and reaching milestones, while FEMA noted that it briefs House and Senate appropriations subcommittees, we found during our review that briefing information about IPAWS's overall progress, project status, and program expenditures was vague or nonexistent. Given the important role that state and local governments and private-sector stakeholders have in IPAWS, the intent of our recommendation is that periodic reports on IPAWS be more broadly available so that stakeholders are fully aware of the program's direction and progress. See appendix II for written comments from DHS. In addition to the above comments, DHS and FCC both provided technical comments that we incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of DHS, the Chairman of the FCC, and interested congressional committees. In addition, the report will be available at no charge on our Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.
The objectives of this report are to provide information on issues relating to public alert and warning, the Emergency Alert System (EAS), and the Federal Emergency Management Agency's (FEMA) Integrated Public Alert and Warning System (IPAWS); specifically, (1) the operational capability of the nation's current EAS, (2) the progress made in FEMA's efforts to modernize and integrate alert and warning systems, and (3) the issues and challenges involved in implementing an integrated public alert and warning system. To obtain information related to all three objectives of this report, we conducted a Web-based survey of state emergency management directors. We asked them questions related to public alert and warning capabilities at the state or local level, the current status and integration of EAS with other alert capabilities, the use of the Common Alerting Protocol (CAP), the FEMA IPAWS program, and the level of coordination between FEMA and state officials. The survey was deployed by email to all 50 states and the District of Columbia and was conducted in March and April 2009. Contact information for each state was provided by the National Emergency Management Association (NEMA). For state directors for whom NEMA did not have correct contact information, we obtained it directly from the states' emergency management departments. We obtained responses from 47 survey recipients (92 percent). Despite repeated inquiries, the emergency management directors from California, New Mexico, Tennessee, and Texas did not respond to the survey. The survey was also deployed, in text form, to select local emergency management agency officials whose contact information was provided to us as part of the state survey. The results of the local surveys were not included in the state survey results in this report. Additionally, this report does not contain all of the results from the survey. The survey and a more complete tabulation of the results can be viewed by accessing the following link: http://www.gao.gov/cgi-bin/getrpt?GAO-09-880SP. To obtain information on the operational capability of the current EAS and the progress that has been made in FEMA's efforts to plan and implement a modernized and integrated alert and warning system, we reviewed and analyzed relevant documentation and literature, interviewed public and private sector stakeholders, and collected information from our survey of state emergency management agencies. We examined federal agency documentation, including planning, program status, and financial documents; agency orders and rules; testimony statements; and briefings from FEMA and the National Oceanic and Atmospheric Administration (NOAA). We also reviewed relevant literature on public alert and warning from public and private sector stakeholders, including the Congressional Research Service and various industry consortia. We interviewed federal officials from FEMA, FCC, the Department of Homeland Security (DHS), and NOAA. We also spoke with representatives of state and local emergency management offices, industry stakeholder organizations, and public and private sector alert and warning experts.
Stakeholders we interviewed included the Society of Broadcast Engineers, the Primary Entry Point Advisory Committee, the National Center for Accessible Media, the Association of Public Safety Communications Officers, the Emergency Interoperability Consortium, the EAS-CAP Industry Group, the Association of Public Television Stations, the Telecommunications Industry Association, and CTIA - The Wireless Association. To obtain information on the challenges of implementing FEMA's IPAWS, we interviewed a broad set of government and industry stakeholders, as indicated above, and obtained information through our survey of state emergency management directors. In addition to stakeholders previously mentioned, we conducted interviews with state and local officials and with organizations involved with public alert and warning, such as the International Association of Emergency Managers. We also interviewed officials from states that participated in FEMA's IPAWS pilot programs, as well as state emergency managers. Additionally, we interviewed private sector stakeholders and participants in public alert and warning, including broadcasters, the wireless industry, emergency alert technology companies, and consumer advocacy groups. Information on IPAWS challenges from each state was collected as part of our survey of state emergency management directors and select local emergency management officials. We conducted this review from September 2008 to September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, other key contributors to this report were Sally Moino, Assistant Director; Ryan D'Amore; Simon Galed; Andrew Stavisky; and Mindi Weisenbloom.
However, to date, little progress has been made, and EAS remains largely unchanged since GAO's previous review, completed in March 2007. As a result, EAS does not fulfill the need for a reliable, comprehensive alert system. Initiated in 2004, FEMA's IPAWS program is intended to integrate new and existing alert capabilities, including EAS, into a comprehensive "system of systems." However, national-level alert capabilities have remained unchanged and new technologies have not been adopted. IPAWS efforts have been affected by shifting program goals, lack of continuity in planning, staff turnover, and poorly organized program information from which to make management decisions. The vision of IPAWS has changed twice over the course of the program, and strategic goals and milestones are not clearly defined, as IPAWS operated without an implementation plan from early 2007 through June 2009. Consequently, as state and local governments are forging ahead with their own alert systems, IPAWS program implementation has stalled and many of the functional goals of IPAWS, such as geo-targeting of messages and dissemination through redundant pathways to multiple devices, have yet to reach operational capacity. FEMA conducted a series of pilot projects without systematically assessing outcomes or lessons learned and without substantially advancing alert and warning systems. FEMA does not periodically report on IPAWS progress; therefore, program transparency and accountability are lacking. FEMA faces coordination issues and technical challenges in developing and implementing IPAWS. Effective public warning depends on the cooperation of stakeholders, such as emergency managers and the telecommunications industry, yet many stakeholders GAO contacted knew little about IPAWS and expressed the need for better coordination with FEMA. FEMA has taken steps to improve its coordination efforts, but the scope of stakeholder involvement is limited. FEMA also faces technical challenges related to systems integration, standards development, the development of geo-targeted and multilingual alerts, and alerts for individuals with disabilities. For example, the standard intended to facilitate integration of systems is still under development and is not widely used. As a result of these coordination and technical hurdles, integration with state and local systems will likely be a significant challenge due to potential incompatibility, and FEMA does not yet have logistical plans to integrate these systems.
Within VA Central Office, VHA's Primary Care Services Office develops policies related to the management of primary care—including the recording and reporting of primary care panel size data—and VHA's Primary Care Operations Office is responsible for executing policies related to primary care delivery and monitoring primary care. VHA's Office of Finance develops policies related to the recording and reporting of primary care encounter and expenditure data. Each of VA's 21 networks is responsible for overseeing the facilities within its network, a responsibility that includes overseeing facilities' management of primary care. (See fig. 1.) Based on a review of studies, VA established a baseline panel size of 1,200 patients at any given time for a full-time primary care physician provider. The Primary Care Services Office adjusts the baseline panel size for each facility based on a model VA officials said they developed in 2003 that uses data reported by facilities—including data on the number of FTE providers, support staff, and exam rooms—and projections of the average number of primary care visits. These projections are based on patient characteristics, such as the proportion of patients with chronic conditions. VA refers to the adjusted baseline for each facility as the "modeled panel size," which in fiscal year 2014 ranged from 1,140 to 1,338 across VA's facilities. VA generally updates the modeled panel size annually for each facility. VA's handbook on primary care management requires that facilities record and report primary care data using the Primary Care Management Module (PCMM) software. These data include the number of patients, FTE providers, support staff, and exam rooms, and the reported and modeled panel size. Each facility maintains its own PCMM software and is required to update its panel size data on an ongoing basis in PCMM, which electronically reports facilities' data to a separate national database maintained by the Veterans Support Service Center. This national database allows the Primary Care Operations Office and VA's networks to review the data. An encounter is a professional contact between a patient and a provider who has the primary responsibility for diagnosing, evaluating, and treating the patient's condition. In addition to individual office visits, there are other types of encounters, such as telephone visits and group visits. Each facility identifies and tracks all of its expenditures associated with primary care encounters. Facilities transmit their encounter and expenditure data using the Decision Support System, which is maintained by the Office of Finance. This office is responsible for collecting and maintaining financial information for VA's cost accounting—which identifies and assesses the costs of programs at the national, network, and facility levels—and for budgetary purposes. We found that VA lacks reliable data on primary care panel sizes across its facilities because the data that facilities record and report to VA Central Office and networks are sometimes inaccurate. Because reliable reported panel sizes were not available for all facilities, we calculated actual panel sizes at six of seven selected facilities and compared them to each facility's modeled panel size for fiscal year 2014. We found that actual panel sizes across the six facilities varied from 23 percent below to 11 percent above their respective modeled panel size.
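As a simple illustration of the comparison just described, the sketch below computes a facility's actual panel size and its percentage deviation from the modeled panel size. The facility names and figures are hypothetical, and the calculation shown (assigned patients divided by FTE providers) is a simplification; VA's model also accounts for support staff, exam rooms, and projected visit rates.

```python
# Illustrative sketch with hypothetical data: comparing a facility's actual
# primary care panel size against its modeled panel size.

def actual_panel_size(assigned_patients, fte_providers):
    """Average panel size: assigned patients per full-time-equivalent provider."""
    return assigned_patients / fte_providers

def deviation_from_model(actual, modeled):
    """Percentage by which the actual panel size departs from the modeled size."""
    return 100.0 * (actual - modeled) / modeled

# Hypothetical facilities; modeled sizes fall within VA's fiscal year 2014
# range of roughly 1,140 to 1,338 patients per full-time provider.
facilities = [
    ("Facility A", 24_000, 24.0, 1_300),  # patients, FTE providers, modeled size
    ("Facility B", 26_000, 20.0, 1_300),
    ("Facility C", 21_408, 16.0, 1_205),
]

for name, patients, fte, modeled in facilities:
    actual = actual_panel_size(patients, fte)
    pct = deviation_from_model(actual, modeled)
    print(f"{name}: actual {actual:,.0f} vs. modeled {modeled:,} ({pct:+.1f} percent)")
```

Run on these hypothetical inputs, the sketch reproduces the kind of spread we observed, from roughly 23 percent below the modeled size (Facility A) to about 11 percent above it (Facility C).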
Moreover, we found that VA Central Office and networks do not have effective oversight processes for verifying and using facilities' panel size data to monitor facilities' management of primary care. We found that VA lacks reliable data on primary care panel sizes across its 150 facilities because the data that facilities record in the PCMM software and report to the Primary Care Operations Office and to networks are sometimes inaccurate. Federal internal control standards state that reliable information is needed to determine whether an agency is meeting its goals for accountability for effective and efficient use of resources. However, our review of the reported panel size data for all of VA's facilities for fiscal year 2014 revealed missing values as well as values that appeared to be unreasonably high or low, which raised concerns about these data. Officials from the Primary Care Operations Office, whom we interviewed about the reliability of these data, agreed that inaccuracies exist in the way facilities report data elements in PCMM, such as the number of patients assigned to primary care panels and the number of FTE providers, support staff, and exam rooms. Primary Care Operations Office officials pointed out that because the data are self-reported, facilities can and sometimes do record the data inaccurately or in a manner that does not follow VA's policy on panel management. For example, the officials stated that some facilities may not count support staff and exam rooms as outlined in VA's policy. These officials also stated that PCMM has limitations that may affect the reliability of facilities' reported panel size data. For example, officials explained that the software makes it difficult for facilities to ensure that inactive patients (i.e., those who have not seen their primary care provider within the preceding two years or have died) are removed from providers' panels. We identified similar inaccuracies in our more in-depth review of panel size data reported by the seven selected facilities. Specifically, at three facilities we found inaccuracies in the reported number of FTE primary care providers and the reported number of patients, which affected the facilities' reported or modeled panel sizes. For example, the number of FTE primary care providers reported by one of these facilities was too low because the facility incorrectly recorded each FTE provider as only 90 percent of a FTE. We did not identify inaccuracies in the data reported by the remaining four facilities. (See table 1.) Because some medical facilities' reported panel size data are unreliable, VA Central Office and network officials cannot readily determine each facility's average primary care panel size or compare these panel sizes to each facility's modeled panel size to help ensure that care is being delivered in a timely manner to a reasonable number of patients. Moreover, unreliable data can misinform VA in other ways as well. For example, because VA's model is based on historical data reported by facilities, unreliable data may result in VA's modeled panel size being too high or too low for certain facilities. Also, if facilities are using unreliable data to manage their primary care panels—for example, using the data to assign patients to primary care providers—the facilities may be misinformed about the available capacity on primary care providers' panels, information that is key to determining facilities' staffing and other resource needs.
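Several of the reliability problems described above can be caught with simple screening rules. The sketch below, which uses hypothetical PCMM-style records and plausibility thresholds we chose for illustration, flags missing values, implausibly high or low computed panel sizes, and the kind of FTE miscoding we found, in which each full-time provider was keyed as 0.9 FTE.

```python
# Illustrative sketch: screening self-reported panel data for the kinds of
# problems described above. Records, field names, and thresholds are hypothetical.

PANEL_LOW, PANEL_HIGH = 600, 2000  # plausibility bounds chosen for illustration

records = [
    {"facility": "A", "patients": 13_000, "reported_fte": 10.0, "providers": 10},
    {"facility": "B", "patients": 12_000, "reported_fte": None, "providers": 9},
    {"facility": "C", "patients": 10_800, "reported_fte": 9.0, "providers": 10},
]

def screen(rec):
    """Return reliability flags for one facility's self-reported record."""
    flags = []
    if rec["reported_fte"] is None or rec["patients"] is None:
        return ["missing value"]
    panel = rec["patients"] / rec["reported_fte"]
    if not PANEL_LOW <= panel <= PANEL_HIGH:
        flags.append(f"implausible panel size ({panel:,.0f})")
    # Miscoding check: this assumes a facility staffed entirely by full-time
    # providers, which should not report fewer FTEs than providers. Keying each
    # provider as 0.9 FTE understates FTEs and overstates the computed panel size.
    if rec["reported_fte"] < rec["providers"]:
        corrected = rec["patients"] / rec["providers"]
        flags.append(f"reported FTEs below provider count; corrected panel "
                     f"size would be {corrected:,.0f}, not {panel:,.0f}")
    return flags or ["no flags"]

for rec in records:
    print(rec["facility"], "->", "; ".join(screen(rec)))
```

Automated checks along these lines are one way to catch self-reporting errors before they propagate into panel size estimates.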
Primary Care Operations Office officials told us that they intend to address data reliability issues over time. Specifically, the Primary Care Operations Office is in the process of implementing new software, called web-PCMM, which officials believe will address some concerns about the reliability of the data because the software features controls to help ensure that facilities record and report the data accurately and consistently. For example, web-PCMM will automatically remove inactive patients from providers’ panels. In preparation for the implementation of web-PCMM, Primary Care Operations Office officials said they have been training network and facility staff on the features and capabilities of the new software and instructing facility staff to review and correct their panel size data to help improve data accuracy. The extent to which the new software will actually address the data reliability issues is not yet known because facilities will continue to self-report data. The Primary Care Operations Office started piloting the new software at selected facilities in 2014 and had planned to implement it agency-wide after resolving software interoperability issues identified during the pilot. However, officials said that implementation is currently on hold because of a lack of funding, and the officials could not provide an updated time frame for its system-wide implementation. According to these officials, VA has spent about $8.8 million through July 2015 on the development and implementation of web-PCMM and requires an additional $1.5 million to implement it agency-wide.

Because reliable data on reported panel sizes were not available for all of VA’s facilities at the time of our review, we calculated actual panel sizes at six of the seven selected facilities using updated data from these facilities and correcting for the inaccuracies we found at two facilities. We compared the actual panel size to each facility’s modeled panel size for fiscal year 2014. Although Primary Care Operations Office officials recommend that facilities keep panel sizes 10 to 15 percent below modeled panel sizes to accommodate growth and provider attrition, we found that actual panel sizes ranged from 23 percent below to 11 percent above their respective modeled panel size. This wide variation may indicate that actual panel sizes at some facilities are too low—potentially leading to inefficiency and wasted resources—or too high—potentially leading to veterans experiencing delays in obtaining care, among other negative effects. It may also indicate that VA’s modeled panel sizes are determined incorrectly based on unreliable facility data or do not sufficiently account for patient acuity levels and demand for primary care services. Actual average panel sizes across the six facilities ranged from a low of 1,000 patients per provider to a high of 1,338 patients per provider. (See fig. 2.)

At the three facilities with the highest actual panel sizes of the six we reviewed, officials cited three key factors that contributed to the higher panel sizes.

Growing patient demand: Officials at all three facilities stated that the growing number of patients seeking primary care services at their facilities has required them to assign a larger number of patients to each provider. Officials at one of these facilities stated that not assigning new patients to a panel would result in a greater number of walk-in patients seeking emergency care and a loss of continuity of care.
Staffing shortages: Officials at all three facilities described difficulty recruiting primary care providers, which resulted in a shortage of providers. At one of these facilities, about 40 percent of primary care provider positions were vacant at the time of our review. Officials at all three facilities attributed recruiting difficulties to the rural location of the facilities, their lack of academic affiliation, and the lower pay that VA offers primary care providers compared to nearby private sector medical facilities. In addition, at one of these facilities, officials stated that non-compete clauses limited the facility’s ability to hire providers currently working in the private sector who might otherwise seek employment with VA.

Exam room shortages: Officials at two of the three facilities stated that a lack of available exam room space has limited their ability to hire additional primary care providers—and thereby reduce panel sizes. They stated that the process for acquiring additional space—whether through building additional space or leasing it—is cumbersome and requires extensive preplanning. For example, at one of these facilities, officials stated that expanding the facility’s existing exam room space or opening another CBOC to accommodate growing demand for primary care typically takes 5 to 6 years. The officials told us that while the Veterans Access, Choice, and Accountability Act of 2014 provided facilities with funds to acquire additional space, it did not simplify the process for acquiring space.

Officials at two of the three facilities stated that the higher actual panel sizes have contributed to provider burnout and attrition. At one facility—where actual panel sizes were 11 percent above the modeled panel size—officials stated that the facility has been unable to hire enough providers to make up for attrition. The officials added that providers have expressed concerns to facility leadership that high panel sizes were impeding their ability to provide safe and effective patient care. All three facilities have taken measures to address higher actual panel sizes. For example, to ease staffing shortages, the facilities have contracted with non-VA providers to provide care at VA facilities and have offered evening and weekend clinic hours to fully utilize available exam room space. However, while these measures have helped address capacity shortages at these facilities, they do not fully address the longstanding concerns resulting from higher panel sizes.

In contrast, at the facility where the actual panel size was the lowest of the six we reviewed—23 percent below its modeled panel size—officials said they have made a concerted effort to establish lower panel sizes while increasing the number of primary care providers. Officials stated that they had recently lowered providers’ panel sizes because they believed that the modeled panel size did not sufficiently account for factors affecting patients’ demand for primary care services, such as high acuity levels. These officials noted that they previously followed the modeled panel size but found that it was too high and resulted in primary care provider burnout and poor patient access to primary care providers.
Since VA Central Office and network staff generally do not examine differences across medical facilities VA-wide, it is unclear whether the facility with lower panel sizes for providers was providing primary care services in an inefficient manner or whether VA’s modeled panel size for this facility was too high.

VA Central Office and networks do not have effective oversight processes for verifying and using facilities’ panel size data to monitor facilities’ management of primary care. VA’s panel management policy requires facilities to ensure the reliability of their reported panel size data, but the policy does not assign oversight responsibility to VA Central Office or the networks for verifying the reliability of these data or for using the data for monitoring purposes. Federal internal control standards state that agencies should clearly define key areas of authority and responsibility, ensure that reliable information is available, and assess the quality of performance over time. However, officials from the Primary Care Operations Office told us that—except for a few isolated situations—they do not verify the panel size data recorded in PCMM to systematically identify unreliable data or to monitor panel sizes across all VA medical facilities. For example, these officials told us that in 2014, they conducted reviews of three facilities that were struggling with recording and reporting reliable data in PCMM to identify ways to improve the reliability of the facilities’ reported data. The officials said they have not validated facilities’ reported panel size data or used the data to monitor primary care because the office has a limited number of staff and mainly relies on the networks and facilities to ensure that the data are recorded and reported correctly and that monitoring is conducted.

Across the seven networks that oversee the seven selected facilities for which we conducted a more in-depth analysis, we also identified variations in the extent to which the networks verified facilities’ panel size data and used the data to monitor and address panel sizes that were too high or too low. Specifically:

Data verification: Officials from four of the seven networks told us that they took some steps to verify that facilities’ panel size data were reliable, such as reviewing the data for errors and large variations. For example, officials from one of these networks stated that if they identified large variability in the number of exam rooms—a relatively stable data element over time—it could indicate problems with data reliability, which the network officials would discuss with officials from the facility reporting the data. Officials from another network stated that they compared data reported by facilities to data previously reported by the facilities to identify large variations. Officials from the remaining three networks told us that they did not take any steps to verify that facilities’ reported panel size data were reliable. According to Primary Care Operations Office officials, VA networks can request access to facilities’ PCMM software, which would enable them to verify the data; however, the officials acknowledged that many of VA’s 21 networks are unaware of this capability.

Use of data for monitoring primary care: Officials from six of the seven networks said they discussed reported panel size data during monthly calls with facility officials, at primary care committee meetings, or during facility site visits.
However, officials from only four of these six networks stated that they took steps to address panel sizes that were too high or too low compared to a facility’s respective modeled panel size. For example, officials at one network told us that they helped a facility recruit additional primary care providers to address high panel sizes. In another network, officials said that they were helping a facility secure additional exam room space to address high panel sizes. Officials at a third network told us that they recently had to curtail monitoring activities to address facilities’ panel sizes due to staffing shortages. In contrast, officials from the one network that does not use panel size data to monitor facilities’ management of primary care told us that they rely on the facilities to manage their own primary care panels and do not believe that the network should take an active role in this process. As a result, officials from this network were unaware that a facility within their network had made a concerted effort to establish panel sizes that were well below its modeled panel size.

Absent a robust oversight process that assigns responsibility, as appropriate, to VA Central Office and networks for verifying facilities’ panel size data and using the data to monitor facilities’ management of primary care—such as examining wide variations from modeled panel sizes—VA lacks assurance that facilities’ data are reliable and that facilities are managing primary care panels in a manner that meets VA’s goals of providing efficient, timely, and quality care to veterans. Primary Care Operations Office officials stated that VA Central Office is in the process of revising its policy on primary care panel management and is developing additional guidance to require VA Central Office and VA networks to verify reported panel size data in addition to other monitoring responsibilities. However, as the revised policy and guidance are still under development, it is unknown when they will be implemented and whether they will fully address the issues we identified.

Based on our review of fiscal year 2014 VA-wide primary care expenditure and encounter data, we found that expenditures per primary care encounter varied widely across VA facilities, from a low of $150 to a high of $396, after adjusting to account for geographic differences in labor costs. Expenditures per encounter at 97 of the 140 facilities we reviewed were within $51, or one standard deviation—a statistical measure of variation—of VA’s overall average of $242. According to officials from VHA’s Office of Finance, one standard deviation is typically used to identify potential outliers when examining encounter and expenditure data. For the remaining 43 facilities, our analysis found that expenditures per encounter at 20 facilities were at least one standard deviation above the average and at 23 facilities were at least one standard deviation below, which may indicate potential outliers that VA Central Office and the networks may need to examine further. (See fig. 3.) Among other things, this variation may indicate that primary care is being delivered efficiently at facilities with relatively low expenditures per encounter or inefficiently at facilities with relatively high expenditures per encounter. We also analyzed expenditures per unique primary care patient—that is, a patient with at least one primary care encounter in fiscal year 2014—and found similar variation across VA’s facilities. (See app. I.)
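The one-standard-deviation screen that the Office of Finance described is easy to reproduce. The sketch below applies it to a handful of hypothetical facilities; the dollar figures are invented for illustration and are not the facility-level data we analyzed.

```python
import statistics

# Hypothetical labor-cost-adjusted expenditures per primary care encounter, by facility.
cost_per_encounter = {"F01": 150, "F02": 242, "F03": 396, "F04": 230, "F05": 310, "F06": 188}

mean = statistics.mean(cost_per_encounter.values())
sd = statistics.stdev(cost_per_encounter.values())  # sample standard deviation

# Flag facilities more than one standard deviation from the mean as potential outliers.
outliers = {f: c for f, c in cost_per_encounter.items() if abs(c - mean) > sd}
print(f"mean ${mean:.0f}, standard deviation ${sd:.0f}, potential outliers: {outliers}")
```

On these invented figures, the screen flags the lowest- and highest-cost facilities (F01 and F03) for further examination, which is the same logic that identified the 43 outlier facilities in our analysis.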
This variation remained when we examined expenditures per encounter and per unique patient for facilities within the same complexity group. Of the seven selected facilities, one was among the least expensive facilities across all VA facilities and another was among the most expensive, in terms of expenditures per primary care encounter. An official from the facility that was among the least expensive of the seven we reviewed, with expenditures per encounter of $158, identified increased use of secure messaging and telephone primary care as the primary factors contributing to its lower expenditures per encounter. Officials from the network that oversees the facility that was among the most expensive of the seven we reviewed, with expenditures per encounter of $330, identified the high cost of living in the area—which resulted in higher leasing and labor costs—as the primary factor contributing to its higher than average cost per encounter. However, our analysis largely accounted for the higher cost of living in that expenditure data provided by VA were adjusted to account for geographic differences in labor costs, which made up 71 percent of this facility’s costs in fiscal year 2014. The officials also explained that part of the reason for the high expenditures per encounter was that the facility was not appropriately accounting for telephone-based primary care services it provided for the entire network. As a result, primary care encounters and expenditures for the selected facility included encounters and expenditures for telephone primary care services for other facilities within the network. According to network officials, steps are being taken to ensure that the facility allocates these expenditures appropriately going forward.

While VA Central Office and networks verify and use facilities’ encounter and expenditure data for financial purposes, VA’s policies governing primary care do not require VA Central Office and networks to use these data to monitor facilities’ management of primary care. Federal internal control standards state that agencies need both operational and financial data to determine whether they are meeting strategic goals and should use such data to assess the quality of performance over time. We found that the Office of Finance in VA Central Office independently verifies facilities’ encounter and expenditure data to help ensure their reliability and uses the data for cost accounting and budgetary purposes. Similarly, chief financial officers or their designees at six of the seven networks that oversee the facilities we reviewed routinely examine encounter and expenditure data to identify outliers for the purposes of ensuring data reliability and for cost accounting. However, the Primary Care Operations Office in VA Central Office does not use encounter and expenditure data, even though officials stated that examining such data would likely help them monitor facilities’ management of primary care. Furthermore, primary care officials at the seven networks we examined generally do not use these data to monitor facilities’ management of primary care. Some officials told us that they do not use encounter and expenditure data for monitoring primary care delivery because they consider panel sizes the most effective means of measuring efficiency within primary care.

By not using encounter and expenditure data to monitor facilities’ management of primary care, VA may be missing opportunities to identify facilities—such as those that experience higher than average expenditures per encounter or significant changes in expenditures over time—that may warrant further examination and to strengthen the efficiency and effectiveness of the primary care program. Using panel size data in conjunction with encounter and expenditure data would allow VA Central Office and networks to assess facilities’ capacity to provide primary care services and the efficiency of care delivery.
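The labor-cost adjustment noted above normalizes away geographic wage differences before facilities are compared. VA’s exact method is not detailed in this report, so the sketch below shows one common approach under stated assumptions: deflate the labor share of expenditures by a local wage index and leave the nonlabor share unchanged. The labor share and wage index values are illustrative inputs, not VA data.

```python
def adjust_for_labor_costs(expenditure, labor_share, wage_index):
    """Deflate the labor portion of an expenditure by a local wage index.

    One common normalization, not necessarily VA's method; a wage_index of
    1.0 represents the national average wage level.
    """
    labor = expenditure * labor_share
    nonlabor = expenditure * (1 - labor_share)
    return labor / wage_index + nonlabor

# A high-cost-of-living facility: 71 percent of costs are labor, and local
# wages are assumed to run 25 percent above the national average.
print(round(adjust_for_labor_costs(330, 0.71, 1.25), 2))  # 283.14
```

Under these assumptions, the $330-per-encounter facility would be compared to its peers at an adjusted $283, which is why our analysis treated high local wages as largely accounted for.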
The absence of reliable panel size data and oversight processes could significantly inhibit VA’s ability to ensure that facilities are providing veterans with timely, quality care and delivering that care efficiently. While VA planned to address some of the data reliability issues through new software to help VA facilities record data more accurately, development of this software is currently on hold, and VA could not provide any estimates of when the software would be implemented at its facilities. Even if this software is implemented, VA Central Office and networks will still be relying on self-reported data on primary care panel sizes from its facilities. By not having in place a process to verify the reliability of facilities’ panel size data or to monitor wide variations between facilities’ reported and modeled panel sizes, VA will likely continue to receive unreliable data and miss opportunities to assess the impact of panel sizes on veterans’ access to care. VA Central Office and the networks are also missing opportunities to use readily available encounter and expenditure data to potentially improve the efficiency of primary care service delivery. Consistent with federal internal control standards, using such data in conjunction with reliable panel size data could be a potent tool in “right-sizing” panels to best serve veterans’ needs and deliver primary care efficiently.

We recommend that the Secretary of the Department of Veterans Affairs direct the Under Secretary for Health to take the following two actions to improve the reliability of VA’s primary care panel size data and improve VA Central Office’s and the networks’ oversight of facilities’ management of primary care:

Incorporate in policy an oversight process for primary care panel management that assigns responsibility, as appropriate, to VA Central Office and networks for (1) verifying each facility’s reported panel size data currently in PCMM and in web-PCMM, if the software is rolled out nationally, including such data as the number of primary care patients, providers, support staff, and exam rooms; and (2) monitoring facilities’ reported panel sizes in relation to the modeled panel size and assisting facilities in taking steps to address situations where reported panel sizes vary widely from modeled panel sizes.

Review and document how to use encounter and expenditure data in conjunction with panel size data to strengthen monitoring of facilities’ management of primary care.

VA provided written comments on a draft of this report, which we have reprinted in appendix II. In its comments, VA agreed with our conclusions, concurred with our two recommendations, and described the agency’s plans to implement our recommendations. VA also provided technical clarifications and comments on the draft report, including the recommendations contained in the draft report. We incorporated these comments, as appropriate.
In particular, we modified our first recommendation in the draft report and now recommend that VA verify each facility’s panel size data in PCMM and, if the latter is available, in web-PCMM. We made this change to reflect the continued uncertainty over the implementation of the web-PCMM software. In addition, we modified our second recommendation in the draft report and no longer recommend that VA incorporate into existing VA policy a requirement that the agency and its networks use encounter and expenditure data to strengthen the monitoring of facilities’ management of primary care. We made this change to reflect that VA officials were not prepared to incorporate such a requirement without first examining how to use these data for monitoring purposes.

To address our first recommendation, VA stated that it plans to issue guidance by September 2016 clarifying VA Central Office’s and the networks’ oversight responsibilities with regard to primary care panel size data. This guidance will include a process—developed by the Offices of Primary Care Services and Primary Care Operations—for addressing medical facilities whose panel sizes differ significantly from similar facilities’ panels. In its response, however, VA did not provide information on how it plans to address the unreliable panel size data that facilities record and report in PCMM. We would encourage VA, in the guidance it plans to issue in 2016, to assign responsibility for verifying each facility’s reported panel size data as we recommended. To address our second recommendation, VA stated that it will take steps to understand encounter and expenditure data and determine how best to utilize these data to improve patient care, with a target date of September 2018 for presenting its findings and decisions.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 14 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Secretary of Veterans Affairs. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

We analyzed Department of Veterans Affairs (VA) fiscal year 2014 data on primary care expenditures and calculated expenditures per unique primary care patient. We found that expenditures per unique primary care patient varied widely across facilities in fiscal year 2014, ranging from $558 to $1,544 after adjusting to account for geographic differences in labor costs across facilities. We found that the expenditures per unique patient at 102 of the 140 facilities we reviewed were within $167, or one standard deviation—a statistical measure of variation—of VA’s overall average of $871. For the remaining facilities, expenditures per unique patient were at least one standard deviation above the average (19 facilities) or at least one standard deviation below the average (19 facilities), which may indicate potential outliers that VA Central Office and the networks may need to examine further. (See fig. 4.)
In addition to the contact named above, Rashmi Agarwal, Assistant Director; James Musselwhite, Assistant Director; Kathryn Black; Krister Friday; Cathleen Hamann; Aaron Holling; Emily Wilson; and Michael Zose made key contributions to this report.

Department of Veterans Affairs: Expanded Access to Non-VA Care Through the Veterans Choice Program. GAO-15-229R. Washington, D.C.: Nov 19, 2014.

VA Health Care: Actions Needed to Ensure Adequate and Qualified Nurse Staffing. GAO-15-61. Washington, D.C.: Oct 16, 2014.

VA Health Care: Ongoing and Past Work Identified Access, Oversight, and Data Problems That Hinder Veterans’ Ability to Obtain Timely Outpatient Medical Care. GAO-14-679T. Washington, D.C.: Jun 9, 2014.

VA Health Care: VA Lacks Accurate Information about Outpatient Medical Appointment Wait Times, Including Specialty Care Consults. GAO-14-620T. Washington, D.C.: May 15, 2014.

VA Health Care: Ongoing and Past Work Identified Access Problems That May Delay Needed Medical Care for Veterans. GAO-14-509T. Washington, D.C.: Apr 9, 2014.
Moreover, GAO found that while VA's primary care panel management policy requires facilities to ensure the reliability of their panel size data, it does not assign responsibility to VA Central Office or networks for verifying the reliability of facilities' data or require them to use the data for monitoring purposes. Federal internal control standards call for agencies to clearly define key areas of authority and responsibility, ensure that reliable information is available, and use this information to assess the quality of performance over time. Because VA's panel management policy is inconsistent with federal internal control standards, VA lacks assurance that its facilities' data are reliable and that the facilities are managing primary care panels in a manner that meets VA's goals of providing efficient, timely, and quality care to veterans.

In contrast to VA's panel data, GAO found that primary care encounter and expenditure data reported by all VA medical facilities are reliable, although the data show wide variations across facilities. For example, in fiscal year 2014, expenditures per primary care encounter—that is, a professional contact between a patient and a primary care provider—ranged from a low of $150 to a high of $396 after adjusting to account for geographic differences in labor costs across facilities. Such wide variations may indicate that services are being delivered inefficiently at facilities with relatively high per-encounter costs compared to other facilities. However, while VA verifies and uses these data for financial purposes, VA's policies governing primary care do not require the use of the data to monitor facilities' management of primary care. Federal internal control standards state that agencies need both operational and financial data to determine whether they are meeting strategic goals and should use such data to assess the quality of performance over time. Using panel size data in conjunction with encounter and expenditure data would allow VA to assess facilities' capacity to provide primary care services and the efficiency of their care delivery. By not using available encounter and expenditure data in this manner, VA is missing an opportunity to potentially improve the efficiency of primary care service delivery.

GAO recommends that VA verify facilities' panel size data, monitor and address panel sizes that are too high or too low, and review and document how to use encounter and expenditure data to help monitor facilities' management of primary care. VA agreed with GAO's recommendations and described its plans to implement them.
FHA was established in 1934 under the National Housing Act (P.L. 73-479) to broaden homeownership, shore up and protect lending institutions, and stimulate employment in the building industry. FHA insures private lenders against losses on mortgages that finance purchases of properties with one to four housing units. Many FHA-insured loans are made to low-income, minority, and first-time homebuyers. Generally, lenders require borrowers to purchase mortgage insurance when the value of the mortgage is large relative to the price of the house. FHA provides most of its single-family insurance through a program supported by the Mutual Mortgage Insurance Fund. The economic value of the Fund, which consists of the sum of existing capital resources plus the net present value of future cash flows, depends on the relative size of cash outflows and inflows over time. Cash flows out of the Fund from payments associated with claims on foreclosed properties, refunds of up-front premiums on mortgages that are prepaid, and administrative expenses for management of the program. To cover these outflows, FHA deposits cash inflows—up-front and annual insurance premiums from participating homebuyers and the net proceeds from the sale of foreclosed properties— into the Fund. If the Fund were to be exhausted, the U.S. Treasury would have to cover lenders’ claims and administrative costs directly. The Fund remained relatively healthy from its inception until the 1980s, when losses were substantial, primarily because of high foreclosure rates in regions experiencing economic stress, particularly the oil-producing states in the West South Central section of the United States. These losses prompted the reforms that were first enacted in November 1990 as part of the Omnibus Budget Reconciliation Act of 1990 (P.L. 101-508). The reforms, designed to place the Fund on an actuarially sound basis, required the Secretary of HUD to, among other things, take steps to ensure that the Fund attained a capital ratio of 2 percent of the insurance-in-force by November 2000 and to maintain or exceed that ratio at all times thereafter. As a result of the 1990 housing reforms, the Fund must meet not only the minimum capital ratio requirement but also operational goals before the Secretary of HUD can take certain actions that might reduce the value of the Fund. These operational goals include meeting the mortgage credit needs of certain homebuyers while maintaining an adequate capital ratio, minimizing risk, and avoiding adverse selection. However, the legislation does not define what constitutes adequate capital or specify the economic conditions that the Fund should withstand. The 1990 reforms also required that an independent contractor conduct an annual actuarial review of the Fund. These reviews have shown that during the 1990s the estimated value of the Fund grew substantially. At the end of fiscal year 1995, the Fund attained an estimated economic value that slightly exceeded the amount required for a 2 percent capital ratio. Since that time, the estimated economic value of the Fund continued to grow and always exceeded the amount required for a 2 percent capital ratio. In the most recent actuarial review, Deloitte & Touche estimated the Fund’s economic value at about $18.5 billion at the end of fiscal year 2001. This represents about 3.75 percent of the Fund’s insurance-in-force. In February 2001 we reported that the Fund had an economic value of $15.8 billion at the end of fiscal year 1999. 
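The capital ratio arithmetic used throughout this discussion is simple: the Fund’s economic value divided by its insurance-in-force. The short sketch below works through the fiscal year 2001 figures cited above; the insurance-in-force shown is the value implied by those two numbers, not a separately reported figure.

```python
# Capital ratio = economic value / insurance-in-force.
economic_value = 18.5e9   # Deloitte & Touche estimate, end of fiscal year 2001
capital_ratio = 0.0375    # about 3.75 percent of insurance-in-force

# Insurance-in-force implied by the two figures above.
insurance_in_force = economic_value / capital_ratio
print(f"implied insurance-in-force: ${insurance_in_force / 1e9:.0f} billion")

# Economic value needed to meet the 2 percent statutory minimum at that volume.
print(f"value required for a 2 percent ratio: ${0.02 * insurance_in_force / 1e9:.1f} billion")
```

By this arithmetic, the cited figures imply roughly $493 billion of insurance-in-force, and an economic value of about $9.9 billion would have satisfied the 2 percent requirement at that volume.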
That $15.8 billion estimate implied a capital ratio of 3.20 percent of the unamortized insurance-in-force. The relatively large economic value and high capital ratio reported for the Fund reflected the strong economic conditions that prevailed during most of the 1990s, the good economic performance that was expected for the future, and the increased insurance premiums put in place in 1990.

In our February 2001 report we also reported that, given the economic value of the Fund and the state of the economy at the end of fiscal year 1999, a 2 percent capital ratio appeared sufficient to withstand moderately severe economic scenarios that could lead to worse-than-expected loan performance. These scenarios were based upon recent regional experiences and the national recession that occurred in 1981 and 1982. Specifically, we found that such conditions would not cause the economic value of the Fund at the end of fiscal year 1999 to decline by more than 2 percent of the Fund’s insurance-in-force. Although a 2 percent capital ratio also appeared sufficient to allow the Fund to withstand some more severe scenarios, we found that three of the most severe scenarios we tested would cause the economic value of the Fund to decline by more than 2 percent of the Fund’s insurance-in-force. These results suggest that the existing capital ratio was more than sufficient to protect the Fund from many worse-than-expected loan performance scenarios. However, we cautioned that factors not fully captured in our economic models could affect the Fund’s ability to withstand worse-than-expected experiences over time. These factors include recent changes in FHA’s insurance program and the conventional mortgage market that could affect the likelihood of poor loan performance and the ability of the Fund to withstand that performance.

In deciding whether to approve a loan, lenders rely upon underwriting standards set by FHA or the private sector. FHA’s underwriting guidelines require lenders to establish that prospective borrowers have the ability and willingness to repay a mortgage. To establish a borrower’s willingness and ability to pay, these guidelines require lenders to evaluate four major elements: qualifying ratios and compensating factors; stability and adequacy of income; credit history; and funds to close. In recent years, private mortgage insurers and conventional lenders have begun to offer alternatives to borrowers who want to make small or no down payments. Private lenders have also begun to use automated underwriting as a means to better target low-risk borrowers for conventional mortgages. Automated underwriting relies on the statistical analysis of hundreds of thousands of mortgage loans that have been originated over the past decade to determine the key attributes of the borrower’s credit history, the property characteristics, and the terms of the mortgage note that affect loan performance. The results of this analysis are arrayed numerically in what is known as a “mortgage score.” A mortgage score is used as an indicator of the foreclosure or loss risk to the lender.
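A mortgage score, in other words, is a statistical summary of default risk. The toy scorecard below illustrates the idea; the inputs and weights are invented for illustration only—real systems estimate their weights from hundreds of thousands of historical loan outcomes and are proprietary.

```python
import math

def mortgage_score(credit_history_score, ltv, dti):
    """Toy scorecard: fold borrower and loan attributes into one number.

    Higher scores indicate lower estimated foreclosure risk. The weights
    below are invented; real systems estimate them from historical loans.
    """
    z = 0.02 * credit_history_score - 5.0 * ltv - 4.0 * dti - 6.0
    probability_no_default = 1.0 / (1.0 + math.exp(-z))
    return round(1000 * probability_no_default)

print(mortgage_score(credit_history_score=720, ltv=0.80, dti=0.36))  # stronger profile: 951
print(mortgage_score(credit_history_score=580, ltv=0.97, dti=0.43))  # weaker profile: 275
```

The point of the sketch is only directional: a borrower with a stronger credit history, more equity, and less debt receives a higher score, allowing lenders to sort applications by estimated risk.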
During their early years, FHA loans insured from fiscal year 1995 through fiscal year 1998 have shown somewhat higher cumulative foreclosure rates than FHA loans insured from fiscal year 1990 through fiscal year 1994, but these rates are well below comparable rates for FHA loans insured in the 1980s. To better understand how foreclosure rates might vary, we compared the rates for different types of loans—fixed-rate and adjustable-rate mortgages (ARMs)—locations of properties, and loan-to-value (LTV) ratios. For loans made in recent years, FHA has been experiencing particularly high foreclosure rates for ARMs and for mortgages on properties located in California. One measure of the initial risk of a loan, its LTV, can partly explain the difference over time in foreclosure rates. That is, FHA insured relatively more loans with high LTVs later in the decade than it insured earlier in the decade. However, the same pattern of higher foreclosure rates in the later 1990s exists even after differences in LTV are taken into account.

We compared the 4-year cumulative foreclosure rates across books of business to measure the performance of FHA’s insured loans. As shown in figure 1, the 4-year cumulative foreclosure rate for FHA-insured loans was generally higher for loans originated later in the 1990s than for loans originated earlier in that decade. Through their fourth year, loans originated during fiscal years 1990 through 1994 had an average cumulative foreclosure rate of 2.23 percent, while loans originated during fiscal years 1995 through 1998 had an average cumulative foreclosure rate of 2.93 percent. Although the 4-year cumulative foreclosure rates for loans that FHA insured in the later part of the 1990s were higher than the rate for loans that FHA insured earlier in that decade, those rates were still well below the high levels experienced for loans that FHA insured in the early to mid-1980s, as shown in figure 2. The 4-year cumulative foreclosure rates for FHA loans originated between 1981 and 1985, a period of high interest and unemployment rates and low house price appreciation rates, ranged between 5 and 10 percent, while the rates for loans originated during the 1990s, when economic conditions were better, have consistently been below 3.5 percent.

Since fiscal year 1993, FHA has experienced higher 4-year cumulative foreclosure rates for ARMs than it has for long-term (generally 30-year) fixed-rate mortgages, as shown in figure 3. In addition, the 4-year cumulative foreclosure rate for ARMs originated between 1990 and 1994 averaged 2.53 percent, compared with a 3.90 percent average for ARMs originated between 1995 and 1998. These higher foreclosure rates have occurred even though mortgage interest rates have been generally stable or declining during this period. In the early 1990s, when ARMs were performing better than fixed-rate mortgages, the performance of ARMs had relatively little impact on the overall performance of loans FHA insured because FHA insured relatively few ARMs. However, as shown in figure 4, later in the decade ARMs represented a greater share of the loans that FHA insured, so their performance became a more important factor affecting the overall performance of FHA loans. FHA is studying its ARM program and has contracted with a private consulting firm to examine the program’s design and performance.
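A 4-year cumulative foreclosure rate is computed per book of business: the share of loans originated in a given fiscal year that were foreclosed within 4 years of origination. The sketch below shows the calculation over a handful of hypothetical loan records; real books contain hundreds of thousands of loans, so the rates printed here are artifacts of the tiny sample and illustrate only the mechanics.

```python
from collections import defaultdict

# Hypothetical loan records: origination fiscal year and, if the loan was
# foreclosed, the number of years from origination to foreclosure.
loans = [
    {"fy": 1994, "years_to_foreclosure": None},
    {"fy": 1994, "years_to_foreclosure": 3},
    {"fy": 1996, "years_to_foreclosure": 2},
    {"fy": 1996, "years_to_foreclosure": None},
    {"fy": 1996, "years_to_foreclosure": 6},  # falls outside the 4-year window
]

originated = defaultdict(int)
foreclosed_within_4 = defaultdict(int)
for loan in loans:
    originated[loan["fy"]] += 1
    years = loan["years_to_foreclosure"]
    if years is not None and years <= 4:
        foreclosed_within_4[loan["fy"]] += 1

for fy in sorted(originated):
    rate = foreclosed_within_4[fy] / originated[fy]
    print(f"FY{fy} book of business: 4-year cumulative foreclosure rate {rate:.2%}")
```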
FHA insured a greater dollar value of loans in the 1990s in California than in any other state. Among the states in which FHA does the largest share of its business, 4-year cumulative foreclosure rates for both long-term fixed-rate mortgages and ARMs were typically highest in California. California, which accounted for 15 percent of the dollar value of all single-family loans that FHA insured during the 1990s, had an average foreclosure rate of 6.41 percent across both fixed-rate mortgages and ARMs. In comparison, the 4-year cumulative foreclosure rate for FHA loans insured during the 1990s outside of California averaged 1.97 percent. According to FHA, the poor performance of FHA loans originated in California was attributable to poor economic conditions that existed during the early to mid-1990s, coupled with the practice of combining FHA’s interest-rate buy-down program with an ARM to qualify borrowers in California’s high-priced housing market.

The five states with the greatest dollar value of long-term fixed-rate mortgages insured by FHA during the 1990s were California, Texas, Florida, New York, and Illinois. Loans insured in these states made up about one-third of FHA’s business for this loan type from fiscal year 1990 through fiscal year 1998, with California alone accounting for about 13 percent, as shown in figure 5. As a result, the performance of loans insured in California can significantly affect the overall performance of FHA’s portfolio of loans of this type. For long-term fixed-rate mortgages that FHA insured in California from fiscal year 1990 through fiscal year 1998, the 4-year cumulative foreclosure rates averaged about 5.6 percent. As shown in figure 6, Florida, Texas, and New York also had relatively high 4-year foreclosure rates during the early 1990s, and Florida experienced relatively high 4-year cumulative foreclosure rates again from 1995 through 1998. For states that were not among the five states with the greatest share of fixed-rate mortgages, the 4-year cumulative foreclosure rates for the same type of loan over the same period averaged less than 2 percent.

The four states with the highest dollar value of ARMs insured by FHA during the 1990s were California, Illinois, Maryland, and Colorado. Loans insured in these states made up about 42 percent of FHA’s business for this loan type, with California alone accounting for about 21 percent, as shown in figure 7. As a result, the performance of ARMs insured in California can significantly affect the overall performance of FHA’s portfolio of loans of this type. As shown in figure 8, the 4-year cumulative foreclosure rates for ARMs that FHA insured in California were consistently higher than the rates for any of the other three states with the largest dollar volume of ARMs insured by FHA, as well as the average rate for the remaining 46 states and the District of Columbia combined. In fact, for ARMs that FHA insured in California in fiscal years 1995 and 1996, the 4-year cumulative foreclosure rate was about 10 percent, more than twice as high as the rate for any of the other three states with the highest dollar volume of loans or for the remaining 46 states and the District of Columbia combined.

Although differences in the share of FHA-insured loans with high LTVs (above 95 percent) may be a factor accounting for part of the difference in cumulative foreclosure rates between more recent loans and loans insured earlier in the 1990s, the same pattern exists even when differences in LTV are taken into account. As shown in figure 9, the share of FHA-insured loans with LTVs of 95 percent or more was higher later in the 1990s. Generally, as shown in figure 10, higher LTV ratios, which measure borrowers’ initial equity in their homes, are associated with higher foreclosure rates. However, figure 10 also shows that the same general pattern over time for the 4-year cumulative foreclosure rates that was shown in figure 1 continues to exist even when the loans are divided into categories by LTV. Thus, differences in LTV alone cannot account for the observed differences in foreclosure rates.
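The LTV stratification is straightforward to reproduce. The sketch below buckets a handful of hypothetical loans at the 95 percent LTV threshold used above and compares their 4-year foreclosure shares; the records are invented, so the printed rates only illustrate the direction of the association.

```python
# Hypothetical loans: (LTV at origination, foreclosed within 4 years?)
loans = [
    (0.90, False), (0.93, False), (0.96, True), (0.97, False),
    (0.98, True), (0.94, False), (0.99, True), (0.92, True),
]

buckets = {"LTV of 95 percent or less": [], "LTV above 95 percent": []}
for ltv, foreclosed in loans:
    key = "LTV above 95 percent" if ltv > 0.95 else "LTV of 95 percent or less"
    buckets[key].append(foreclosed)

for name, flags in buckets.items():
    print(f"{name}: {sum(flags) / len(flags):.0%} foreclosed within 4 years")
```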
Finally, we also considered whether the differences in foreclosure rates could be explained by differences in prepayment rates. Higher prepayment rates might be associated with lower foreclosure rates: if a higher percentage of loans in a book of business are prepaid, then only a smaller share of the original book of business might be subject to foreclosure. However, we found that during the 1990s, prepayment rates showed the same pattern across the years as foreclosure rates and, if anything, were generally higher when foreclosure rates were higher, suggesting that less frequent prepayment was not a factor explaining higher foreclosure rates in the late 1990s.

Although economic factors such as house price appreciation rates are key determinants of mortgage foreclosure, a number of program- and market-related changes occurring since 1995 could also affect the performance of recently insured FHA loans. Specifically, in 1995 FHA made a number of changes in its single-family insurance program that allow borrowers who otherwise might not have qualified for home loans to obtain FHA-insured loans. These changes also allow qualified borrowers to increase the amount of loan for which they can qualify. According to HUD, these underwriting changes were designed to expand homeownership opportunities by eliminating unnecessary barriers to potential homebuyers. The proportion of FHA purchase mortgages made to first-time homebuyers increased from 65 percent in 1994 to 78 percent at the end of March 2002, and the proportion of FHA purchase mortgages made to minority homebuyers increased from 25 percent to 42 percent. At the same time, there has been increased competition from private mortgage insurers offering mortgages with low down payments to borrowers identified as relatively low risk. The combination of changes in FHA’s program and the increased competition in the marketplace may partly explain the higher foreclosure rates of FHA loans originated since fiscal year 1995. FHA has since made changes that may reduce the likelihood of mortgage default, including requiring that, when qualifying an FHA borrower for an ARM, the lender use the ARM’s second-year mortgage rate rather than the first-year rate. In addition, FHA has implemented a new loss-mitigation program. Because certain data that FHA collects on individual loans have not been collected for a sufficient number of years or in sufficient detail, we were unable to estimate the effect of changes in FHA’s program and competition from conventional lenders on FHA loan performance.

FHA issued revised underwriting guidelines in fiscal year 1995 that, according to HUD, represented significant underwriting changes that would enhance the homebuying opportunities for a substantial number of American families. These underwriting changes made it easier for borrowers to qualify for loans and allowed borrowers to qualify for higher loan amounts. However, the changes may also have increased the likelihood of foreclosure. The loans approved with more liberal underwriting standards might, over time, perform worse relative to existing economic conditions than those approved with the previous standards.
The revised standards decreased what is included as borrowers’ debts and expanded the definition of what can be included as borrowers’ effective income when lenders calculate qualifying ratios. In addition, the new underwriting standards expanded the list of compensating factors that could be considered in qualifying a borrower, and they relaxed the standards for evaluating a borrower’s credit history.

The underwriting changes that FHA implemented in 1995 can decrease the amount of debt that lenders consider in calculating one of the qualifying ratios, the debt-to-income ratio, which is a measure of the borrower’s ability to pay debt obligations. This change results in some borrowers having a lower debt-to-income ratio than they would otherwise have, and it increases the mortgage amount for which these borrowers can qualify. For example, childcare expenses were considered a recurring monthly debt in the debt-to-income ratio prior to 1995, but FHA no longer requires that these expenses be considered when calculating the debt-to-income ratio. Another change affecting the debt-to-income ratio is that only debts extending 10 months or more are now included in the ratio; previously, FHA required all debts extending 6 months or more to be included. As a result of this change, borrowers can have short-term debts that might affect their ability to meet their mortgage payments, but these debts would not be included in the debt-to-income ratio. However, FHA does encourage lenders to consider all of a borrower’s obligations and the borrower’s ability to make mortgage payments immediately following closing.

The 1995 changes not only decreased the amount of debt considered in the debt-to-income ratio; they also increased the amount of income considered—increasing the number of borrowers considered able to meet a particular level of mortgage payments. When calculating a borrower’s effective income, lenders consider the anticipated amount of income and the likelihood of its continuance. Certain types of income that were previously considered too unstable to be counted toward effective income are now acceptable in qualifying a borrower. For example, FHA previously required income to be expected to continue for 5 years in order for it to be considered as effective income. Now income expected to continue for 3 years can be used in qualifying a borrower. Similarly, FHA now counts income from overtime and bonuses toward effective income, as long as this income is expected to continue. Before 1995, FHA required that such income be earned for 2 years before counting it toward effective income.

If borrowers do not meet the qualifying ratio guidelines for a loan of a given size, lenders may still approve them for an FHA-insured mortgage of that size. FHA’s 1995 revised handbook on underwriting standards adds several possible compensating factors or circumstances that lenders may consider when determining whether a borrower is capable of handling the mortgage debt. For example, lenders may consider food stamps or other public benefits that a borrower receives as a compensating factor increasing the borrower’s ability to pay the mortgage. These types of benefits are not included as effective income, but FHA believes that receiving food stamps or other public benefits positively affects the borrower’s ability to pay the mortgage. Lenders may also consider as a compensating factor a borrower’s demonstrated history of being able to pay housing expenses equal to or greater than the proposed housing expense. In FHA’s revised handbook, the section on compensating factors now states, “If the borrower over the past 12 to 24 months has met his or her housing obligation as well as other debts, there should be little reason to doubt the borrower’s ability to continue to do so despite having ratios in excess of those prescribed.”
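The effect of the debt-counting changes on the debt-to-income ratio can be illustrated with a hypothetical borrower. In the sketch below, the income, housing payment, and debts are invented; the point is only that excluding childcare and debts with fewer than 10 months remaining can noticeably lower the computed ratio.

```python
def debt_to_income(housing_payment, other_monthly_debts, effective_income):
    """Total fixed payment (housing plus recurring debts) as a share of monthly income."""
    return (housing_payment + sum(other_monthly_debts.values())) / effective_income

income = 4_000          # hypothetical monthly effective income
housing_payment = 900   # hypothetical proposed monthly mortgage payment

# Pre-1995 treatment: childcare counted as a recurring debt, and installment
# debts extending 6 or more months were included.
pre_1995_debts = {"car loan, 8 months remaining": 300, "childcare": 400}
# Post-1995 treatment: childcare is excluded, and only debts extending
# 10 or more months count, so the 8-month car loan also drops out.
post_1995_debts = {}

print(f"pre-1995 debt-to-income ratio:  {debt_to_income(housing_payment, pre_1995_debts, income):.1%}")
print(f"post-1995 debt-to-income ratio: {debt_to_income(housing_payment, post_1995_debts, income):.1%}")
```

For this invented borrower, the ratio falls from 40.0 percent to 22.5 percent under the revised rules, which is how the changes let some borrowers qualify for loans, or for larger loans, that the earlier standards would have ruled out.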
In addition to changes affecting borrowers’ qualifying ratios, the 1995 underwriting changes affected how FHA lenders are supposed to evaluate credit history to determine a borrower’s willingness and ability to handle a mortgage. As with qualifying ratios and compensating factors, FHA relies on the lender’s judgment and interpretation to determine prospective borrowers’ creditworthiness. The 1995 underwriting changes affected FHA guidelines regarding unpaid federal liens as well as credit and credit reports. Specifically, before 1995, borrowers were ineligible for an FHA-insured mortgage if they were delinquent on any federal debt or had any federal liens, including taxes, placed on their property. Following the 1995 changes, borrowers may qualify for a loan even if federal tax liens remain unpaid. FHA guidelines stipulate that a borrower may be eligible as long as the lien holder subordinates the tax lien to the FHA-insured mortgage. If the borrower is in a payment plan to repay liens, lenders may also approve the mortgage if the borrower meets the qualifying ratios calculated with these payments.

Finally, FHA expanded the options available to lenders to evaluate a borrower’s credit history. The previous guidance on developing credit histories mentioned only rent and utilities as nontraditional sources of credit history. Lenders can now elect to use a nontraditional mortgage credit report developed by a credit reporting agency if no other credit history exists. Lenders may also develop a credit history by considering a borrower’s payment history for rental housing and utilities, insurance, childcare, school tuition, payments on credit accounts with local stores, or uninsured medical bills. In general, FHA advises lenders that an individual with no late housing or installment debt payments should be considered as having an acceptable credit history.

Increased competition and recent changes in the conventional mortgage market could also have resulted in FHA’s insuring relatively more loans that carry greater risk. Homebuyers’ demand for FHA-insured loans depends, in part, on the alternatives available to them. In recent years, FHA’s competitors in the mortgage insurance market—private mortgage insurers and conventional mortgage lenders—have increasingly offered products that compete with FHA’s for those homebuyers who are borrowing more than 95 percent of the value of their home. In addition, automated underwriting systems and credit-scoring analytic software, such as the systems introduced by the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac) in 1996, are believed to more effectively distinguish low-risk loans for expedited processing. The improvement in conventional lenders’ ability to identify low-risk borrowers might increase the risk profile of FHA’s portfolio as lower-risk borrowers choose conventional financing with private mortgage insurance, which is often less expensive. In addition, by lowering the required down payment, conventional mortgage lenders and private mortgage insurers may have attracted some borrowers who might otherwise have insured their mortgages with FHA.
If, by selectively offering these low down payment loans to better-risk borrowers, conventional mortgage lenders and private mortgage insurers were able to attract FHA’s lower-risk borrowers, recent FHA loans with down payments of less than 5 percent may be more risky on average than they have been historically. FHA is taking some action to compete more effectively with the conventional market. For example, FHA is attempting to implement an automated underwriting system that could enhance the ability of lenders underwriting FHA-insured mortgages to distinguish better credit risks from poorer ones. Although this effort is likely to increase the speed with which lenders process FHA-insured loans, it may not improve the risk profile of FHA borrowers unless lenders can lower the price of insurance for better credit risks.

Since 1996, FHA has revised and tightened some guidelines, specifically in underwriting ARMs, identifying sources of cash reserves, and requiring more documentation from lenders. These steps should reduce the riskiness of loans that FHA insures. In a 1997 letter to lenders, FHA expressed concern about the quality of the underwriting of ARMs, particularly when a buy down is used, and reminded lenders that the first-year mortgage interest rate must be used when qualifying the borrower (rather than the lower rate after the buy down). FHA also stipulated that lenders should consider a borrower’s ability to absorb increased payments after buy down periods. FHA also emphasized that lenders should rarely exceed FHA’s qualifying ratio guidelines in the case of ARMs. In 1998, seeing that borrowers were still experiencing trouble handling increased payments after the buy down period, FHA required borrowers to be qualified at the anticipated second-year interest rate, or the interest rate they would experience after the buy down expired, and it prohibited any form of temporary interest-rate buy down on ARMs. These changes will likely reduce the riskiness of ARMs in future books of business.

FHA has also required stricter documentation from lenders on the use of compensating factors and gift letters in mortgage approvals. In a June 10, 1997, letter to lenders, FHA expressed concern about an increased number of loans with qualifying ratios above FHA’s guidelines for which the lender gave no indication of the compensating factors used to justify approval of the loans. FHA emphasized in this letter that lenders are required to clearly indicate which compensating factor justified the approval of a mortgage and to provide their rationale for approving mortgages above the qualifying ratios. Similarly, in an effort to ensure that any gift funds a borrower has come from a legitimate source, FHA has advised lenders of the specific information that gift letters should contain and the precise process for verifying the donor or source of the gift funds.

In 2000, FHA also tightened its guidelines on what types of assets can be considered as cash reserves. Although cash reserves are not required, lenders use cash reserves to assess the riskiness of loans. FHA noticed that in some cases lenders considered questionable assets as cash reserves. For example, lenders were overvaluing assets or including assets such as 401(k)s or IRAs that were not easily converted into cash. As a result, FHA strengthened its policy and required lenders to judge the liquidity of a borrower’s assets when considering a borrower’s cash reserves. The new policy requires lenders, when considering an asset’s value, to account for any applicable taxes or withdrawal penalties that borrowers may incur in converting the asset to cash.
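The 1998 change to qualify ARM borrowers at the second-year rate has a simple arithmetic rationale: each step up in rate raises the monthly payment the borrower must be able to carry. The sketch below uses the standard amortization formula with hypothetical loan terms; the rates and loan amount are invented for illustration.

```python
def monthly_payment(principal, annual_rate, years=30):
    """Standard fixed-payment amortization formula."""
    r = annual_rate / 12
    n = years * 12
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

loan = 120_000  # hypothetical loan amount

# Hypothetical 2-1 buy down on an ARM: 5.5 percent in year one, 6.5 percent
# in year two, reaching a 7.5 percent note rate thereafter.
for label, rate in [("first-year (bought-down) rate", 0.055),
                    ("second-year rate", 0.065),
                    ("note rate", 0.075)]:
    print(f"{label}: ${monthly_payment(loan, rate):,.2f} per month")
```

In this example, qualifying at the second-year rate requires the borrower to carry roughly $77 more per month than qualifying at the bought-down first-year rate, which is the payment increase after the buy down that the 1998 change was meant to anticipate.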
The new policy requires lenders, when considering an asset’s value, to account for any applicable taxes or withdrawal penalties that borrowers may incur in converting the asset to cash. In 1996 Congress passed legislation directing FHA to terminate its Single-Family Mortgage Assignment Program. FHA ceased accepting assignment applications for this program on April 26, 1996. The same legislation authorized FHA to implement a new program that included a range of loss mitigation tools designed to help borrowers either retain their homes or dispose of their property in ways that lessen the cost of foreclosure for both the borrowers and FHA. Specifically, the loss mitigation program provides a number of options for reducing losses, including special forbearance, loan modification, partial claim, pre-foreclosure sale, and deed-in-lieu-of-foreclosure (see table 1 for an explanation of these options). To encourage lenders to engage in loss mitigation, FHA offers incentive payments to lenders for completing each loss mitigation workout. In addition, lenders face a variety of financial penalties for failing to engage in loss mitigation. FHA’s loss mitigation program went into effect on November 12, 1996; however, use was initially fairly low, with only 6,764 loss mitigation cases realized in fiscal year 1997, as lenders began to implement the new approach. HUD experienced substantial growth in loss mitigation claims over the next 4 fiscal years, with total claims reaching 25,027 in fiscal year 1999 and 53,389 in fiscal year 2001. The three loss mitigation tools designed to allow borrowers to remain in their homes (special forbearance, loan modification, and partial claim) realized the largest increase in use. In contrast, the use of deed-in-lieu-of-foreclosure and pre-foreclosure sale, options resulting in insurance claims against the Fund, declined. Existing FHA data are not adequate to assess the impact of both FHA program changes and the changes in the conventional mortgage market on FHA default rates. Adequately assessing the impact of those changes would require detailed data on the information used during loan underwriting to qualify individual borrowers. Such data on qualifying ratios, use of compensating factors, credit scores, and sources and amount of income would allow FHA to assess how factors key to determining the quality of its underwriting have changed over time. In addition, these data could be used in a more comprehensive analysis of the relationship among FHA foreclosures, FHA program design, the housing market, and economic conditions. Some of the data required for that type of assessment and analysis are not collected by FHA, while other data elements have not been collected for a sufficient number of years to permit modeling the impact of underwriting changes on loan performance. Since 1993, FHA has collected data on items such as payment-to-income and debt-to-income ratios, monthly effective income, and total monthly debt payments. However, FHA has not collected more detailed information on individual components of income and debt, such as overtime, bonus income, alimony and childcare payments, or length of terms for installment debt. Nor does FHA collect information on the use by lenders of compensating factors in qualifying borrowers for FHA insurance. These data would be required, for example, to analyze the impact on loan performance of underwriting changes that FHA implemented in 1995. 
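Because the payment-to-income and debt-to-income ratios come up repeatedly in this discussion, a minimal sketch of how lenders compute them may be helpful. The threshold values shown are illustrative placeholders, not figures drawn from FHA’s handbook, and the function is an assumption for illustration rather than actual FHA guidance.

```python
def qualifying_ratios(housing_payment, other_monthly_debt, effective_income):
    """Compute the two ratios lenders use to qualify a borrower.

    housing_payment: monthly principal, interest, taxes, and insurance.
    other_monthly_debt: recurring obligations, including payments under
        any lien repayment plan of the kind described above.
    effective_income: gross monthly effective income.
    """
    payment_to_income = housing_payment / effective_income
    debt_to_income = (housing_payment + other_monthly_debt) / effective_income
    return payment_to_income, debt_to_income

# Illustrative borrower: $1,200 housing payment, $450 other monthly
# debt, $4,200 monthly effective income.
pti, dti = qualifying_ratios(1200, 450, 4200)
print(f"payment-to-income: {pti:.1%}, debt-to-income: {dti:.1%}")

# 0.29 and 0.41 are placeholder guideline values, not FHA's actual
# prescribed limits; a lender exceeding them would have to document
# the compensating factors that justified approval.
exceeds = pti > 0.29 or dti > 0.41
print("compensating factors required" if exceeds else "within guidelines")
```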
One of the most important measures of a borrower’s credit risk is the borrower’s credit score. Lenders began using credit scores to assess a borrower’s likelihood of default in the mid-1990s. In March 1998, FHA approved Freddie Mac’s automated underwriting system for use by lenders in making FHA-insured loans and began collecting data on borrower credit scores for those loans underwritten using the system. Similarly, in August 1999 FHA approved the use of Fannie Mae’s and PMI Mortgage Servicers’ automated underwriting systems, and it currently collects credit scores on loans underwritten using these systems. According to HUD officials, FHA plans to begin collecting credit score data on all FHA-insured loans underwritten through either automated underwriting systems or conventional methods. Finally, because of the newness of FHA’s loss mitigation program and the several years required for a loan delinquency to be completely resolved, it is difficult to measure the impact that loss mitigation activities will ultimately have on the performance of FHA loans. As recently as 2000, substantial revisions were made to the program that, according to Abt Associates Inc., could improve its effectiveness. A recent audit of the program by HUD’s Office of Inspector General noted the large increase in usage of loss mitigation strategies and concluded that the program is reducing foreclosures and keeping families in their homes. The overall riskiness of FHA loans made in recent years appears to be greater than we had estimated in our February 2001 report on the Mutual Mortgage Insurance Fund, reducing to some extent the ability of the Fund to withstand worse-than-expected loan performance. Although more years of loan performance are necessary to make a definitive judgment, factors not accounted for in the models that we used for that report appear to be affecting the performance of loans insured after 1995 and causing the overall riskiness of FHA’s portfolio to be greater than we previously estimated. In that report we based our estimate of the economic value of the Fund (as of the end of fiscal year 1999), in part, on econometric models that we developed and used to forecast future foreclosures and prepayments for FHA-insured loans based on the historical experience of loans dating back to 1975. However, a large share of the loans in FHA’s portfolio at that time were originated in fiscal years 1998 and 1999, and therefore there was little direct evidence of how those loans would perform. As a result, at the time that we released that estimate we cautioned that recent changes in FHA’s insurance program and the conventional mortgage market, such as those discussed in the previous section, could be causing recent loans to perform differently, even under the same economic conditions, from earlier loans. To estimate the potential impact of these changes, we first used our previous model to develop estimates of the relationship between, on the one hand, the probability of foreclosure and prepayment and, on the other hand, key explanatory factors such as borrower equity and unemployment for loans insured between fiscal years 1975 and 1995. On the basis of these estimates and of the actual values beyond 1995 for key economic variables, such as interest and unemployment rates and the rate of house price appreciation, we forecasted the performance (both foreclosures and prepayments) of loans that FHA insured from fiscal year 1996 through fiscal year 2001. 
We then compared those forecasts with the actual experience of those loans. (See app. II for a full discussion of our methodology.) As is shown in figure 11, for each year’s book of business, we found that cumulative foreclosure rates through the end of fiscal year 2001 exceeded our forecasted levels. For example, for the book of business with the longest experience, loans insured in 1996, we forecasted that the cumulative foreclosure rate through the end of fiscal year 2001 would be 3.44 percent, but the actual foreclosure rate was 5.81 percent. These results suggest that some factors other than those accounted for in the model may be causing loans insured after 1995 to perform worse than would be expected based on the historical experience of older loans. The fact that cumulative foreclosures for recent FHA-insured loans have been greater than what would be anticipated from a model based on the performance of loans insured from fiscal year 1975 through fiscal year 1995 suggests that the caution we expressed in our 2001 report about the effect of recent changes in FHA’s insurance program and the conventional mortgage market on the ability of the Fund to withstand future economic downturns is still warranted. In particular, the performance of loans insured in fiscal years 1998 and 1999, which represented about one-third of FHA’s loan portfolio at the end of 1999, could be worse than what we previously forecasted. In turn, worse performance by these loans could reduce the economic value of the Fund and its ability to withstand future economic downturns. To assess the extent of this effect, we would need to know the extent to which the performance of loans insured in fiscal years 1998 and 1999 has been and will be worse than what we forecasted in developing our previous estimate of the economic value of the Fund. Because loans insured in fiscal years 1998 and 1999 have not completely passed through the peak years for foreclosures, these loans’ foreclosures to date provide only a limited indication of their long-term performance. We do, however, have a better indication of the long-term performance of loans insured in fiscal years 1996 and 1997 because they are older loans with more years of experience. The experience of these loans suggests that changes that are not accounted for in our models are causing these books of business to have higher foreclosure rates than would be anticipated from a model based on the performance of earlier loans. If loans insured in fiscal years 1998 and 1999 are affected by changes that are not accounted for in our models in the same way that loans insured in fiscal years 1996 and 1997 appear to be affected, then the 1998 and 1999 loans will continue to have higher cumulative foreclosure rates than we estimated. Higher foreclosure rates, in turn, imply a lower economic value of the Fund, which is generally estimated as a baseline value under an expected set of economic conditions. With a lower baseline economic value of the Fund under expected economic conditions, the Fund would be less able to withstand adverse economic conditions. Better understanding the reasons for the increased risk of recently originated FHA loans would require additional data on factors that might explain loan performance—including qualifying ratios and credit scores. 
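The size of the underprediction described above can be expressed with simple arithmetic. The sketch below uses only the 1996 book’s figures cited earlier; the rates for the other books appear in figure 11 and are not reproduced here.

```python
# Cumulative foreclosure rates through fiscal year 2001 for the 1996
# book of business, in percent, as cited in the text.
forecasted, actual = 3.44, 5.81

print(f"shortfall: {actual - forecasted:.2f} percentage points")
print(f"actual is {actual / forecasted:.2f} times the forecasted rate")
```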
Even if these historical data were available today, it is too soon to estimate with confidence the impact that recent changes will ultimately have on recently insured loans because many of these loans have not yet reached the peak years when foreclosures usually occur. Recently insured loans represent the majority of FHA’s portfolio. The impact of underwriting changes and changes in the conventional mortgage market on the riskiness of the portfolio is not fully understood. Understanding this risk will give a better basis for determining whether the Fund has an adequate capital ratio, and also whether program changes are in order to adjust that level of risk. We obtained written comments on a draft of this report from HUD officials. The written comments are presented in appendix IV. Generally HUD agreed with the report’s findings that the underwriting changes made in 1995 likely increased the riskiness of FHA loans insured after that year. HUD commented that fiscal year 1995 was the first year in which FHA exceeded the 2 percent capital ratio mandated by the National Affordable Housing Act of 1990. According to HUD, by making the 1995 underwriting changes FHA modestly increased the risk characteristics of FHA loans and, by doing so, allowed FHA to achieve its mission of increasing homeownership opportunities for underserved groups. HUD also provided information, which has been incorporated into the final report as appropriate, on the change in homeownership rates among underserved groups since 1994. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies of this report to the Ranking Minority Member of the House Subcommittee on Housing and Community Opportunity and other interested members of Congress and congressional committees. We will also send copies to the HUD Secretary and make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me or Mathew J. Scire at (202) 512-6794, or Jay Cherlow at (202) 512-4918, if you or your staff have any questions concerning this report. Key contributors to this report were Jill Johnson, DuEwa Kamara, Mitch Rachlis, Mark Stover, and Pat Valentine. We initiated this review to determine (1) how the early performance of FHA loans originated in recent years has differed from loans originated in earlier years; (2) how changes in FHA’s program and the conventional mortgage market might explain recent loan performance; and (3) if there is evidence that factors affecting the performance of recent FHA loans may be causing the overall riskiness of FHA’s portfolio to be greater than what we previously estimated, and if so what effect this might have on the ability of the Fund to withstand future economic downturns. To address these objectives, we obtained and analyzed data on loans insured by FHA from 1990 through 1998 by year of origination; by loan type (fixed interest rates versus adjustable interest rates); by loan-to-value ratio; and by location of the property, for selected states that held the greatest share of FHA-insured loans. We compared the foreclosure rates for the first 4 years of these loans. 
We selected a 4-year cumulative foreclosure rate as a basis for comparing books of business because it best balanced the competing goals of having the greatest number of observations and the greatest number of years of foreclosure experience. We also interviewed HUD officials and reviewed HUD mortgagee letters, trade literature, and publicly available information on the conventional mortgage market. Finally, using the model that we developed for our prior report and basing it on the experience of FHA loans insured from fiscal years 1975 through 1995, we also compared the estimated and actual foreclosure rates through 2001 of loans insured from fiscal years 1996 through 2001. We worked closely with HUD officials and discussed the interpretation of HUD’s data. Although we did not independently verify the accuracy of the data, we did perform internal checks to determine (1) the extent to which the data fields were coded; and (2) the reasonableness of the values contained in the data fields. We checked the mean, median, mode, skewness, and high and low values for each of the variables used. We conducted our review in Washington, D.C., between July 2001 and June 2002 in accordance with generally accepted government auditing standards. For an earlier report, we built econometric and cash flow models to estimate the economic value of FHA’s Mutual Mortgage Insurance Fund (Fund) as of the end of fiscal year 1999. In that report, we acknowledged that factors not fully captured in our models could affect the future performance of loans in FHA’s portfolio and, therefore, the ability of the Fund to withstand worse-than-expected economic conditions. In particular, we suggested that these factors could include changes in FHA’s insurance program and the conventional mortgage market. For our current report we sought to assess whether there is evidence that factors not captured in our previous model may be causing the overall riskiness of FHA’s portfolio to be greater than we previously estimated and, if so, whether that would have a substantial effect on the ability of the Fund to withstand future economic downturns. In this appendix, we describe how we conducted that assessment. Our basic approach was to (1) reestimate the econometric models built for our previous report using the same specifications as before and data on loans insured by FHA in all 50 states and the District of Columbia, but excluding U.S. territories, from 1975 through 1995 (in the previous report, we used data on loans originated through 1999); (2) use the estimated coefficients and actual values of our explanatory variables during the forecasted period to forecast foreclosures and prepayments through fiscal year 2001 for loans insured from fiscal year 1996 through fiscal year 2001; and (3) compare the forecasted and actual foreclosures and prepayments for these loans during that time. A finding that our foreclosure model fit the data well for loans insured from 1975 through 1995, but consistently underestimated foreclosure rates for post-1995 loans, would suggest that there had been a structural change in the post-1995 period not captured in our models that might cause the future performance of FHA-insured loans to be worse than we estimated for our previous report. Our econometric models used observations on loan years—that is, information on the characteristics and status of an insured loan during each year of its life—to estimate conditional foreclosure and prepayment probabilities. 
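The loan-year structure just described can be sketched briefly: each insured loan contributes one record for every year it remains active, with 0/1 outcome indicators that equal 1 only in the year the loan forecloses or prepays. The field names and sample records below are illustrative assumptions, not FHA’s actual file layout.

```python
# Expand loan-level records into the loan-year observations used to
# estimate conditional foreclosure and prepayment probabilities.
loans = [
    {"id": 1, "origination_fy": 1990, "termination_fy": 1994, "outcome": "foreclosure"},
    {"id": 2, "origination_fy": 1992, "termination_fy": 1995, "outcome": "prepayment"},
]

loan_years = []
for loan in loans:
    for fy in range(loan["origination_fy"] + 1, loan["termination_fy"] + 1):
        terminal = fy == loan["termination_fy"]
        loan_years.append({
            "id": loan["id"],
            "fiscal_year": fy,
            "policy_year": fy - loan["origination_fy"],
            # The dependent variables: 1 only in the year the event
            # occurs, conditional on the loan surviving to that year.
            "foreclosed": int(terminal and loan["outcome"] == "foreclosure"),
            "prepaid": int(terminal and loan["outcome"] == "prepayment"),
        })

# Loan 1 yields policy years 1-4; loan 2 yields policy years 1-3.
print(len(loan_years), "loan-year observations from", len(loans), "loans")
```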
These probabilities were estimated using observed patterns of prepayments and foreclosures in a large set of FHA-insured loans. More specifically, our models used logistic equations to estimate the logarithm of the odds ratio, from which the probability of a loan’s foreclosure (or prepayment) in a given year could be calculated. These equations were expressed as a function of interest and unemployment rates, the borrower’s equity (computed using a house’s price and current and contract interest rates as well as a loan’s duration), the loan-to-value (LTV) ratio, the loan’s size, the geographic location of the house, and the number of years that the loan had been active. The results of the logistic regressions were used to estimate the probabilities of a loan being foreclosed or prepaid in each year. We prepared separate estimates for fixed-rate mortgages, adjustable rate mortgages (ARMs), and investor loans. The fixed-rate mortgages with terms of 25 years or more (long-term loans) were divided between those that were refinanced and those that were purchase money mortgages (mortgages associated with home purchase). Separate estimates were prepared for each group of long-term loans. Similarly, investor loans were divided between refinanced mortgages and purchase money mortgages, and we prepared separate estimates for each group. A separate analysis was also prepared for loans with terms that were less than 25 years (short-term loans). A complete description of our models, the data that we used, and the results that we obtained is presented in detail in the following sections. In particular, this appendix describes (1) the sample data that we used; (2) our model specification and the independent variables in the regression models; and (3) the model results. For our analysis, we selected from FHA’s computerized files a 10 percent sample of records of mortgages insured by FHA from fiscal years 1975 through 1995 (1,046,916 loans). From the FHA records, we obtained information on the initial characteristics of each loan, such as the year of the loan’s origination and the state in which the loan originated; LTV ratio; loan amount; and contract interest rates. To describe macroeconomic conditions at the national and state levels, we obtained data at the national level on quarterly interest rates for 30-year fixed-rate mortgages on existing housing, and at the state level on annual civilian unemployment rates from DRI-WEFA. We also used state-level data from DRI-WEFA on median house prices to compute house price appreciation rates by state. To adjust nominal loan amounts for inflation, we used data from the 2000 Economic Report of the President on the implicit price deflator for personal consumption expenditures. People buy houses for consumption and investment purposes. Normally, people do not plan to default on loans. However, conditions that lead to defaults do occur. Defaults may be triggered by a number of events, including unemployment, divorce, or death. These events are not likely to trigger defaults if the owner has positive equity in his or her home because the sale of the home with realization of a profit is preferable to the loss of the home through foreclosure. However, if the property is worth less than the mortgage, these events may trigger defaults. Prepayments of home mortgages can also occur. 
These may be triggered by events such as declining interest rates, which prompt refinancing, and rising house prices, which prompt homeowners to take out accumulated equity or sell the residence. Because FHA mortgages are assumable, the sale of a residence does not automatically trigger prepayment. For example, if interest rates have risen substantially since the time that the mortgage was originated, a new purchaser may prefer to assume the seller’s mortgage. We hypothesized that foreclosure behavior is influenced by, among other things, the (1) level of unemployment, (2) size of the loan, (3) value of the home, (4) current interest rates, (5) contract interest rates, (6) home equity, and (7) region of the country within which the home is located. We hypothesized that prepayment behavior is influenced by, among other things, the (1) difference between the interest rate specified in the mortgage contract and the mortgage rates generally prevailing in each subsequent year, (2) amount of accumulated equity, (3) size of the loan, and (4) region of the country in which the home is located. Our first regression model estimated conditional mortgage foreclosure probabilities as a function of a variety of explanatory variables. In this regression, the dependent variable is a 0/1 indicator of whether a given loan was foreclosed in a given year. Each loan-year observation was weighted by the outstanding mortgage balance, expressed in inflation-adjusted dollars. Our foreclosure rates were conditional; that is, they measured the probability that a loan would be foreclosed in a given year, given that it had survived to that year. We estimated conditional foreclosures in a logistic regression equation. Logistic regression is commonly used when the variable to be estimated is the probability that an event, such as a loan’s foreclosure, will occur. We regressed the dependent variable (whose value is 1 if foreclosure occurs and 0 otherwise) on the explanatory variables previously listed. Our second regression model estimated conditional prepayment probabilities. The independent variables included a measure that is based on the relationship between the current mortgage interest rate and the contract rate, the primary determinant of a mortgage’s refinance activity. We further separated this variable between ratios above and below 1 to allow for the possibility of different marginal impacts in higher and lower ranges. The variables that we used to predict foreclosures and prepayments fall into two general categories: descriptions of states of the economy and characteristics of the loan. In choosing explanatory variables, we relied on the results of our own and others' previous efforts to model foreclosure and prepayment probabilities, and on implications drawn from economic principles. We allowed for many of the same variables to affect both foreclosure and prepayment. The single most important determinant of a loan's foreclosure is the borrower's equity in the property, which changes over time because (1) payments reduce the amount owed on the mortgage and (2) property values can increase or decrease. Equity is a measure of the current value of a property compared with the current value of the mortgage on that property. Previous research strongly indicates that borrowers with small amounts of equity, or even negative equity, are more likely than other borrowers to default. We computed the percentage of equity as 1 minus the ratio of the present value of the loan balance, evaluated at the current mortgage interest rate, to the current estimated house price. 
For example, if the current estimated house price is $100,000, and the value of the mortgage at the current interest rate is $80,000, then equity is 0.2 (20 percent), or 1 - (80/100). To measure current equity, we calculated the value of the mortgage as the present value of the remaining mortgage, evaluated at the current year’s fixed-rate mortgage interest rate. We calculated the current value of a property by multiplying the value of that property at the time of the loan's origination by the change in the state’s median nominal house price, adjusted for quality changes, between the year of origination and the current year. Because the effects on foreclosure of small changes in equity may differ depending on whether the level of equity is large or small, we used a pair of equity variables, LAGEQHIGH and LAGEQLOW, in our foreclosure regression. The effect of equity is lagged 1 year, as we are predicting the time of foreclosure, which usually occurs many months after a loan first defaults. We anticipated that higher levels of equity would be associated with an increased likelihood of prepayment. Borrowers with substantial equity in their homes may be more interested in prepaying their existing mortgages, and may take out larger ones to obtain cash for other purposes. Borrowers with little or no equity may be less likely to prepay because they may have to take money from other savings to pay off their loans and cover transaction costs. For the prepayment regression, we used a variable that measures book equity—the estimated property value less the amortized balance of the loan—instead of market equity. It is book value, not market value, that the borrower must pay to retire the debt. Additionally, the important effect of interest rate changes on prepayment is captured by two other equity variables, RELEQHI and RELEQLO, which are sensitive to the difference between a loan’s contract rate and the interest rate on 30-year mortgages available in the current year. These variables are described below. We included in our regressions an additional set of variables related to equity, based on the initial LTV ratio. We entered LTV as a series of dummy variables, depending on its size. Loans fit into eight discrete LTV categories. In some years, FHA measured LTV with the loan amount less the financed mortgage insurance premium in the numerator of the ratio and the appraised value plus closing costs in the denominator. To reflect true economic LTV, we adjusted FHA's measure by removing closing costs from the denominator and including financed premiums in the numerator. A borrower's initial equity can be expressed as a function of LTV, so we anticipated that if LTV was an important predictor in an equation that also includes a variable measuring current equity, it would probably be positively related to the probability of foreclosure. One reason for including LTV is that it measures initial equity accurately. Our measures of current equity are less accurate because we do not have data on the actual rate of change in the mortgage loan balance or the actual rate of house price change for a specific house. Loans with higher LTVs are more likely to foreclose. We used the lowest LTV category as the omitted category. We expected LTV to have a positive sign in the foreclosure equations at higher levels of LTV. LTV in our foreclosure equations may capture the effects of income constraints. We were unable to include borrowers’ income or payment-to-income ratio directly because data on borrowers’ income were not available. 
However, it seems likely that borrowers with little or no down payment (high LTV) are more likely to be financially stretched in meeting their payments and, therefore, more likely to default. The anticipated relationship between LTV and the probability of prepayment is uncertain. For two equations—long-term refinanced loans and investor-refinanced loans—we used down payment information directly, rather than the series of LTV variables. We defined down payment to ensure that closing costs were included in the loan amount and excluded from the house price. We used the annual unemployment rates for each state for the period from fiscal years 1975 through 1995 to measure the relative condition of the economy in the state where a loan was made. We anticipated that foreclosures would be higher in years and states with higher unemployment rates, and that prepayments would be lower because property sales slow down during recessions. The actual variable we used in our regressions, LAGUNEMP, is defined as the logarithm of the preceding year's unemployment rate in that state. We included the logarithm of the interest rate on the mortgage as an explanatory variable in the foreclosure equation. We expected a higher interest rate to be associated with a higher probability of foreclosure because higher interest rates cause higher monthly payments. However, in explaining the likelihood of prepayment, our model uses information on the level of current mortgage rates relative to the contract rate on the borrower’s mortgage. A borrower’s incentive to prepay is high when the interest rate on a loan is greater than the rate at which money can currently be borrowed, and it diminishes as current interest rates increase. In our prepayment regression we defined two variables, RELEQHI and RELEQLO. RELEQHI is defined as the ratio of the market value of the mortgage to the book value of the mortgage, but is never smaller than 1. RELEQLO is also defined as the ratio of the market value of the mortgage to the book value, but is never larger than 1. When currently available mortgage rates are lower than the contract interest rate, market equity exceeds book equity because the present value of the remaining payments evaluated at the current rate exceeds the present value of the remaining payments evaluated at the contract rate. Thus, RELEQHI captures a borrower's incentive to refinance, and RELEQLO captures a new buyer's incentive to assume the seller's mortgage. We created two 0/1 variables, REFIN and REFIN2, that take on a value of 1 if a borrower had not taken advantage of a refinancing opportunity in the past, and 0 otherwise. We defined a refinancing opportunity as having occurred if the interest rate on fixed-rate mortgages in any previous year in which a loan was active was at least 200 basis points below the rate on the mortgage in any year through 1994, or 150 basis points below the rate on the mortgage in any year after 1994. REFIN takes a value of 1 if the borrower had passed up a refinancing opportunity at least once in the past. REFIN2 takes on a value of 1 if the borrower had passed up two or more refinancing opportunities in the past. Several reasons might explain why borrowers passed up apparently profitable refinancing opportunities. For example, if they had been unemployed or their property had fallen in value, they might have had difficulty obtaining refinancing. 
This reasoning suggests that REFIN and REFIN2 would be positively related to the probability of foreclosure; that is, a borrower unable to obtain refinancing previously because of poor financial status might be more likely to default. Similar reasoning suggests a negative relationship between REFIN and REFIN2 and the probability of prepayment; a borrower unable to obtain refinancing previously might also be unlikely to obtain refinancing currently. A negative relationship might also exist if a borrower's passing up one profitable refinancing opportunity reflected a lack of financial sophistication that, in turn, would be associated with passing up additional opportunities. However, a borrower who anticipated moving soon might pass up an apparently profitable refinancing opportunity to avoid the transaction costs associated with refinancing. In this case, there might be a positive relationship, with the probability of prepayment being higher if the borrower fulfilled his or her anticipation and moved, thereby prepaying the loan. Another explanatory variable is the volatility of interest rates, INTVOL, which is defined as the standard deviation of the monthly average of the Federal Home Loan Mortgage Corporation's series of 30-year, fixed-rate mortgages’ effective interest rates. We calculated the standard deviation over the previous 12 months. Financial theory predicts that borrowers are likely to refinance more slowly at times of volatile rates because there is a larger incentive to wait for a still lower interest rate. We also included the slope of the yield curve, YC, in our prepayment estimates, which we calculated as the difference between the 1- and 10-year Treasury rates of interest. We then subtracted 250 basis points from this difference and set differences that were less than 0 to 0. This variable measured the relative attractiveness of ARMs versus fixed-rate mortgages; the steeper the yield curve, the more attractive ARMs would be. When ARMs have low rates, borrowers with fixed-rate mortgages may be induced to refinance into ARMs to lower their monthly payments. For ARMs, we did not use relative equity variables as we did with fixed-rate mortgages. Instead, we defined four variables, CHANGEPOS, CHANGENEG, CAPPEDPOS, and CAPPEDNEG, to capture the relationship between current interest rates and the interest rate paid on each mortgage. CHANGEPOS measures how far the interest rate on the mortgage has increased since origination, with a minimum of 0, while CHANGENEG measures how far the rate has decreased, with a maximum of 0. CAPPEDPOS measures how much further the interest rate on the mortgage would rise if prevailing interest rates in the market did not change, while CAPPEDNEG measures how much further the mortgage's rate would fall if prevailing interest rates did not change. For example, if an ARM was originated at 7 percent and interest rates increased by 250 basis points 1 year later, CHANGEPOS would equal 100 because FHA's ARMs can increase by no more than 100 basis points in a year. CAPPEDPOS would equal 150 basis points, since the mortgage rate would eventually increase by another 150 basis points if market interest rates did not change, and CHANGENEG and CAPPEDNEG would equal 0. Because interest rates have generally trended downward since FHA introduced ARMs, there is very little experience with ARMs in an increasing interest rate environment. We created nine 0/1 variables to reflect the geographic distribution of FHA loans, and included them in both regressions. 
Location differences may capture the effects of differences in borrowers' incomes, underwriting standards by lenders, economic conditions not captured by the unemployment rate, or other factors that may affect foreclosure and prepayment rates. We assigned each loan to one of the nine Bureau of the Census (Census) divisions on the basis of the state in which the borrower resided. The Pacific division was the omitted category; that is, the regression coefficients show how each of the regions was different from the Pacific division. We also created a variable, JUDICIAL, to indicate states that allowed judicial foreclosure procedures in place of nonjudicial foreclosures. We anticipated that the probability of foreclosure would be lower where judicial foreclosure procedures were allowed because of the greater time and expense required for the lender to foreclose on a loan. To gain insight into the differential effect of relatively larger loans on mortgage foreclosures and prepayments, we assigned each loan to 1 of 10 loan-size categorical variables (LOAN1 to LOAN10). The omitted category in our regressions was loans between $80,000 and $90,000, so results on loan size are relative to that category. All dollar amounts are inflation adjusted and represent 1999 dollars. The number of units covered by a single mortgage was a key determinant in deciding which loans were more likely to be investor loans. Loans were classified as investor loans if the LTV ratio was between specific values, depending on the year of the loan or whether there were two or more units covered by the loan. Once a loan was identified as an investor loan, we separated the refinanced loans from the purchase-money mortgages and performed foreclosure and payoff analyses on each. For each of the investor equations, we used two dummy variables defined according to the number of units in the dwelling. LIVUNT2 has the value of 1 when a property has two dwelling units and a value of 0 otherwise. LIVUNT3 has a value of 1 when a property has three or more dwelling units and a value of 0 otherwise. The omitted category in our regressions was investor loans on properties with one unit. Our database covers only loans with no more than four units. To capture the time pattern of foreclosures and prepayments (given the effects of equity and the other explanatory variables), we defined seven variables on the basis of the number of years that had passed since the year of the loan's origination. We refer to these variables as YEAR1 to YEAR7 and set them equal to 1 during the corresponding policy year and 0 otherwise. Finally, for those loan-type categories for which we did not estimate separate models for refinancing loans and nonrefinancing loans, we created a variable called REFINANCE DUMMY to indicate whether a loan was a refinancing loan. Table 2 summarizes the variables that we used to predict foreclosures and prepayments. Table 3 presents mean values for our predictor variables for each mortgage type for which we ran a separate regression. As previously described, we used logistic regressions to model loan foreclosures and prepayments as a function of a variety of predictor variables. We estimated separate regressions for fixed-rate purchase money mortgages (and refinanced loans) with terms over and under 25 years, ARMs, and investor loans. We used data on loan activity throughout the life of the loans for loans originated from fiscal years 1975 through 1995. 
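Before turning to the results, the sketch below consolidates several of the variable definitions given above: the current-equity measure, the relative-equity pair RELEQHI and RELEQLO, and the ARM rate-change variables. It reproduces the worked examples from the text (the 20 percent equity case and the ARM facing a 250-basis-point rate increase); the function names and the assumption of one elapsed year are illustrative, not drawn from the report’s actual code.

```python
def equity_share(pv_balance_at_current_rate, house_price):
    # Equity = 1 minus the ratio of the present value of the remaining
    # loan balance, evaluated at the current mortgage rate, to the
    # current estimated house price.
    return 1 - pv_balance_at_current_rate / house_price

# Worked example from the text: $100,000 house, $80,000 mortgage value.
assert abs(equity_share(80_000, 100_000) - 0.20) < 1e-9

def releq(market_value, book_value):
    # RELEQHI: market/book ratio, never smaller than 1 (a borrower's
    # refinance incentive). RELEQLO: the same ratio, never larger than
    # 1 (a new buyer's incentive to assume the seller's mortgage).
    ratio = market_value / book_value
    return max(ratio, 1.0), min(ratio, 1.0)

def arm_rate_variables(market_change_bps, annual_cap_bps=100):
    # Movement in an ARM's rate since origination, assuming one year
    # has elapsed and a 100-basis-point annual cap, as in the text.
    realized = max(min(market_change_bps, annual_cap_bps), -annual_cap_bps)
    remaining = market_change_bps - realized
    return {
        "CHANGEPOS": max(realized, 0), "CHANGENEG": min(realized, 0),
        "CAPPEDPOS": max(remaining, 0), "CAPPEDNEG": min(remaining, 0),
    }

# Worked example from the text: market rates rise 250 basis points one
# year after origination; CHANGEPOS is 100 and CAPPEDPOS is 150.
print(arm_rate_variables(250))
```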
Each observation in the regressions was weighted by its outstanding loan balance. The logistic regressions estimated the probability of a loan being foreclosed or prepaid in each year. The standard errors of the regression coefficients are biased downward, because the errors in the regressions are not independent. The observations are on loan years, and the error terms are correlated because the same underlying loan can appear several times. However, we did not view this downward bias as a problem because our purpose was to forecast the dependent variables, not to test hypotheses concerning the effects of independent variables. In general, our results are consistent with the economic reasoning that underlies our models. Most important, the probability of foreclosure declines as equity increases, and the probability of prepayment increases as the current mortgage interest rate falls below the contract mortgage interest rate. As shown in tables 4 and 5, both of these effects occur in each regression model and are very strong. These tables present the estimated coefficients for all of the predictor variables for the foreclosure and prepayment equations. Table 4 shows our foreclosure regression results. As expected, the unemployment rate is positively related to the probability of foreclosure and negatively related to the probability of prepayment. Our results also indicate that generally the probability of foreclosure is higher when LTV and contract interest rate are higher. The overall quality of fit was satisfactory: Chi-square statistics were significant on all regressions at the 0.01-percent level. We evaluated the estimated equations at the mean values of the explanatory variables, computing each conditional probability as exp(ΣXiBi)/(1 + exp(ΣXiBi)), where Xi refers to the mean value of the ith explanatory variable and Bi represents the estimated coefficient for the ith explanatory variable. For example, raising the unemployment rate by 17 percent, from about 8.9 percent to about 10.4 percent, would raise the conditional foreclosure probability by 17 percent (from about 0.6 percent to about 0.7 percent). Values of homeowners’ equity of 10 percent, 20 percent, 30 percent, and 40 percent result in conditional foreclosure probabilities of 0.7 percent, 0.5 percent, 0.3 percent, and 0.2 percent, respectively, illustrating the importance of increased equity in reducing the probability of foreclosure. Table 5 shows our prepayment regression results. The overall conditional prepayment probability for long-term, fixed-rate mortgages is estimated to be about 5.0 percent. This means that, in any particular year, about 5 percent of the loan dollars outstanding will prepay, on average. Prepayment probability is quite sensitive to the relationship between the contract interest rate and the currently available mortgage rate. We modeled this relationship using RELEQHI and RELEQLO. Holding other variables at their mean values, if the spread between mortgage rates available in each year and the contract interest rate widened by 1 percentage point, the conditional prepayment probability would increase by about 78.5 percent, to about 8.9 percent. To test the validity of our models, we examined how well they predicted actual patterns of FHA's foreclosure and prepayment rates through fiscal year 1995. Using a sample of 10 percent of FHA's loans made from fiscal years 1975 through 1995, we found that our predicted rates closely resembled actual rates. 
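The conditional probabilities just discussed come from evaluating the logistic function at particular values of the explanatory variables. The sketch below shows the calculation with a made-up intercept and a unit coefficient on logged unemployment (as with LAGUNEMP); these are not the report’s estimated coefficients, only values chosen to reproduce the unemployment illustration above.

```python
import math

def conditional_probability(coefficients, values):
    # p = exp(x'B) / (1 + exp(x'B)), where x'B sums each explanatory
    # variable times its estimated coefficient.
    xb = sum(b * x for b, x in zip(coefficients, values))
    return math.exp(xb) / (1 + math.exp(xb))

# Illustrative coefficients: an intercept and the log of last year's
# state unemployment rate. Chosen only to match the text's example.
coefs = [-7.3, 1.0]
for unemployment in (8.9, 10.4):
    p = conditional_probability(coefs, [1.0, math.log(unemployment)])
    print(f"unemployment {unemployment}%: foreclosure probability {p:.2%}")
```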
To predict the probabilities of foreclosure and prepayment in the historical period, we combined the models’ coefficients with information on a loan's characteristics and information on economic conditions described by our predictor variables in each year from a loan's origination through fiscal year 1995. If our models predicted foreclosure or prepayment in any year, we determined the loan's balance during that year to indicate the dollar amount associated with the foreclosure or prepayment. We estimated cumulative foreclosure and prepayment rates by summing the predicted claim and prepayment dollar amounts for all loans originated in each of the fiscal years 1975 through 1995. We compared these predictions with the actual cumulative (through fiscal year 1995) foreclosure and prepayment rates for the loans in our sample. Figure 12 compares actual and predicted cumulative foreclosure rates, and figure 13 compares actual and predicted cumulative prepayment rates for long-term, fixed-rate, nonrefinanced mortgages. Foreclosure rates in the following tables are expressed as a percentage of loan amounts. Specifically, for tables 6 through 15 we compute all rates using the original loan amount of the foreclosed loans compared to the original loan amount of like loans insured by FHA for the corresponding year. For table 16 we compute foreclosure rates using the unpaid balance of foreclosed loans as a percentage of the total value of mortgages originated.

Federal Housing Administration (FHA) loans made in recent years have experienced somewhat higher foreclosure rates than loans made in earlier years. However, recent loans are performing much better than loans made in the 1980s. Although economic factors such as house price appreciation are key determinants of mortgage foreclosure, changes in underwriting requirements, as well as changes in the conventional mortgage market, may partly explain the higher foreclosure rates experienced in the 1990s. 
Factors not fully captured in the model GAO used may be affecting the performance of recent FHA loans and causing the overall risks of FHA's portfolio to be somewhat greater than previously estimated. Thus, the Mutual Mortgage Insurance Fund may be somewhat less able to withstand worse-than-expected loan performance resulting from adverse economic conditions.
On January 29, 2001, you wrote us that you had become increasingly concerned about media reports of damage to the White House and the EEOB that was discovered by the incoming Bush administration and asked that we investigate whether damage may have been deliberately caused by former Clinton administration staff. We subsequently asked EOP and the General Services Administration (GSA) whether they had any information that may be responsive to your request. On April 18, 2001, the director of the Office of Administration (OA), an EOP unit, wrote us a letter indicating that the White House had no record of damage that “may have been deliberately caused by employees of the prior administration” and that “…repair records do not contain information that would allow someone to determine the cause of damage that is being repaired.” In late May and early June 2001, these allegations resurfaced in the news media and on June 4, you asked us to investigate the matter further. On June 5, 2001, the counsel to the president provided us with a list of damage that was discovered in the White House complex during the first days of the Bush administration. In his transmittal letter, the counsel to the president said that the list “…may be responsive to your earlier request for written records documenting damage deliberately caused by employees of the prior administration….” Further, the counsel said that the list was not the result of a comprehensive or systematic investigation into the issue and should not be considered a complete record of the damage that was found. The list was prepared by OA, which provides common administrative support and services to units within the White House complex; this support may include the procurement and maintenance of computers, telephones, furniture, and other personal property. OA prepared the list on the basis of the recollections of five EOP officials with responsibilities in the areas of administration, management, telephones, facilities, and supplies. It listed missing building fixtures, such as doorknobs and a presidential seal; computer keyboards with missing “W” keys; damaged and overturned furniture; telephone lines pulled from the wall; telephones with missing telephone number labels; fax machines moved to the wrong areas and a secure telephone left open with the key in it; offices left in a state of “general trashing,” including the contents of desk drawers dumped on the floor, a glass desk top smashed and on the floor, and refrigerators unplugged with spoiled food; writing on the walls; and voice mail greetings that had obscene messages. The list also indicated that six to eight 14-foot trucks were needed to recover usable supplies that had been thrown away. The EOP consists of a number of units, including the White House Office, the Office of the Vice President, the National Security Council (NSC), and OA. The White House Office is composed of staff who directly support and advance the president’s goals and are commonly referred to as “White House staff.” Offices of the White House Office include, but are not limited to, advance, cabinet affairs, communications, counsel, the first lady, legislative affairs, management and administration, political affairs, presidential personnel, press secretary, public liaison, and scheduling. Although White House Office staff generally leave their positions at the end of an administration, many EOP staff at agencies such as the NSC and OA hold their positions during consecutive administrations. 
In this report, we refer to staff who worked in the White House complex during the current administration as “EOP staff” and to staff who worked in the previous administration and no longer worked in the White House complex after January 20, 2001, as “former Clinton administration staff.” The White House complex consists of several buildings, including the White House, the adjacent EEOB, and the New Executive Office Building (NEOB). This report focuses on observations that were made in the West Wing of the White House and the EEOB during the transition, and not the White House residence or the NEOB. Excluding military staff, most White House Office staff work in the East and West Wings of the White House or the EEOB. GSA maintains the White House office space, including cleaning the offices and repairing the physical structure. OA asks GSA to repair furniture in the White House complex. Some EOP agencies, such as the Office of the Vice President, also handle some of their own administrative functions. The Secret Service, a unit of the Department of the Treasury, is responsible for the security of the White House complex and its occupants. To obtain information regarding observations of damage, vandalism, and pranks, we interviewed the five EOP officials who contributed to the June 2001 list (the OA director, the OA associate director for facilities management, the OA associate director for general services, the management office director, and the telephone service director); the OA associate director for information systems and technology; an on-site manager for a contractor providing telecommunications services in the White House complex; the Secret Service deputy special agent in charge, presidential protection division, White House security branch; the director of GSA’s White House service center; the chief usher for the executive residence; and four GSA cleaning crew leaders who worked in the White House complex during the transition. We also sent letters to 518 EOP staff who worked in the West Wing and EEOB during the first 3 weeks of the Bush administration, asking those who observed any damage, vandalism, or pranks during the weeks surrounding the 2001 transition to arrange a meeting with us through the Office of White House Counsel. We believed that staff who were in the complex during the first 3 weeks of the administration were the most likely to have observed damage, vandalism, or pranks. The Office of White House Counsel arranged for interviews with a total of 78 EOP staff, and an associate counsel to the president was present during our interviews with EOP staff. Of the 78 staff, 23 worked for the EOP before January 20, 2001, and 55 began working for the EOP on or after January 20. The interviews with EOP staff were conducted between June 2001 and May 2002. Because these interviews were conducted between 5 and 16 months after the transition, we recognize that recollections could have been imprecise. It was not possible to determine whether, in all cases, the reported incidents had occurred, when they occurred, why they occurred, and who may have been responsible for them. More detailed information about our methodology in reporting the observations is contained in appendix I. To determine whether any documentation existed that may not have been previously located, we asked the EOP, GSA, and the Secret Service to provide any documentation they had regarding damage or theft reports, requests for repairs, and invoices for items that had to be purchased. 
In a June 6, 2001, letter to an associate counsel to the president, we said that “we will need access to any records and documents maintained by the White House, GSA, the Secret Service, or other organizations at the White House that relate to the alleged damage as well as to federal employees and contractors working at the White House who might have information bearing on the allegations.” We also interviewed a total of 29 GSA staff who prepared the office space for the new administration. In addition, we interviewed the following about their observations: two National Archives and Records Administration (NARA) staff who worked in the White House complex to assemble presidential materials during the last days of the Clinton administration; a contract employee who helped discard keyboards from the EOP after the transition; and an official from the White House Communications Agency (WHCA), which handles communications equipment for the White House. After interviewing EOP and GSA staff about their observations, we interviewed a total of 72 former Clinton administration staff to obtain their comments on the allegations regarding the 2001 transition and to obtain their observations about the 1993 transition. We interviewed 35 former Clinton administration staff who were identified by the senior advisor for presidential transition during the Clinton administration as having worked in the White House complex during the 1993 or 2001 transitions. We also contacted an additional 37 former Clinton administration staff because they were former directors, managers, or representatives from the primary offices where observations were made. We did not, however, obtain comments from former Clinton administration staff regarding every observation. Of the 72 former Clinton administration staff we interviewed, 67 worked in the White House complex during the 2001 transition and 19 worked there during the 1993 transition. Five of the 72 former Clinton administration staff we interviewed left before the end of the administration, but had worked in the White House complex during the 1993 transition. We obtained repair or replacement costs for some of the observed incidents. However, as explained in more detail later in this report, we did not request cost information associated with all of the observations because we did not believe certain costs would be material or readily available. We also believed that the effort that would have been needed to obtain and verify cost data for all observed incidents would not have been commensurate with the benefit of having reported the information. Further, although certain repair and replacement costs were provided, it was unclear what portion of these costs was incurred or will be incurred due to vandalism. To determine how the 2001 presidential transition compared with others in terms of damage, we asked 14 EOP and 2 GSA staff who worked in the White House complex during previous transitions about their recollections of damage, vandalism, or pranks during previous transitions. In addition, we reviewed news media reports to identify any reported damage, vandalism, or pranks during previous transitions. We searched for news reports concerning the 1981, 1989, and 1993 transitions. We assessed what steps could be taken to help prevent and document any damage during future presidential transitions by discussing the issue with GSA and EOP officials and by obtaining the check-out procedures for departing Clinton administration staff. 
We also discussed check-out procedures with personnel responsible for the office space and equipment at the U.S. Capitol, including staff from the Office of the Chief Administrative Officer, House of Representatives; Office of Customer Relations, Office of the Senate Sergeant-at-Arms; and Office of the Building Superintendent, Office of the Architect of the Capitol. We contacted them because the change of staff and offices on Capitol Hill after elections appeared somewhat comparable to the turnover of EOP staff at the end of an administration. We did our work from June 2001 to May 2002 in Washington, D.C., in accordance with generally accepted government auditing standards. Damage, theft, vandalism, and pranks did occur in the White House complex during the 2001 presidential transition. Multiple people said that, at the beginning of the Bush administration, they observed (1) many offices that were messy, disheveled, or contained excessive trash or personal items; (2) numerous prank signs, printed materials, stickers, and written messages that were left behind, some of which contained derogatory and offensive statements about the president; (3) government property that was damaged, including computer keyboards with missing or damaged “W” keys and broken furniture; and (4) items that were missing, such as office signs, a presidential seal, cellular telephones, doorknobs, and telephone number labels. In addition, documentation provided indicated that some broken, missing, or possibly stolen items were repaired or replaced at the beginning of the Bush administration. Several EOP staff said they believed that what they observed during the transition, such as broken furniture and excessive trash left behind, was done intentionally. Some former Clinton administration staff acknowledged that they had observed a few keyboards with missing “W” keys and some prank signs at the end of the administration. However, the former Clinton administration staff we interviewed also said that (1) the amount of trash that was observed during the transition was what could be expected when staff move out of their offices after 8 years; (2) they did not take the items that were discovered missing; (3) some furniture was broken, but not intentionally, before the transition and little money was spent on repairs and upkeep during the administration; and (4) many of the reported observations were not of vandalism. Further, two former Clinton administration representatives told us that, in their opinion, most of the observations were not true. Incidents such as the removal of keys from computer keyboards; the theft of various items; the leaving of certain voice mail messages, signs, and written messages; and the placing of glue on desk drawers, clearly were done intentionally. Any intentional damage at the White House complex, which is a national treasure, is both inappropriate and a serious matter. The theft of or willful damage to government property would constitute a criminal act in violation of federal law. Although it is clear that some of the reported incidents were intentional, such as the removal and damaging of keys on computer keyboards, it was unclear whether, in all cases, the reported incidents occurred, when they occurred, how many occurred, and who was responsible for them. In addition, regarding the items reported missing, it was not known whether all of them were thefts, and if they were, who was responsible for them. Some documentation corroborating a number of the observations existed. 
EOP facilities, computer, and telephone officials said that much repair and replacement work was done during the transition without documentation being prepared because of the need to complete the work quickly. The OA associate director for facilities management, for example, said that no documentation was prepared regarding three to four missing office signs, a doorknob, and two or three medallions (small metal presidential seals affixed to office signs) that were replaced during that time. Further, documentation was provided indicating that much telephone service work was done during the transition, but this information did not directly corroborate allegations of vandalism and pranks involving the telephones.

Seventy-eight EOP staff who worked in the White House complex during the 2001 transition provided observations about the condition of the complex shortly before or at the beginning of the administration. In addition, 10 of the 29 GSA staff we interviewed told us about observations that related to the items contained in the June 2001 list. The observations generally reflected the types of incidents included in the June 2001 list and also included additional items that were not on it. In certain categories, the observations of EOP staff differed from the June 2001 list in terms of the total numbers of incidents or the alleged extent of the damage. More observations of damage, vandalism, and pranks were made in the offices of advance and scheduling, the counsel's offices, and the offices of the first lady on the first floor of the EEOB, and in the offices of the vice president on the second floor of the EEOB, than in other offices. Summarized below are observations made in specific locations in the main categories, related comments from former Clinton administration staff and GSA staff, and any documentation relating to the observations. Appendix I contains additional information about the observations and additional comments from former Clinton administration staff.

Twenty-nine EOP staff said they observed about two dozen prank signs, printed materials, stickers, or written messages that were affixed to walls or desks; placed in copiers, desks, and cabinets; or placed on the floor. They said some of these were derogatory and offensive toward the president, and sometimes there were multiple copies in certain locations. Six EOP staff also said that they had observed writing (words) on the walls in a total of two rooms. Thirteen former Clinton administration staff said that they saw a total of 10 to 27 prank signs in the EEOB during the transition, but one former employee also said that the prank signs that she saw were harmless jokes. In June and November 2001, EOP staff provided copies of two prank signs that they said were found during the transition, which were derogatory jokes about the president and vice president. In August and September 2001, we were also shown a roll of political stickers that were left behind and two stickers affixed to a file cabinet and desk containing derogatory statements about the president.

Twenty-six EOP staff said that they observed a total of 30 to 64 computer keyboards with missing or damaged "W" keys. Two former Clinton administration staff said that they saw a total of 3 or 4 keyboards with missing "W" keys. Purchase records indicated that the EOP bought 62 computer keyboards on January 23 and 24, 2001.
The January 23 purchase request for 31 keyboards indicated that the keyboards were "needed to support the transition," and the January 24 purchase request for another 31 keyboards indicated that it was a "second request for the letter 'W' problem." The purchase requests were approved by an OA financial manager who, in April 2001, sent an e-mail to an OA branch chief indicating that the 62 keyboards purchased in January 2001 were approximately the number that were defective because "W" keys were missing or inoperable during the transition. (The actual number of keyboards that were damaged during the transition is uncertain because of different statements provided by EOP staff regarding the number of damaged keyboards that had to be replaced.) A March 27, 2001, OA excess property report indicated that 12 boxes of keyboards, speakers, cords, and soundcards were discarded, but did not specify the number of keyboards that were included. (More information about the excess property report is contained in appendix I.)

Twenty-two EOP staff and one GSA employee told us that they observed offices that were messy, disheveled, or dirty or that contained excessive trash or personal items left behind. Some of those staff also said they believed that offices were intentionally "trashed." Former Clinton administration staff said the amount of trash that was observed during the transition was what could be expected when staff moved out of their offices after 8 years. The EOP provided seven photographs that, according to an associate counsel to the president, were taken of two or three offices in the EEOB by an EOP employee on January 21, 2001, and that showed piles of binders and office supplies, empty beverage containers, and other items. However, a Clinton administration transition official said that the pictures showed trash and not vandalism. A January 30, 2001, GSA facility request form documented a request to clean carpet, furniture, and drapes and to patch and paint walls and moldings in an office that an EOP employee said was "trashed out," including the carpet, furniture, and walls, and had three to four "sizable" holes in a wall. The facility request was made by the EOP employee who told us about this observation. Another January 30, 2001, GSA facility request form documented a request to clean carpet, furniture, and drapes in a different office that an EOP employee said was filthy and contained worn and dirty furniture. January 25, 2001, and February 17, 2001, GSA facility request forms documented requests to clean carpet, furniture, and drapes in a suite of offices that an EOP employee told us was "extremely trashed" and smelled bad. The facility requests were made by the EOP employee who told us about this observation.

Ten EOP staff said that they observed a total of 16 to 21 pieces of broken furniture. Former Clinton administration staff said that some furniture was broken before the transition and could have been the result of normal wear and tear, and little money was spent on repairs and upkeep during the administration. January 25 and 29, 2001, GSA facility request forms documented requests to gain access to and for a key to a locked file cabinet in a room where an EOP employee said that he had found a key that was bent and almost entirely broken off in a cabinet that, once opened by a locksmith, contained Gore-Lieberman stickers. The requests were made by the EOP employee who told us about this observation.
A January 30, 2001, GSA facility request form documented a request to fix a broken desk lock in an office where an EOP employee told us that a lock on her desk appeared to have been smashed. The facility request was made by the EOP employee who told us about this observation. A February 12, 2001, GSA facility request form documented a request to repair a leg on a sofa in an office on a floor of the EEOB where an EOP employee observed a sofa with broken legs. A February 21, 2001, GSA facility request form documented a request to repair arms on two chairs in an office where two EOP staff told us that they had observed broken chairs. The facility request was made for the EOP employee who told us about this observation. However, the manager of the office during the Clinton administration where EOP staff said they observed broken chairs said that arms on two chairs in that suite of offices had become detached a year or two before the transition and that carpenters had glued them back, but that they did not hold. Two GSA facility request forms in 1999 documented requests made by the former office manager for previous repairs of chairs in that office suite. Five EOP staff told us they observed a total of 11 to 13 pieces of furniture that were on their sides or overturned. Six EOP staff said they observed a total of four to five desks with a sticky substance or glue on the top or on drawers.

Six EOP staff said that they observed a total of 5 to 11 missing office signs, which included medallions (presidential seals about 2 inches in diameter), and one of those six EOP staff also said he observed that six medallions were missing from office signs; four EOP staff said that they observed a total of 10 to 11 missing doorknobs, which may have been historic originals; an EOP official, a GSA official, and a Secret Service official said that a presidential seal 12 inches in diameter was stolen; two EOP staff said they observed a total of 9 to 11 missing television remote controls; and two EOP staff said that two cameras were missing. In addition, two EOP officials said that about 20 cellular telephones could not be located in the office suite where they belonged. Former Clinton administration staff we interviewed who had occupied offices where items were observed missing said that they did not take them. An April 19, 2001, GSA facility request form documented a request for "replacement of frames & medallions" for four rooms, including an office where three EOP staff observed a missing office sign and medallion. The three other rooms that, according to the facility request form, needed office signs were located on one of two floors of the EEOB where an EOP employee observed four missing office signs. A February 7, 2001, GSA facility request form documented a request to "put doorknob on inter-office…door" in an office where an EOP employee told us that he had observed two pairs of missing doorknobs. The facility request was made for the EOP employee who told us about this observation. However, a GSA planner/estimator said that the work done in response to that request was not to replace a missing doorknob, but to perform maintenance on a doorknob with a worn-out part. A Secret Service report documented the theft of a presidential seal that was 12 inches in diameter from the EEOB on January 19, 2001. Purchase records indicated that the EOP bought a total of 15 television remote controls on March 6 and 15; June 5; and July 10, 2001.
The EOP indicated that these purchases were made to replace remote controls that were missing from offices during the transition. Purchase records indicated that the EOP bought two cameras on March 16, 2001, and April 4, 2001. The EOP indicated that these purchases were made to replace cameras that two EOP staff said were discovered missing. However, the director of the office during the Clinton administration where the cameras belonged said that the cameras were still in the office when the staff left on their last day of employment with the EOP. Purchase records indicated that the EOP bought 26 cellular telephones on January 26, 2001. The EOP indicated that these purchases were made to replace cellular telephones that could not be located. However, former Clinton administration staff who worked in the office where the cellular telephones belonged said that they left them there at the end of the administration. In addition, a former official from that office during the Clinton administration provided copies of check-out forms documenting that the staff had returned their cellular telephones at the end of the administration.

Five EOP staff said that they observed a total of 98 to 107 telephones that had no labels identifying the telephone numbers, and seven EOP staff said they saw telephones unplugged or piled up. Former Clinton administration staff said that some telephones did not have labels identifying the numbers during the administration, mainly because certain telephones were used for outgoing calls only. The EOP provided documentation summarizing telephone service orders closed from January 20, 2001, through February 20, 2001, containing 29 service orders that cited the need for or placing of labels on telephones; 6 of the 29 service orders were for work in offices where telephone labels were observed missing. The EOP also provided two blanket work orders and four individual work orders that cited relabeling or placing labels on telephones for which the summary document did not mention labels. However, all of the 29 service orders on the summary document and the blanket and individual work orders the EOP provided were part of other requests for service, and the extent to which the work was done solely to replace missing labels was not clear. A January 29, 2001, telecommunications service request documented a request for services including "replace labels on all phones that removed." A February 7, 2001, telecommunications service request documented a request to remove a telephone from an office where piles of telephones were observed. Thirteen EOP staff said they heard a total of 22 to 28 inappropriate or prank voice mail greetings or messages, and two EOP staff said they heard a total of 6 to 7 obscene or vulgar voice mail messages that were left on telephones in vacated offices. One former Clinton administration employee said that he left what he considered to be a humorous voice mail greeting on his telephone on his last day of employment. Two EOP staff said that they saw a total of 5 to 6 telephone lines "ripped" (not simply disconnected) or pulled from walls, and another EOP employee said that at least 25 cords were pulled from walls in two rooms. Former Clinton administration staff we interviewed who occupied those offices said they did not pull the cords from the walls.
A January 24, 2001, GSA facility request form documented a request to "organize all loose wires and make them not so visible" in an office suite where an EOP employee said that at least 25 cords were pulled from the walls. The facility request was made by the EOP employee who told us about this observation. The former occupant of the main room in that office suite said that he did not observe any computer or telephone cords that were cut or torn out of walls, and that his office had only 5 telephone and computer cords.

Observations of damage, vandalism, or pranks were reported by EOP staff in about 100 of about 1,074 rooms in the EEOB and in 8 of about 137 rooms in the East and West Wings of the White House. According to the OA associate director for facilities management, approximately 395 offices were vacated during the transition: 304 in the EEOB, 54 in the West Wing, and 37 in the East Wing. In the overwhelming majority of cases, one person said that he or she observed a specific incident in a particular location. However, more than one person observed most types of incidents. In addition, we were generally unable to determine when the observed incidents occurred and who was responsible for them because no one said he or she saw people carrying out what was observed or said that he or she was responsible for what was observed, with three exceptions: (1) an EOP employee who said she saw a volunteer remove an office sign from a wall, (2) a former Clinton administration employee who said he wrote a "goodwill" message inside the drawer of his former desk, and (3) another former Clinton administration employee who said that he left what he believed to be a humorous voice mail greeting at the end of the administration. Further, we were told that many contractor staff, such as movers and cleaners, were working in the White House complex during the weekend of January 20 and 21, 2001, but the White House did not provide the data we had requested regarding visitors to the EEOB during that time.

From our interviews of EOP staff, we totaled the number of incidents that were observed in the categories indicated in the June 2001 list of damage. In certain categories, the observations of EOP staff differed from the list in terms of the total numbers of incidents or alleged extent of the damage. For example, regarding the statement contained in the June 2001 list that 100 keyboards had to be replaced because the "W" keys were removed, EOP staff provided different estimates of the number of keyboards that had to be replaced because of missing or damaged keys, ranging from about 33 keyboards to 150 keyboards. As a result, we could not determine how many keyboards were actually replaced because of missing or damaged "W" keys. Regarding the statement contained in the list that furniture in six offices was damaged severely enough to require a complete refurbishment or destruction, we were told that 16 to 21 pieces of broken furniture were observed during the transition. This included 5 to 7 chairs with broken legs or backs, but we did not obtain any documentation indicating that they were either completely refurbished or destroyed. The EOP provided photographs of 4 pieces of furniture that, according to an associate counsel to the president, were moved to an EOP remote storage facility that is now quarantined. They included a chair with a missing leg, a chair with a missing back, a sofa without a seat cushion, and a desk with missing drawer fronts.
However, no information was provided identifying the offices from which these pieces of furniture were taken, when the damage occurred, or whether any of the damage was done intentionally. Further, EOP staff told us about fewer instances of writing on walls than were indicated in the list. Regarding the statement in the list that eight trucks were needed to recover new and usable supplies that had been thrown away, the EOP official responsible for office supplies said that about eight truckloads of excessed items were brought to an EOP warehouse where they were sorted into usable and nonusable materials, but he was not aware of any usable supplies being discarded.

Cost data were not readily available regarding all of the observations. Further, although certain repair and replacement costs were provided, it was unclear what portion of these costs was incurred or will be incurred due to vandalism. The EOP and GSA provided documentation indicating that at least $9,324 was spent to repair and replace items that were observed broken or missing in specific locations and for cleaning services in offices where observations were made. The following list itemizes those costs:

- $4,850 to purchase 62 keyboards;
- $2,040 to purchase 26 cellular telephones;
- $1,150 for professional cleaning services;
- $729 to purchase 2 cameras;
- $221 to purchase 15 television remote controls;
- $108 for locksmith services regarding furniture;
- $76 to remove a telephone from an office;
- $75 to repair 2 chairs with broken arms; and
- $75 to repair a sofa leg.

EOP and GSA officials also provided estimates of $3,750 to $4,675 in costs that could have been incurred or may be spent in the future to replace missing items for which no documentation, such as facility request forms or purchase records, was available. Because specific locations were not provided regarding some of the observations of missing items, we were unable to determine whether all of the missing items had been replaced. The costs estimated by EOP or GSA staff for replacing the government property that was observed missing included:

- $2,100 to $2,200 for 9 to 10 doorknobs;
- $675 to $750 for 9 to 10 medallions;
- $625 to $1,375 for 5 to 11 office signs; and
- $350 for a presidential seal that was 12 inches in diameter.

Based on what it said were extremely conservative estimates and straightforward documentation, the White House said that the government incurred costs of at least $6,020 to replace missing telephone labels and reroute forwarded telephones. The documentation provided included two blanket work orders and associated bills, a closed orders log for the period January 20 through February 20, 2001, eight individual work orders for telephone service, and two monthly AT&T invoices. The White House also identified, but did not provide, other individual telephone service work orders that cited the need for or placing labels on telephones. Six of the 29 work orders listed on the closed orders log that cited needing or placing labels and four individual work orders that included labels were for work in offices where telephone labels were observed missing. However, both the orders listed on the closed orders log and the individual work orders, as well as the blanket work orders, cited other services besides labeling, and it was not clear to us from the documentation provided to what extent the relabeling was done solely to replace missing labels or would have been necessary anyway due to changes requested by new office occupants.
None of the documents provided specifically cited correcting forwarded telephones. Thus, while we do not question that costs were incurred to replace labels or reroute forwarded telephones, we do not believe the documentation provided is clear enough to indicate what those costs were. Appendix I contains information regarding additional costs to repair furniture that was not in locations where EOP staff told us they observed pieces of damaged or broken furniture during the transition. We did not request cost information associated with some observations, such as the time associated with removing prank signs, placing overturned furniture upright, or investigating missing items, because we did not believe these costs would be material or readily available or that the information would be beneficial relative to the effort that would have been required to obtain the data. These costs also did not include any EOP or GSA costs associated with our review or responding to other inquiries related to the alleged damage.

According to a limited number of EOP, GSA, and former Clinton administration staff we interviewed who worked in the White House complex during previous transitions, as well as a press account that we reviewed, some of the same types of observations that were made concerning the condition of the White House complex during the 2001 transition were also made during the 1993 transition. These observations included missing office signs and doorknobs, messages written inside desks, prank signs and messages, piles of furniture and equipment, and excessive trash left in offices. We also observed writing in a desk in the EEOB that was dated 1993. In addition, words and initials were reportedly observed carved into desks during the 1993 transition; no such carvings were reported during the 2001 transition. On the other hand, no one said they observed keyboards with missing and damaged keys during previous transitions, as numerous people said they observed in the White House complex during the 2001 transition. Seven EOP staff and one former Clinton administration employee who had worked in the White House complex during previous transitions compared the condition of the space during the 2001 transition with conditions during previous transitions. Six EOP staff said that the condition was worse in 2001 than during previous transitions, while one EOP employee and one former Clinton administration employee said the office space was worse in 1993 than in 2001. Because of the lack of definitive data available to compare the extent of damage, vandalism, and pranks during the 2001 transition with past transitions, we were unable to conclude whether the 2001 transition was worse than previous ones. Appendix II contains observations and a press account regarding the condition of the White House office space during previous transitions.

Former Clinton administration officials told us that departing EOP staff were required to follow a check-out procedure that involved turning in such items as building passes, library materials, and government cellular telephones at the end of the administration. The procedure did not include an inspection of office space or equipment to assess whether any damage had occurred. A January 4, 2001, memorandum from President Clinton's chief of staff encouraged staff to check out by January 12, 2001, but did not indicate in what condition the office space should be left or provide any warning about penalties for vandalism.
When members of Congress and their staff vacate offices on Capitol Hill, their office space and equipment are inspected, and members are held accountable for any damages. Because it is likely that allegations of damage, vandalism, and pranks in the White House complex could be made during future transitions and because of the historic nature of the White House complex and the attention it receives, we are recommending actions to help deter future problems during presidential transitions, including a check-out process for departing EOP staff that includes clear instructions and an office inspection documenting the condition of office space, furniture, and equipment. In addition, EOP, GSA, and former Clinton administration staff identified a number of issues related to office cleaning during our interviews, such as whether (1) a sufficient number of people were available to do the cleaning as quickly as necessary, (2) cleaning had begun soon enough, (3) sufficient coordination existed between the EOP and GSA, and (4) a sufficient number of containers were available for departing staff to deposit their trash. Accordingly, we are recommending that the EOP and GSA work together to explore what steps should be taken to expedite the cleaning of White House office space during presidential transitions. Appendix III discusses steps to help prevent damage to government property during future presidential transitions.

Damage, theft, vandalism, and pranks occurred in the White House complex during the 2001 presidential transition. Incidents such as the removal of keys from computer keyboards; the theft of various items; the leaving of certain voice mail messages, signs, and written messages; and the placing of glue on desk drawers clearly were intentional acts. However, it was unknown whether other observations, such as broken furniture, were the result of intentional acts, when and how they occurred, or who may have been responsible for them. Further, with regard to stolen items, such as the presidential seal, because no one witnessed the thefts and many people were in the White House complex during the transition, it was not known who was responsible for taking them. Moreover, regarding other items reported missing, such as doorknobs, cellular telephones, and television remote controls, it was unknown whether all of them were thefts, and if they were, who was responsible for taking those items and when they were taken. Further complicating our attempt to determine the amount of damage that may have occurred was the lack of documentation directly corroborating some observations and our inability to reconcile certain observations made only a few hours apart in locations where some people saw damage, vandalism, or pranks and where others saw none.

We realize the difficulty of preparing the White House office space for occupancy by the new administration in the short amount of time that is available during presidential transitions. We also recognize that some prank-type activity has occurred in the White House complex during past transitions and could occur in the future. Because of the historic nature and symbolism of the White House and the public attention it receives, as well as the costs associated with investigating allegations of damage, we believe that current and future administrations should have a cost-effective inspection of office space, furniture, and equipment as part of the check-out process for departing employees during transitions and document any damage observed.
We also believe that departing EOP staff should be given clear instructions regarding what condition their office space and equipment should be left in and how to handle office supplies, and they should be informed about the penalties for damage and vandalism. Many EOP staff reported observing what they believed to be an excessive amount of trash in the office space during the transition. Because future presidential transitions may not fall on a weekend, as the 2001 transition did, even less time will be available to clean the space. The EOP and GSA should explore what additional steps could be taken to ensure that the EOP office space is immediately cleaned and prepared for an incoming administration, including communicating with both outgoing and incoming administrations concerning the timetable and procedures for the transition. Steps should be taken to help (1) prevent and document damage that results in repair or replacement costs during presidential transitions; (2) ensure that the space is ready for occupancy; and (3) avoid potential future costs associated with investigating allegations of damage, vandalism, and pranks.

We recommend that the director of the Office of Management and Administration for the White House Office and the GSA administrator work together to (1) revise the employee check-out process to require a cost-effective inspection of office space, furniture, and equipment by the EOP and GSA within their respective areas of responsibility and to document any damage observed and (2) explore what additional steps could be taken to ensure that the EOP office space is immediately cleaned and prepared for an incoming administration, including communicating with both outgoing and incoming administrations concerning the timetable and procedures for the transition. We also recommend that the officials provide clear instructions to staff about what condition the office space and equipment should be left in, how office supplies should be handled, and the penalties for damaging and vandalizing government property.

In March and April 2002, we held exit conferences with White House officials and former Clinton administration representatives during which we provided them an opportunity to review our preliminary findings. The White House provided written comments on the preliminary findings, and former Clinton administration representatives provided oral comments. We considered those comments in preparing our draft report. On May 3, 2002, we provided copies of a formal draft of this report for comment to the counsel to the president and the GSA administrator. On May 31, 2002, the counsel to the president provided written comments on the draft, which are reprinted in appendix IV. Our response to the White House's general statements is provided below, and our response to the White House's specific comments is contained in appendix V. The deputy commissioner of GSA's Public Buildings Service also provided comments on May 13, which are summarized below and reprinted in appendix VI. We had intended to provide representatives from the Clinton administration with a draft of this report for their review and comment. However, we did not do so because one or more representatives prematurely provided information to the press on the basis of their discussions with us during our review, and we believed that another premature release of the contents of the draft report was likely.
Nonetheless, on the basis of the discussions we did have with Clinton administration representatives during the course of our review, we believe that our report fairly reflects the information they provided to us. The White House's general comments on the draft and our response follow.

The White House said that, in our May 3 draft of the report, we had failed to address many of the concerns it had raised in its April 26 set of comments on our preliminary findings. Accordingly, the White House said, it had provided us with a second set of detailed comments on the May 3 draft. The White House also said that it was disappointed that it would not have an opportunity to consider or reply to our response to its comments prior to publication of the final report. It said that this was inconsistent with all previous representations regarding our process. We carefully considered the comments that the White House provided regarding our preliminary findings and made changes in our report where we believed appropriate. On May 13, the White House provided written comments on our May 3 draft report that included the names of people we interviewed during our review. The White House subsequently decided to delete these individuals' names from its comments, and, on May 31, provided us with a second set of comments on our May 3 draft report that did not contain those names. Moreover, we did not provide the White House with an opportunity to reply to our response to its comments because that is not part of our normal comment process; we do not normally provide agencies with our response to their comments prior to publication of the report. The White House is incorrect in indicating that, by not providing it with an opportunity to consider or reply to our response to its comments prior to the publication of the report, we were being inconsistent with all previous representations regarding our process. We explained the process on numerous occasions and provided a copy of our congressional protocols to an associate counsel to the president, and we never indicated that the White House would have an opportunity to consider or reply to our response to its comments before the report was published.

The White House said that we had not reported many facts that readers needed to know to have a complete and accurate understanding of what happened during the 2001 transition. The White House said that it believed the report did not provide sufficient detail to respond to Representative Barr's request or to meet Government Auditing Standards, and noted that we did not specifically identify each reported instance of vandalism, damage, or a prank. Further, the White House said that, in many cases, we reported a former staff member's comments without having discussed the observation itself. The White House noted that reporting when, where, and by whom an observation was made would be helpful in determining the likely perpetrator. The White House also noted that we had not reported the specific content of graffiti, messages, and signs.
According to the White House, this written content would provide (1) indications of who wrote the messages and when; (2) an insight into the mind-set or intention of the person who wrote the message; (3) an opportunity to infer that, if departing staff left a vulgar or derogatory message, those same individuals may be responsible for other incidents that were observed near the location of the message; (4) an opportunity to compare the 2001 transition to prior ones; and (5) an opportunity to decide whether we had fairly and objectively characterized the content of the messages. In transmitting a revised set of comments on May 31, the counsel to the president stated his objection to our decision to redact from the White House's comments, which are reprinted in appendix IV, a word, contained in a prank sign found during the transition, that we considered inappropriate. He also said that, because we did not reveal the content of a particular message that said "jail to the thief," which we described as "arguably" derogatory to the president, readers have no way of knowing whether our characterization of it as "arguably" derogatory is accurate.

We disagree with the White House that we had not reported many facts that readers needed to know to have a complete and accurate understanding of what happened during the 2001 transition. Our report includes the information (1) that we agreed to provide to Representative Barr, (2) needed to support our conclusions and recommendations, and (3) required to comply with Government Auditing Standards. As provided for under our congressional protocols when we receive congressional requests, we work with the requesters to agree on a scope of work and an approach that takes into consideration a number of factors. They include the nature of the issues raised; the likelihood of being able to address them in a fair, objective, and complete manner; a consideration of professional standards, rules of evidence, and the nature and sufficiency of evidence likely to be available on the particular engagement; known or possible constraints related to obtaining the information needed; and the time and resources needed and available to accomplish the work. For this review, after independently taking these factors into consideration, we used a thorough, reasonable approach to provide as complete and objective a picture as possible of the damage that may have occurred during the 2001 presidential transition, given that (1) we could not physically observe evidence of most of the incidents that were reportedly observed, (2) limited definitive documentation was available regarding these events, and (3) views of interested parties would likely differ on many issues and would be difficult or impossible to reconcile. Accordingly, we agreed to respond to Representative Barr's request by reporting on the documentation provided by the White House; summarizing the observations made by occupants and preparers of White House office space during the 2001 transition; and obtaining explanations and other comments of former Clinton administration staff related to any damage, vandalism, or pranks. We neither agreed to nor performed an investigation into who may have been responsible for any damage, vandalism, or pranks identified, nor did we agree to report each individual observation. We reported all observations in a summary fashion (i.e., total number of observations in a particular category) and discussed some observations in detail when warranted.
For example, in the section of appendix I regarding furniture, we not only provided the total number of pieces of broken furniture that people observed, but also described the specific problems they observed. However, regarding other categories of observations, such as missing telephone labels, we did not provide details regarding each observation because such information would not have been meaningful; rather, we reported a range of the total number of telephone labels observed missing. Reporting each instance was not only unneeded, but would have been redundant. Further, we separately mentioned each observation that was made in the White House itself. Although we would agree with the White House that the details about when, where, and by whom observations were made may be relevant in assessing the credibility of statements and determining the likely perpetrators, we do not believe that reporting additional detail would have allowed readers to draw sound, independent conclusions. Although, as the White House correctly states, Government Auditing Standards require audit reports to contain all the information needed to satisfy audit objectives and promote a correct understanding of the matters to be reported, these standards also recognize that considerable judgment must be exercised in determining an appropriate amount and level of detail to include. Excessive detail can detract from a report, conceal the real message, and confuse or discourage readers. Consistent with these professional standards, we believe that we have provided the appropriate amount of detail needed to satisfy our objectives and support our conclusions and recommendations. In our view, reporting more detail could, at a minimum, confuse readers and contribute to unproductive speculation, rather than lead to sound conclusions. As we have reported, we believe that sufficient, competent, and relevant evidence exists to support our conclusion that damage, vandalism, and pranks did occur during the 2001 presidential transition, and we have presented this evidence in our report. However, we believe it is also important to recognize that corroborating evidence was not provided for all observations, and that definitive evidence regarding who was responsible for the incidents observed generally was not provided. In addition, although a number of incidents appeared intentional by their nature, it often was unknown whether other types of incidents were intentional, malicious acts. Accordingly, we do not believe it was appropriate to include all of the details that the White House suggested because we did not want to mislead readers into concluding that corroboration existed and that all of the reported incidents occurred and were intentional, nor did we believe it was appropriate to contribute to speculation about who may have been responsible for any intentional acts for which credible evidence was not provided.

In its comments, the White House cited several cases where we failed to report information regarding what staff said other people had seen or had told them. This is correct; in reporting the observations, we did not include information people relayed to us from third parties. We reported what people told us they personally observed. In addition, in certain cases, the White House cited statements in its comments that it claimed staff had said that were not contained in our interview records.
An associate counsel to the president told us that, in preparing the White House's comments, she discussed the accuracy of statements attributed to EOP staff in the report draft with those individuals. Had we known in advance that an associate counsel to the president was going to recontact the EOP staff we interviewed, we would have asked to participate in those discussions. Since we did not participate in those discussions, we have no information about the context or manner in which they took place. Therefore, we reported only what our interview records indicated EOP staff told us. Although we would typically confirm our understanding of statements made to us during interviews directly with the interviewees whenever a question or doubt arises, this was problematic in this review due to the protocol established by the White House for our work. Under this protocol, we were asked to provide written requests for follow-up interviews or additional documentation to the counsel's office, and all such interviews were arranged by that office. This was a time-consuming process that at times involved significant delays in gaining access to the individuals we sought to interview. Had we been granted direct, prompt access to the people we needed to interview, we would have been in a better position to resolve quickly and efficiently any questions or misunderstandings that may have arisen. Nonetheless, with the exception of one follow-up interview, at least two GAO staff attended interviews in the White House complex, and we believe this approach provided reasonable assurance that we accurately captured what the interviewees told us.

Regarding the White House's statement that, in many cases, in reporting a former staff member's comments in response to a particular observation, we had not discussed the observation itself, each observation was included in summary fashion, and in some cases in detail, before we reported the comments by former Clinton administration staff. In a few cases, in response to the White House's comments, we added additional detail regarding an observation. Moreover, although we reported every observation in summary fashion, we did not obtain comments from former Clinton administration staff regarding all observations, nor did we report every comment provided by former Clinton administration staff. Further, we did not report positive actions that people said former Clinton administration staff had taken to facilitate the transition or welcome new staff because they did not directly relate to the allegations.

Regarding the specific contents of graffiti, messages, and signs, we did not believe that it was appropriate or necessary to report them. Although most of the messages reportedly observed or heard did not contain profane language, some of them did. However, we will not report them and, thus, we decided to redact an obscene word that the White House included in its comments in reference to a message that was found. Further, although we did not report their specific content, we described the general nature of those messages. We believe that the White House is being speculative in suggesting that reporting their specific content would provide indications of who wrote them and when they were written and would provide an insight into the mind-set of the person who wrote them.
Further, although whoever left a vulgar or derogatory message could have been responsible for other incidents that were observed near the location of the message, no substantive evidence was presented linking messages to other incidents that were observed. We also do not believe that reporting the specific contents would have provided a meaningful opportunity to compare the 2001 transition to previous ones because we also did not report the specific content of signs and messages that were found during previous transitions, nor was there sufficient information about the condition of White House office space during previous transitions to make a meaningful comparison. In a draft of this report, we had characterized a sticker that said "jail to the thief" as being "arguably" derogatory to the president because we did not know the intent of the person who left the message. However, in response to the White House's comments, we deleted "arguably." We informed an associate counsel to the president of our intention to make this change before the White House sent us its May 31 letter raising this concern. Although we agree with the White House's view that it is solely responsible for its comments, we are publishing its comments as part of our report, and we are responsible for our report. Further, although we would normally not make any changes to an agency's comments on our draft report, the situation in this case is highly unusual and, in our view, calls for an unusual step on our part. With respect to the White House's objection to our redaction of a word contained in a sign found during the transition, the word in question is clearly obscene and, in our independent and professional judgment, should not be used in a public report that bears GAO's name. As a result, we have deleted this word from the White House's comments, used "*" to reflect the number of letters in the word, and indicated that GAO deleted an obscenity. By doing so, we believe that readers will know that an unacceptable word was used in a message left in the White House complex during the 2001 presidential transition. In addition, because the word was part of its comments, we will refer inquiries about this matter to the White House. Finally, we do not believe that our deletion of one word out of over 70 pages of detailed comments, with full disclosure of the reason why we deleted it, seriously undermines the White House's comments.

The White House objected to our structuring the report around the June 2001 list of damage and comparing the staff members' observations with the contents of the list. In stating its objection, the White House highlighted the cautionary statement that the counsel to the president made in transmitting the list to us. Further, the White House indicated that we did not ask the individuals who prepared the list to explain how the list was prepared, who transcribed it, what its purpose was, or what each line referred to. In addition, the White House indicated that we, at times, misstated the contents of the list.

We structured appendix I, but not the letter portion of this report, around the June 2001 list because the list highlighted congressional and other interest in initiating our review. Further, interviewees were not restricted to observations about items on the list. Rather, during our interviews, we solicited observations regarding anything that could be damage, vandalism, or pranks.
Before the list was prepared, the OA director informed us in writing that no documentation existed regarding the allegations. On page 2 of our draft report and this report, we quoted the counsel to the president's cautionary remarks about the list that were contained in his June 4, 2001, transmittal letter to us. Further, we note that, according to an article in the June 4, 2001, issue of the Washington Post, the White House press secretary provided the list to the newspaper, which suggested that the White House had sufficient confidence in its contents to release it publicly. In addition, the White House's assertion that we did not ask the individuals whose names appeared on the list to explain how it was prepared is incorrect. Our record of a June 6, 2001, entrance conference at the White House indicated that the OA director, who contributed to the list, discussed at that meeting how it was prepared. Further, our initial interviews of EOP staff included four of the five individuals who helped prepare the list, which allowed us to ask them about their observations, and, in one case, our interview records indicated that one of the individuals said that a statement on the list "bothered" him. Regarding the White House's statement that we often misstated the contents of the list, we summarized the contents of the list on page 2 of the report and revised the report as necessary to quote directly from the list throughout the remainder of the report.

The White House said that we materially understated the number of observations, and that our methodology of calculating the ranges was flawed. For example, the White House objected to the method that we used to calculate a range of keyboards observed with missing and damaged "W" keys. The White House said that our flawed methodology infected each of the ranges presented in the report. The White House also said that the problem with our analysis was compounded because, in the instance cited, we had grouped three offices together. As indicated in our report regarding the methodology used to report the number of keyboards observed with missing or damaged "W" keys, we reported a range representing the number of incidents observed because some staff said they saw different numbers of incidents in the same rooms or offices. Our methodology in calculating the range of keyboards with missing or damaged keys, as well as for other categories of observations, was used to include both the lowest and the highest numbers that were reported to us in particular locations and to eliminate possible double counting. The White House mischaracterized how we determined our range in the hypothetical cases it provided. For example, in the hypothetical case involving three people who observed 1, 25, and 100 incidents, respectively, the White House said that, using our methodology, we would calculate the range of total observed incidents as being from 1 to 126, which the White House said would be an absurd conclusion. However, the White House's application of our methodology in this hypothetical case is incorrect and would have resulted in the wrong conclusion; our range of observed incidents in that location would be 1 to 100. The White House similarly mischaracterized the other example it gave on this issue. We disagree with the White House's argument that, when multiple people provided different numbers of observations in the same specific locations, the lowest number observed in a particular location cannot be used as the low end of the range.
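To make the methodology concrete, the following minimal sketch (in Python, using hypothetical figures rather than actual interview data) illustrates how per-location minimums and maximums produce the ranges we reported. Applied to the White House's single-location hypothetical of observers reporting 1, 25, and 100 incidents, this method yields a range of 1 to 100, not 1 to 126.

    # Minimal sketch of the range methodology (hypothetical figures, not
    # actual interview data). For each location, take the lowest and highest
    # counts reported by different observers; summing those per-location
    # bounds across locations yields the overall range while avoiding double
    # counting when several people describe the same rooms.
    observations = {
        "location A": [1, 25, 100],  # three observers, same location
        "location B": [3, 4],        # two observers, same location
    }

    low = sum(min(counts) for counts in observations.values())
    high = sum(max(counts) for counts in observations.values())

    print(f"range of observed incidents: {low} to {high}")
    # prints "range of observed incidents: 4 to 104"; with location A alone,
    # the range would be 1 to 100, consistent with the discussion above.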
We used ranges to account for the different observations made in the same locations and did not make any judgments about which observation was correct because it was not possible in many cases to do so. We believe this approach is the most accurate and objective depiction of views that were shared with us. Further, we did not conclude what the precise numbers of incidents observed in various categories were because they would have been impossible to determine. Regarding the situation that the White House cited when we grouped observations of keyboards with missing and damaged "W" keys in three offices, we did it that way because an EOP employee said that her observation pertained to all three offices.

The White House objected to our use of the term "EOP" staff, rather than identifying the specific EOP unit being discussed. The White House said that it is not accurate to refer to each EOP unit individually or all units collectively as the EOP because not all offices in the complex fall within the EOP umbrella and that we did not investigate all EOP units. Further, the White House said we had inaccurately referred to EOP units as agencies. Except for staff we interviewed who worked for the Secret Service, GSA, and the Executive Residence, all of the people we interviewed at the White House complex worked for or had worked for the EOP. We did not believe that it was necessary to break out, in all categories of observations, staff members' respective EOP units, nor was it an objective of our review. However, when we reported specific observations or comments made by EOP officials, we used their titles, which identified their respective EOP units. To address the White House's comment that the term "EOP" may be over-inclusive, we added a note to the report indicating that we did not interview, for example, any staff who worked for the United States Trade Representative, the Office of National Drug Control Policy, or the Office of Homeland Security. We also noted that most of the EOP staff we interviewed who worked at the White House before January 20, 2001, worked for OA. Concerning the White House's comment that we misidentified units that comprise the EOP and misidentified EOP components as "agencies," we understand that the Executive Residence, although treated as "analogous to an EOP unit" (by the court, e.g., in Sweetland v. Walters, 60 F.3d 852, 854 (D.C. Cir. 1995)), is technically not an EOP component because it was not created as such. Notwithstanding this technicality, we had listed the Executive Residence as an EOP component because it is shown as such in the White House staff manual that was in effect at the time of the transition and in the Budget of the United States Government, Fiscal Year 2003. To recognize the White House's comments about this issue, however, we deleted the Executive Residence from our list of EOP components. On the other hand, we do not agree with the White House's objection to our characterization of EOP components as agencies. We recognize, as the White House contends, that EOP components are not all treated as agencies for purposes of the Freedom of Information Act (FOIA), 5 U.S.C. § 552 (Sweetland v. Walters, supra), although some are. Armstrong v. Executive Office of the President, 90 F.3d 553, 559 (D.C. Cir. 1995). However, a government entity may be an agency for some purposes but not for others. We have, for example, consistently viewed the Executive Residence as an agency in applying 31 U.S.C. § 716.
Finally, the White House said that we made a concerted effort to downplay the damage found in the White House complex because we (1) did not individually report each instance of vandalism, damage, or a prank; (2) underreported the number of observations in nearly every category of damage and ignored additional observations that were made; (3) omitted any mention of several individuals who told us that damage found during the 2001 transition was worse than during prior transitions; (4) ignored documents that showed requests were made to repair telephone damage and clean offices; (5) failed to quantify or estimate certain real costs incurred to remedy or repair the damage; (6) failed to report the content of the graffiti and signs that were found in the complex; and (7) were unwilling to conclude that the vandalism, damage, and pranks were intentional, even when the circumstances plainly indicate that they were.

We did not downplay the damage found in the White House complex, as the White House suggested. Rather, we tried to eliminate possible or actual double-counting of observations, present the information fairly and objectively, and avoid speculation. Regarding the White House's statement (1) that we omitted a reference to each reported instance of vandalism, damage, or a prank, as previously explained, all of the reported observations were reported in a summary fashion (i.e., total number of observations in a particular category) and some were also discussed in detail. We also disagree with the White House's statement (2) that we underreported the number of observations in nearly every category of damage and ignored additional observations that were made. As previously explained and discussed in appendix V in our response to the White House's specific comments, we reported the number of observations in various categories as a means of eliminating possible or actual double-counting. Regarding the White House's statement (3) that we omitted any mention of several individuals who told us that the damage found during this transition was worse than during prior transitions, the letter portion of the report summarized these individuals' observations, and appendix II contained statements by six EOP staff that the condition of the White House complex was worse in 2001 than during previous transitions. Consequently, we did not revise the report. Regarding the White House's statement (4) that we ignored documents that showed requests were made to repair telephone damage and clean offices, the report in fact cited several facility requests for cleaning and telephone service orders, but we could not conclude that they documented intentional damage. Concluding that these documents demonstrated intentional damage would have been inconsistent with the OA director's April 2001 letter, in which he stated that repair records do not indicate the cause of repairs. Further, we did not ignore any of the documentation that the EOP provided, but carefully reviewed all of the documentation that was provided. Finally, the White House did not provide us with copies of all of the documents related to telephone repairs that it cited in its comments. Regarding the White House's statement (5) that we failed to quantify or estimate certain real costs incurred to remedy or repair the damage, it was not our objective to independently estimate or determine all such costs, and we clearly stated in our report that we did not do so.
We did not obtain repair and replacement costs for all reported incidents because we did not believe that they would be readily available or material, nor did we believe that the value of the information would have been commensurate with the level of resources required to obtain and verify such data. Regarding the White House's statement (6) that we failed to report the content of graffiti and signs that were found in the complex, as previously discussed, we did not believe it was necessary or appropriate to include their specific content in this report, but we did describe their general nature. Finally, contrary to the White House's assertion (7) that we were unwilling to conclude that the vandalism, damage, and pranks were intentional, even where the circumstances plainly indicated that they were, we stated in our conclusions that incidents such as the removal of keys from computer keyboards; the theft of various items; the leaving of certain voice mail messages, signs, and written messages; and the placing of glue on desk drawers clearly were done intentionally. However, we also concluded that it was unknown whether other observations, such as broken furniture, were the result of intentional acts and when and how they occurred. In its specific comments, the White House identified instances in which it did not believe that the oral evidence or the amount of detail included in the report was sufficient to meet provisions of the Government Auditing Standards pertaining to the competency of evidence or the objectivity and completeness of reports. Although we address the White House's specific substantive points in appendix V of our report, we believe that it is important to state here that the report does comply with Government Auditing Standards. In citing the particular standard in question, the White House either did not cite the entire standard or all of the factors that must be considered in interpreting the standard, or both. For example, in discussing the competency of the oral evidence provided by an EOP employee, the White House described the employee's overall responsibility for handling telecommunications problems during the first month of the new administration and cited the following excerpt from Government Auditing Standards 6.54(f):

"Testimonial evidence obtained from an individual who…has complete knowledge about the area is more competent than testimonial evidence obtained from an individual who…has only partial knowledge about an area."

However, in addition to excluding a portion of this standard, the White House did not refer to other parts of standard 6.54 or other factors that need to be considered. Other relevant parts of standard 6.54 follow:

6.54 "The following presumptions are useful in judging the competence of evidence. However, these presumptions are not to be considered sufficient in themselves to determine competence."

6.54(e) "Testimonial evidence obtained under conditions where persons may speak freely is more competent than testimonial evidence obtained under compromising conditions (for example, where the persons may be intimidated)."

6.54(f) "Testimonial evidence obtained from an individual who is not biased or has complete knowledge about the area is more competent than testimonial evidence obtained from an individual who is biased or has only partial knowledge about the area."
Thus, in considering the competency of oral evidence, factors besides a person's level of responsibility must be considered, such as the circumstances under which the person provides the oral information; whether the person is reporting what he or she observed versus what someone else said they saw; factors that could influence the person's objectivity; the reasonableness or consistency of the information presented compared with other information or facts; and the extent to which corroborating or contradictory information is provided. We gave appropriate and careful consideration to all of these factors in conducting this review. Similarly, in interpreting other Government Auditing Standards, such as those related to the objectivity or completeness of reports, considerable judgment must be exercised regarding the amount of detail provided to promote an adequate and complete understanding of the matters reported and to present the information in an unbiased manner with appropriate balance and tone. This must be done so that readers can be persuaded by facts, as called for by the standards (7.50, 7.51, and 7.57). In making judgments about the level of detail to provide, it must be recognized that too much detail can detract from a report, as previously discussed. Even more importantly, aside from the level of detail, the competency and sufficiency of the evidence and the completeness of the information must be considered, including differentiating between uncorroborated oral statements and substantiated facts. In judging what details to report and how to report them, it is also important to consider what information is not known about particular situations so as to avoid misleading readers into drawing inappropriate or premature conclusions. Notwithstanding our disagreement with the White House's interpretation of Government Auditing Standards, we agree that efforts should be made to avoid possible misinterpretation of information in audit reports. In that regard, we have clarified our report where we felt it was appropriate. Finally, both in its general and specific comments, the White House expressed concern about our exclusion of certain EOP staff observations from the report, what it views as our lack of consideration of the documentation it provided, and our unwillingness to draw the same conclusions it did based on the information at hand. We believe that it is important to note here that many of the observations in question involved relaying views espoused by others, which we do not believe constitutes acceptable evidence in these cases. Further, although we carefully reviewed and considered all of the evidence that the White House provided, we did not always believe it was sufficient to support the conclusions that the White House suggested or reached. The White House did not provide any comments on our recommendations. GSA's deputy commissioner of the Public Buildings Service said that GSA had carefully reviewed the draft report and agreed with the two recommendations regarding the logistics of future transitions. The deputy commissioner said that GSA had made every effort during transitions to meet the very considerable demands that are placed on the agency when several hundred staff move out of the White House complex.
For this reason, the deputy commissioner said GSA believes that its ability to carry out its responsibilities during future transitions will be strengthened by working with the Office of Management and Administration of the White House Office to develop procedures for both office space inspection and cleaning and office space preparations. He added that improved communication will be an integral part of these procedures. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the chairman and ranking minority member, House Committee on Appropriations; the chairman and ranking minority member, House Appropriations Subcommittee on Treasury, Postal Service and General Government; the chairmen and ranking minority members, House Committee on Government Reform and Senate Committee on Governmental Affairs; the chairman and ranking minority member, Senate Committee on Appropriations; the chairman and ranking minority member, Senate Appropriations Subcommittee on Treasury and Postal Service; the deputy assistant to the president for management and administration; the administrator of the General Services Administration; former President Clinton; and the former deputy assistant to the president for management and administration during the Clinton administration. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Major contributors to this report were Bob Homan, John Baldwin, and Don Allison. If you have any questions, please contact me on (202) 512-8387 or at ungarb@gao.gov. This appendix contains the observations of Executive Office of the President (EOP) and General Services Administration (GSA) staff and former Clinton administration staff regarding the condition of the White House office space during the 2001 presidential transition. Staff we interviewed generally told us that they saw evidence of damage, vandalism, or pranks shortly before or at the beginning of the administration. The observations are discussed in the categories contained in the June 2001 list of damage. Some EOP staff said they believed that what they observed during the transition, such as broken furniture and excessive trash left behind, was done intentionally. Incidents such as the removal of keys from computer keyboards; the theft of various items; the leaving of certain voice mail messages, signs, and written messages; and the placing of glue on desk drawers clearly were done intentionally. However, regarding other observations, we generally could not make judgments about whether they were acts of vandalism because we did not have information regarding who was responsible for them, when they occurred, or why they occurred. Further, in most cases, we were unable to determine the exact number of incidents. When staff said they observed different numbers of incidents in the same location and/or category, we did not attempt to make judgments regarding which account was correct; rather, we used ranges. In the few instances where people observed a different number of items in a particular location, we used the lowest and highest numbers observed by different people in that location as the range. 
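To illustrate the aggregation rule, here is a minimal Python sketch of it; the rooms and counts are hypothetical, and the handling of an interviewee who reported a range rather than a single number anticipates the refinement described in the next paragraph.

```python
# A minimal sketch of the range-aggregation rule described above.
# The locations and counts are hypothetical; only the method reflects
# the text: take the lowest and highest counts reported for each
# location, then sum those bounds across locations.

def total_range(observations_by_location):
    """Each location maps to a list of (low, high) counts reported by
    individual interviewees; a person who gave a single number is
    recorded as (n, n)."""
    total_low = total_high = 0
    for reports in observations_by_location.values():
        # Differing accounts within one location become a range rather
        # than a judgment about which account was correct.
        total_low += min(low for low, _ in reports)
        total_high += max(high for _, high in reports)
    return total_low, total_high

# Hypothetical example: two observers disagree in room A; the single
# observer in room B gave a range rather than one number.
print(total_range({
    "room A": [(2, 2), (4, 4)],
    "room B": [(3, 5)],
}))  # -> (5, 9)
```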
In addition, when an individual provided a range of the number of items that he or she saw, we included that range in our calculation of the total range of observations for that category. When people said they observed incidents but did not provide a specific number, we did not estimate a number, but we noted this situation when relevant. Our interviews were conducted between 5 and 16 months after the transition, and we recognized that recollections could have been imprecise. Further, in some cases, when we conducted follow-up interviews with certain individuals for purposes of clarification, different accounts of their observations were provided. In those instances, we generally noted both accounts. In the overwhelming majority of cases, one person said that he or she observed a specific incident in a particular location. However, more than one person we interviewed observed most types of incidents. In some cases, people said that they observed damage, vandalism, or pranks in the same areas where others said they observed none, sometimes only hours apart. In calculating the number of incidents, we attempted to eliminate double-counting when people said that they observed the same types of incidents in the same locations or could not recall any location. We included repair and replacement costs provided by EOP and GSA for some, but not all, reported damage, vandalism, and theft in this appendix. When it opened in 1888, the Eisenhower Executive Office Building (EEOB), which was originally known as the State, War, and Navy Building and later as the Old Executive Office Building, contained 553 rooms. Over the years, the original configuration of the EEOB office space has been altered, and it now contains about 1,074 rooms. During the Clinton administration, the office space in the East and West Wings of the White House consisted of about 137 rooms. EOP staff cited about 100 rooms in the EEOB and 8 rooms in the White House where incidents were observed. According to the Office of Administration (OA) associate director for facilities management, approximately 395 offices were vacated during the transition: 304 in the EEOB, 54 in the West Wing, and 37 in the East Wing. Observations were made in 16 different units of the White House Office. However, more observations of damage, vandalism, and pranks were made in the offices of advance and scheduling, the counsel's offices, and the offices of the first lady on the first floor of the EEOB, and in the offices of the vice president on the second floor of the EEOB, than in other offices. Observations that were made in the White House are specifically noted in this appendix, while observations made in the EEOB are provided in the totals for each category or discussed as examples. The June 2001 list indicated that six door signs, six medallions, two EEOB doorknobs, and a presidential seal were stolen. Six EOP staff told us they observed that a total of 5 to 11 office signs, which are affixed with medallions (presidential seals about 2 inches in diameter), were missing. One of those six EOP staff also said he observed that six medallions were missing from office signs. These observations included an office sign that an EOP employee said she saw a volunteer remove on January 19 outside an office in the EEOB. The EOP employee said that the person who removed the sign said that he planned to take a photograph with it, and that she reported the incident to an OA employee.
Further, the EOP employee said that the person attempted to put the sign back on the wall, but it was loose. Two other EOP staff said they noticed that the sign outside that office was missing during the transition. Four EOP staff said they saw that a total of 10 to 11 doorknobs, which may have been historic originals, were missing in different locations. A February 7, 2001, GSA facility request form documented a request to "put doorknob on inter-office…door" in an office where an EOP employee said he observed that two pairs of doorknobs were missing. A GSA planner/estimator who said he was in charge of repairing and replacing building fixtures in the EEOB, including office signs, medallions, and doorknobs, said he received no written facility requests made to GSA for replacing missing office signs, medallions, or doorknobs during the transition. He said that work done in response to the February 7, 2001, GSA facility request form was not to replace a missing doorknob, but to repair one that had a worn-out part. He also said that over the past 20 years, doorknobs have been found missing about a half-dozen times in the EEOB, and not only during transitions. In addition, he said the medallions are difficult to remove and that a special wrench is needed to remove them from an office sign. An April 19, 2001, GSA facility request form documented a request for "replacement of frames & medallions," including one for an office where three EOP staff observed a missing office sign and medallion. The three other rooms that, according to the facility request form, needed office signs were located on one of two floors of the EEOB where an EOP employee observed four missing office signs. The OA associate director for facilities management said that much repair and replacement work was done during the transition without documentation being prepared because of the need to complete the work quickly. This official said, for example, that three to four missing office signs, a doorknob, and two or three medallions were replaced during the weekend of the inauguration without documentation. The OA associate director for facilities management; the director of GSA's White House service center; and the Secret Service deputy special agent in charge, presidential protection division, White House security branch, said that a presidential seal was stolen from a door in the EEOB. The Secret Service provided an incident report indicating that a presidential seal was reported missing at 8:40 a.m. on January 19, 2001, and last seen at 6:30 a.m. that day. According to the report, the seal was molded, hand-painted, 12 inches in diameter, and had been attached to a door with glue and screws. The Secret Service deputy special agent in charge of the presidential protection division, White House security branch, said that fingerprints were taken from the door where the seal was located, but no suspects were identified. The OA associate director for facilities management showed us where the seal had been located. EOP staff told us about additional missing items that were not contained in the June 2001 list. Two EOP staff told us that a total of 9 to 11 television remote control devices were missing from two offices. In addition, two EOP officials said that about 20 cellular telephones could not be located in the office where they belonged.
Regarding the cellular telephones, the deputy assistant for operations in that office said that she was told by an OA employee at the beginning of the administration that the telephones could be found in a particular room; however, they could not be found anywhere in the office suite, so new ones were purchased. Two EOP staff said that two cameras were missing from an office in the EEOB, and another EOP employee said that an ethics manual that, according to a former Clinton administration employee, had been prepared could not be located. Three EOP officials and one GSA official said that items that were on loan from a private collector and on display in the EEOB during the Clinton administration were found to be missing sometime after the beginning of the new administration. According to the OA senior preservation and facilities officer, the items consisted of a small oil painting, two china soup bowls, a china plate, a brass mantel clock, and a bust of President Lincoln. We were also provided with documentation describing these items. The director of GSA's White House service center said that he observed the items in the office (except for the Lincoln bust, which was in a different room, the vice president's ceremonial office) during the morning of January 20; but when he returned to the office in midafternoon, he noticed that many of the items were missing, although he did not know the exact number. In August 2001, the OA associate director for security said that the Lincoln bust had been returned by the former vice president's staff (for more information about the return of the missing bust, see comments later in this section made by the former vice president's former staff). Regarding the other collector's items that had been on display in another office, this official also said that he had contacted several former Clinton administration staff who had worked in the office where they had been displayed and that he was unsuccessful in locating the items. The associate director for security said that all of the former Clinton administration staff whom he contacted said that the items were still in the office when they left on January 20. Further, the associate director for security said that he had contacted the person in charge of the contract movers who were working in that office on January 20; according to the associate director for security, this person said that the items were still there at 4:00 p.m. or 4:30 p.m. on January 20. According to a GSA planner/estimator, it would cost $400 to replace a historic doorknob set (doorknobs on both sides of a door) with a solid brass replica, or $300 for a single historic doorknob replica; $125 for a new office sign with a medallion; and $75 to replace a medallion. Using those per-unit costs, if all of the items observed missing were replaced, it would have cost $2,100 to $2,200 for 9 to 10 doorknobs; $625 to $1,375 to replace 5 to 11 missing office signs with medallions; and $675 to $750 to replace 9 to 10 missing medallions. However, because specific locations were not provided regarding some of the observations of missing items, we were unable to determine whether all of the missing items had been replaced. In addition, the estimated cost of replacing missing doorknobs assumes that all of the doorknobs that were observed missing would be replaced with historic replicas, which was unknown. It was also unknown how many of the doorknobs that were discovered missing were historic originals.
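Because the sign and medallion estimates reduce to simple per-unit multiplication, the following minimal Python sketch reproduces them using only the planner/estimator's quoted prices and the observation counts above; the names are illustrative. The doorknob figure is not recomputed here because, as noted above, it depends on the unstated mix of $400 replica sets and $300 single replicas.

```python
# A minimal sketch of the per-unit replacement arithmetic described
# above; the unit prices are the GSA planner/estimator's, and the
# counts are the staff observations quoted in the text.

SIGN_WITH_MEDALLION = 125  # new office sign, medallion included
MEDALLION_ONLY = 75        # replacement medallion alone

def cost_range(unit_cost, low_count, high_count):
    """Return the (low, high) replacement cost for a count range."""
    return unit_cost * low_count, unit_cost * high_count

print(cost_range(SIGN_WITH_MEDALLION, 5, 11))  # -> (625, 1375)
print(cost_range(MEDALLION_ONLY, 9, 10))       # -> (675, 750)
```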
We also did not obtain any information on the value of the original historic doorknobs. The EOP provided purchase records indicating that it spent $2,040 for 26 cellular telephones on January 26, 2001; $729 for two cameras (including a digital camera costing $685) on March 16, 2001, and April 4, 2001; and $221 for 15 television remote controls on March 6 and 15, June 5, and July 10, 2001. The OA associate director for facilities management estimated it would cost about $350 to make a replica of the presidential seal that was reported stolen, which, as of March 2002, had not been replaced. Although we did not obtain a dollar figure for the possible historic value of the stolen seal, according to the OA associate director for facilities management, the $350 purchase price would not buy an exact replica of the brass seal; the seal was purchased in the mid-1970s and is no longer available, and the $350 would buy a plastic-type casting. The former director of an office where an EOP employee told us that she saw someone remove an office sign said that an elderly volunteer in her office removed the sign from the wall on January 19, 2001. She said that she did not know why he had removed the sign. She said that she attempted to put the sign back on the wall, but it would not stay, so she contacted OA and was told to leave it on the floor next to the door. The former office director said that she left the sign on the floor, and it was still there when she left between 8:00 p.m. and 10:00 p.m. on January 19. The former director of an office where an EOP employee told us that he observed two pairs of missing doorknobs said that the office had several doors to the hallway that at some time had been made inoperable, and he was not sure whether the interior sides of those doors had doorknobs. The former occupant of an office, where an EOP employee told us he observed that two pairs of doorknobs were missing (interior and exterior doorknobs for two doors to the outside that were no longer used) and a bolt was missing from a lock, said that a bookcase covered the door to the outside, and he did not know if that door had ever had any doorknobs. He said that to the best of his recollection, the bookcase still covered the door when he left between 10:00 a.m. and 11:00 a.m. on January 20, 2001. He also said that he did not take any doorknobs. A former employee whose office was next door also said that shelves were in front of the door with the missing doorknobs when she worked in that office suite. The deputy assistant to the president for management and administration from 1997 to 2001 said that people frequently take items such as doorknobs from the EEOB to keep as souvenirs, and he believed that visitors to the building were responsible for most of the thefts. He estimated that two to three doorknobs were taken from the EEOB per year. No former Clinton administration staff we interviewed who worked in the two offices where remote controls were observed missing by two EOP staff said they took the remote controls. In one of those two offices, we obtained comments from four former employees. One of those former employees said that it is possible that the remote controls were missing when she worked there; she remembered having to manually change channels on a television set in that office, and she questioned why someone would take a remote control if they did not also take the television set.
Another former employee said that some remote controls were missing from that office throughout the administration. A third former employee said that some of the televisions in that suite of offices did not have remote controls, and he was not sure whether they had ever had them. The fourth former employee said that it was possible that the remote controls were missing when he worked there. The former director of another office where two EOP staff told us they observed four to five missing television remote controls said that most of the television sets that were in her suite of offices were very old and may not have had remote controls. She said that she remembered staff in her office standing on chairs to manually change the channels on the televisions in the suite of offices. The former director of the office from which two EOP staff told us two cameras were missing said that the cameras were still in the office when she and her staff left between 9:30 p.m. and 10:30 p.m. on January 19, 2001. The former office director said that she was instructed to leave the office unlocked (she did not recall who gave her that instruction); she also said that, when the staff left, the cameras were left on an open shelf in the office. Regarding an ethics manual that an EOP employee told us that he could not locate, a former official who handled ethics issues during the Clinton administration said that a manual containing ethics materials was being compiled at the end of the administration for the new administration staff, but he did not know where the manual had been left. Three other former employees who worked for that office said that they were unaware of such a manual. With regard to the collector's items that two EOP staff and a GSA official told us were missing, the former director of the office where the items were displayed said that they were still in his office when he left at 12:30 p.m. on January 20 (except for the Lincoln bust, which was in another room). Another EOP employee who worked in that office during both the Clinton and Bush administrations said that she saw the items in the office at 5:00 p.m. on January 20, but she noticed that they were missing when she returned on January 22. She also noted that the office was left unlocked when she left on January 20 and that the items were left on open shelves. Regarding the Lincoln bust that two EOP staff told us was missing but was subsequently returned, a former employee who also worked in the former vice president's transition office provided us with a copy of a July 6, 2001, letter that he received from the counsel to Vice President Cheney asking about the missing item. The former employee said that, after receiving the letter, he located the bust at former Vice President Gore's personal residence and that he returned it to the White House on July 11, 2001. The former employee also provided us with a July 11, 2001, letter to the counsel to the vice president, in which he wrote that "it appears that the bust was inadvertently packed with the personal effects of Vice President Gore." The former counsel to the former vice president told us that Mr. Gore did not pack his own items in his office at the end of the administration. The former director of an office where an EOP official told us that she could not locate cellular telephones anywhere in the office suite where they belonged said that the former staff from that office turned in their cellular telephones as part of the check-out process.
A former official from that office provided copies of the check-out forms completed for 71 staff who worked in that office indicating that the cellular telephones were returned or that the category did not apply to certain employees. A former employee who helped collect the cellular telephones in that office said that all of the cellular telephones were returned and that he left them on a shelf in his office. The June 2001 list indicated that 100 computer keyboards had to be replaced because the "W" keys had been removed. Twenty-six EOP staff told us that they observed a total of 30 to 64 computer keyboards with missing or damaged (glued, whited-out, or pushed down) "W" keys in specific rooms or offices. We developed a range reflecting the observations because some staff said they saw different numbers of keyboards with missing or damaged "W" keys in the same rooms or offices and as a means of eliminating double-counting. In calculating the range, we took the lowest and the highest numbers of keyboards with missing or damaged keys observed in each specific room or office and then added those numbers across all rooms and offices. The low end of the range could be understated, however, because some EOP staff did not indicate that they looked at every keyboard in a room or office or did not provide a specific number of keyboards that they observed with missing or damaged keys. Further, the high end of the range could be overstated because, in at least one case, the number of keyboards observed with broken or missing "W" keys was greater than the number of keyboards that former Clinton staff said were in that space. Five other EOP staff said that they saw a total of four keyboards with inoperable, missing, or switched keys; they said they were not "W" keys or could not recall which keys were affected. In addition, five EOP staff and one GSA employee said that they saw 13 to 15 "W" keys taped or glued on walls; five EOP staff said they observed piles of keyboards or computers or a computer monitor overturned; three EOP staff said that something was spilled on their keyboards; one EOP official said that she found three "W" keys in a desk; and one EOP employee said that his keyboard was missing at the beginning of the new administration. In addition to the EOP staff we interviewed about their observations regarding the keyboards, we interviewed EOP personnel who worked with computers during the transition. The OA associate director for information systems and technology provided us with documentation indicating that on January 23 and 24, 2001, the EOP purchased 62 new keyboards. The January 23, 2001, purchase order for 31 keyboards indicated that "[k]eyboards are needed to support the transition." The January 24, 2001, purchase request for another 31 keyboards indicated "[s]econd request for the letter 'W' problem." The OA associate director for information systems and technology said that some of the replacement keyboards were taken out of inventory for the new administration staff, but she did not know how many. In an interview in June 2001, this official said that 57 keyboards were missing keys during the transition, and 7 other keyboards were not working for other reasons, such as inoperable space bars. She also said that she believed that more of the keyboards with problems were found in the offices of the first lady and the vice president than in other offices.
After later obtaining an estimate from the branch chief for program management and strategic planning in the information systems and technology division, who worked with computers during the transition, that about 150 keyboards had to be replaced because of missing or damaged "W" keys, we conducted a follow-up interview with the OA associate director for information systems and technology. In February 2002, the OA associate director for information systems and technology said that her memory regarding this matter was not as good as when we interviewed her in June 2001, but she estimated that 100 keyboards had to be replaced at the end of the Clinton administration and that one-third of them were missing the "W" key or were intentionally damaged in some way. She also said that of those 100 keyboards, about one-third to one-half would have been replaced anyway because of their age. The official also said that she was not focused on the keyboards during the transition, but saw about 10 keyboards with missing "W" keys, some space bars that were glued down, and a lot of keyboards that were "filthy." This official said that she took notes regarding the computers during the transition, but she was unable to locate them. An April 12, 2001, E-mail sent from the OA financial manager who approved the request to purchase 62 keyboards in January 2001 to an OA Information Systems and Technology Division branch chief indicated the following: "There were a number of keyboards which had the 'W' missing/inoperable during transition. Based upon our need to provide working keyboards to incoming EOP staff, we placed rush keyboard orders on January 23rd and January 24th. We ordered a total of 62 keyboards for a total cost of $4,850. This is the approximate number of keyboards that were defective." The EOP provided a copy of a March 27, 2001, OA excess property report that was prepared regarding its disposal of computer equipment. The report indicated that 12 boxes of keyboards, speakers, cords, and soundcards were discarded, but did not specify the number of keyboards that were included. The contract employee who prepared that report said that she did not know how many keyboards were discarded, but that each box could have contained 10 to 20 keyboards, depending on the size of the box. The EOP also provided a copy of a February 11, 2002, E-mail from a computer contract employee to the OA associate director for information systems and technology indicating that the contract employee had told the OA employee that "… she excessed eight boxes of 'junk' after the transition. Six of those boxes each contained 20 or more keyboards with either the 'W' problem or a broken space bar." When we interviewed the contract employee who was referred to in the E-mail as having excessed damaged keyboards, she said that she did not pack all of the boxes and did not look at all of the keyboards, but that most of the keyboards that she saw were missing "W" keys. She also said that she did not know how many discarded keyboards had missing or damaged "W" keys and that she did not know how many damaged keyboards were discarded after the transition. Further, she said that some of the keyboards that were discarded had been waiting to be disposed of before the transition because they were dirty or because of wear and tear. In a February 2002 interview, the OA associate director for information systems and technology said that she believed that four of the boxes of excessed computer equipment contained damaged keyboards.
Because of the lack of documentation, we could not determine how many keyboards may have been taken out of inventory to replace keyboards that were intentionally damaged during the transition. As a result, it was not possible to determine the total costs associated with replacing damaged keyboards. However, we are providing cost estimates for the various totals provided by EOP staff. In reviewing the costs, it must be recognized that, according to the OA associate director for information systems and technology, one-third to one-half of the keyboards for EOP staff, including the ones provided to EOP staff at the beginning of the administration, may have been replaced anyway because staff receive new computers every 3 or 4 years. Therefore, some of the damaged keyboards would have been replaced anyway. We did not attempt to obtain information on any other costs that may have been associated with replacing damaged keyboards, such as those related to delivering and installing new keyboards. Below is a table showing the different costs that could have been incurred on the basis of different estimates we were provided regarding the number of damaged keyboards that were replaced and the range we calculated regarding the observations of keyboards with damaged and missing keys. The cost estimates were calculated on the basis of the per-unit cost of the 62 keyboards that the EOP purchased in late January 2001 for $4,650, or $75 per keyboard.

Estimate of damaged keyboards / Number of keyboards / Cost at $75 per keyboard
Range of EOP staff observations / 30 to 64 / $2,250 to $4,800
Keyboards purchased on January 23 and 24, 2001 / 62 / $4,650
OA associate director's February 2002 estimate / 100 / $7,500
Branch chief's estimate / about 150 / about $11,250

One former senior Clinton administration official said that he found the reports of keyboards with missing "W" keys to be believable but regrettable and indefensible. Two former employees said that they observed a total of three to four keyboards with missing "W" keys in offices in the EEOB at the end of the administration. Another five former Clinton administration staff said that they heard people talking about removing "W" keys or keyboards with missing "W" keys before the end of the administration, but did not see any keyboards with missing "W" keys or see anyone removing them. The former senior advisor for presidential transition questioned whether as many as 60 keyboards could have been intentionally damaged because, while helping with the downloading and archiving of data from computers during the morning of January 20, he moved about 50 computer central processing units from offices in the EEOB and did not see any "W" keys missing from keyboards. In addition, regarding an observation of two keyboards with missing "W" keys in a certain office suite, this former official said that he was in that office suite after 10:30 a.m. on January 20 helping with the downloading and archiving of data from computers, and he did not see any keyboards with missing "W" keys there. The former manager of an office where an EOP employee said she observed 18 keyboards with missing "W" keys in an office suite said that there were 12 keyboards in that office suite at the end of the administration. The June 2001 list indicated that the damage included "[f]urniture that was damaged severely enough to require complete refurbishment or destruction--6 offices." It also indicated that a glass desk top was smashed and on the floor, and that desks and other furniture were overturned in six offices.
Ten EOP staff told us that they observed a total of 16 to 21 pieces of broken furniture, including 5 to 7 chairs with broken legs or backs; 5 to 7 broken glass desk tops, including one on the floor; 1 to 2 chairs with missing or broken arms; a desk with the drawer fronts removed; a sofa with broken legs; a credenza with broken door glass; a broken mirror; and a cabinet with its doors hanging by only one hinge. Six EOP staff also said that the locks on four desks or cabinet drawers were damaged or the keys were missing or broken off in the locks. This included the observation of a file cabinet with a key broken off that, when opened, contained a Gore bumper sticker. Another EOP employee said that he saw that the fabric was torn on three chairs. This employee said that the tears were made in the same spots on two of the chairs, which he observed in a hallway, and that the fabric on them appeared to have been new. He thought that they had been intentionally cut with a knife. One EOP employee said that her desk had five to six large cigar burns on it and that other desks had scratches that appeared to have been made with a knife. Five EOP staff also said that they observed writing inside the drawers of five desks. Four of these employees said the writing was inside the top drawers of the desks. The other employee could only recall on which floor he saw the writing. In August and September 2001, we were shown the writing in four of the five desks. Five EOP staff told us that they saw a total of 11 to 13 pieces of furniture that were on their sides or overturned in specific rooms or offices. The five people who told us the approximate time that they observed overturned furniture said they made those observations between the early morning hours and the afternoon of January 20. In addition, another EOP employee and the director of GSA's White House service center said they observed overturned furniture, but did not indicate where. The director of GSA's White House service center also said that furniture could have been overturned for a variety of reasons other than vandalism, such as to reach electrical or computer connections. Further, five EOP staff also said they saw pieces of furniture that appeared to have been moved to areas where they did not belong, such as desks moved up against doors. Six EOP staff said they observed a total of four to five desks with a sticky substance on them between January 20 and 22 in two different locations (an office in the EEOB and an office area in the West Wing). In addition, three EOP staff said that they saw a total of two to four desks with handles missing on January 20 or 21. Included were the observations of two employees who worked in the West Wing who said that their desks had a sticky substance on the bottom of drawers or a pull-out tray (one of those two employees also said that her desk was missing handles); an employee who said that a desk in that area had a sticky substance on the bottom of a drawer and was missing handles; an employee who said that another desk in the West Wing had glue on the bottom of a drawer and was missing handles; and an employee who worked in the EEOB who said that she had to scrub "sticky stuff" off her desk, but did not know what it was and said that it could have been the accumulation of years of grime.
Documentation relating to the observations made in specific locations included the following:

January 25 and 29, 2001, GSA facility request forms documented requests to gain access to, and for a key to, a locked file cabinet in a room where an EOP employee said that he found a key that was bent and almost entirely broken off in a cabinet that, once opened by a locksmith, contained Gore-Lieberman stickers. The facility requests were made by the EOP employee who told us about this observation.

A January 30, 2001, GSA facility request form documented a request to fix a broken desk lock in an office where an EOP employee said the lock on her desk appeared to have been smashed. The facility request was made by the EOP employee who told us about this observation.

A February 12, 2001, GSA facility request form documented a request to repair a leg on a sofa in an office on a floor of the EEOB where an EOP employee observed a sofa with broken legs.

A February 21, 2001, GSA facility request form documented a request to repair arms on two chairs in an office where two EOP staff told us that they had observed broken chairs. The facility request was made for the EOP employee who told us about this observation.

In August 2001, we observed the desk with the detached drawer fronts, which had not been repaired at that time. Other GSA facility request forms for the period January 18, 2001, to February 27, 2001, documented furniture-related requests that were not in locations where EOP staff reported observing these types of problems. They included requests to repair a chair back, a desk lock, and a mirror, and five requests to repair or replace broken or missing desk handles. Also included were requests for furniture repairs that did not reflect observations made by EOP staff, such as a request to repair a bookcase. Definitive information was not available regarding when the furniture damage occurred, whether it was intentional, and, if so, who caused it. The management office director said that during the first two weeks of the administration, the EEOB was filled with furniture that had exceeded its useful life. She believed that the broken furniture that she saw was in that condition as a result of wear and tear and neglect and not something intentional. Similarly, an EOP employee who saw four chairs with broken legs placed in the hall said the chairs could have been in that condition due to normal wear and tear and were not necessarily intentionally damaged. The OA director said that some furniture was thrown away because it was damaged, but "not a lot." He said that some furniture was put into a dumpster, and other pieces were transferred to the EOP storage facility. He also said that damaged furniture was put in the halls. In addition, he said that there were no records indicating that furniture was deliberately damaged and that no inventory of furniture in the EEOB exists. An associate counsel to the president provided photographs of four pieces of furniture that she indicated were moved to an EOP remote storage facility that is now quarantined. They included a chair with a missing leg, a chair with a missing back, a sofa without a seat cushion, and a desk with missing drawer fronts. No information was provided regarding from which offices these pieces of furniture had been taken or when or how the damage occurred.
GSA provided facility request forms dated between January 18, 2001, and February 27, 2001; we reviewed these and found 49 furniture-related requests that cost a total of $6,964 to complete. Some individual repair costs were substantially more than others, such as $1,855 to refinish a desk and $628 to repair a bookcase. It was unknown what portion of those repair costs, if any, was the result of intentional damage caused during the transition. Further, the work requests for some repairs indicated that they included work other than furniture repair. GSA facility request forms relating to observations made in specific locations indicated that about $258 was incurred and included the following: $75 to repair arms on two chairs, $75 to repair a sofa leg, $54 to gain access to a locked file cabinet, and $54 to fix a broken desk lock. We did not obtain any additional possible costs related to other furniture-related observations, such as those associated with placing overturned furniture upright, removing glue that had been left on desks, or replacing broken glass desk tops. A former Clinton administration employee who worked in an office where an EOP employee showed us writing in his desk told us that he wrote a "goodwill" message inside a drawer of his desk. This former employee said that he got the idea to write a message inside his desk because, historically, vice presidents sign the inside of a desk in their office. Clinton administration officials said that some of the space they vacated needed cleaning and that a conscious decision had been made early in the administration not to spend much money on repairs and upkeep during the administration in view of the generally tight budget; therefore, it could be expected that some furniture showed wear and tear. The former director of one office where EOP staff told us they observed two to four pieces of broken furniture said that the office furniture had been in poor shape for some time, but the staff tolerated it. He said that they did not want to send the furniture away to be repaired because it was uncertain how long it would take or whether the furniture would be returned. The former manager of an office where two EOP staff told us they observed one to two chairs with broken or missing arms said that arms on two chairs in that suite of offices had become detached a year or two before the transition and that carpenters had tried to glue them back, but the glue did not hold. We asked GSA to provide facility request forms for 1999, and we found two requests to repair chairs in that office suite made by the former office manager. A former Clinton administration employee who worked in an office where three EOP staff told us they observed a desk with two detached drawer fronts said that the fronts of two drawers on his desk had come off when he worked there and that someone was contacted once or twice over 5 years to have them fixed, but the glue did not hold. In addition, this former employee said, regarding observations by EOP staff of two to three chairs with broken backs in his office, that a chair with a broken back had been in his office for a long time before the transition. Another former employee in that office said that he remembered that the front of a drawer of the other employee's desk was held on with rubber bands and that it had been that way for about the last 2 years of the administration.
The former director of an office where an EOP official told us he observed a broken glass desk top on the floor during the afternoon of January 20 said that he did not observe that when he left the EEOB at about 1:00 a.m. on January 20, and he said that he and the deputy director were the last office staff to leave. Similarly, the former senior advisor for presidential transition said that he was in the same office after 11:00 a.m. on January 20, and he did not see a broken glass desk top. Three former staff who worked in an area of the West Wing where five EOP staff told us they found glue or a sticky substance on two to three desks said that they left the White House between midnight on January 19 and 4:30 a.m. on January 20 and were not aware of glue being left on desks. One of those former employees, who worked in the same area where EOP staff said they observed one to three desks with missing handles, said that her desk was missing handles when she started working at that desk in 1998, and it was still missing them at the end of the administration. The former occupant of an office suite where an EOP employee told us she observed a desk with five to six large cigar burns said that there may have been a burn on one of the two desks in his office, but he did not put it there. He said that he smoked, but not cigars, and not in his office. This former employee also said, with respect to an additional observation by an EOP employee that a desk in the office suite had scratches on it that appeared to have been made with a knife, that he did not recall seeing any scratches on either of the two desks in his office. Similarly, the former senior advisor for presidential transition said that he was in the same office after 10:30 a.m. on January 20, and he did not see any scratches on a desk in that office. Three former occupants of a suite of three rooms where two EOP officials told us they observed a table and two desks overturned in the afternoon of January 20 said that no furniture was overturned in their offices when they left on January 20 and that their desks would have been difficult or impossible to move because of their weight. One of the three former occupants said that he was in his office until 3:30 a.m. or 4:30 a.m. on January 20, the second former employee said he was in his office until 10:00 a.m. or 11:00 a.m. on January 20, and the third former employee said that she was in her office until 11:50 a.m. or 11:55 a.m. on January 20. Regarding another office where an EOP official told us that he observed overturned furniture between 3 a.m. and 4 a.m. on January 20, the former senior advisor for presidential transition said that he was in that office after 11:00 a.m. on January 20, and he did not see any overturned furniture. Similarly, the former director of that office, who said that he left the office around 1:00 a.m. on January 20, said that he did not observe any overturned furniture. Regarding furniture that an EOP employee said she observed in a hallway of the EEOB, two former employees who worked in an office outside which the furniture was seen said that they had moved bookcases, file cabinets, tables, and chairs out of their office into the hallway to help the cleaning staff at the end of the administration. The June 2001 list indicated the following: "The phone lines had been cut in the EEOB--pulled from the wall." "50-75 phone instruments had been tampered with requiring more work than the standard reset.
Of those, most had the identifying templates removed." "Voice mail announcements had been changed to answer the line with obscene messages. After finding 10-15, workers stopped resetting them individually and reset the entire system." "A stu3 phone in the First Lady's office was left open with the key in it." Two EOP staff told us that they saw a total of 5 to 6 telephone lines "ripped" (not simply disconnected) or pulled from the walls during the early morning hours of January 20. In addition, the OA director said he saw some plugs that looked like they were damaged, and another EOP employee said that she saw a telephone cord that appeared to have been cut with scissors. One EOP employee said that she saw at least 25 cords torn out of walls in two rooms on January 22. That employee did not know exactly what types of cords were torn out of the walls, but said she thought that they were telephone and computer cords and also could have been fax and electrical cords. A January 24, 2001, GSA facility request form documented a request to "organize all loose wires and make them not so visible" in an office suite where an EOP employee said that at least 25 cords were pulled from the walls. The facility request was made by the EOP employee who told us about this observation. Five EOP staff said they observed a total of 98 to 107 telephones that had no labels identifying the telephone numbers in specific rooms or offices. Further, an EOP employee who coordinated telephone service during the first month of the administration estimated that 85 percent of the telephones in the EEOB and the White House were missing identifying templates or did not ring at the correct number. She did not identify the locations of these telephones, which could include those that were observed without identifying labels by four other EOP staff. This employee said that she was the "middleman" between EOP staff and contractors regarding the telephones during the first month of the administration and that she went into every office of the EEOB and the White House during that time. The OA telephone services coordinator said she believed that telephone labels were removed intentionally and that "quite a few" labels were missing during the transition, but she did not agree that 85 percent of the telephones were missing them. She said that she had observed 18 telephones that were missing number labels. The telephone service director said that in one room, missing telephone labels were replaced before noon on January 20, but were found missing again later that day. Five EOP staff said that 13 to 19 telephones were forwarded to ring at other numbers. Further, the EOP employee who coordinated telephone service during the first month of the new administration estimated that about 100 telephones were forwarded to other numbers but, with one exception, did not specifically identify which telephones. The telephone service director said the numbers for telephones that were missing identifying labels were determined in most cases by placing calls and noting what numbers appeared on the displays of receiving telephones. He also said that another way to identify the telephone numbers was for a telephone technician to obtain them from the telephone service provider. This official also said that, although there is a standard form for telephone service requests, preparation of this paperwork was not required between January 20 and 22 because of the urgency to get new employees moved into their offices.
Seven EOP staff, including the telephone service director, said they saw telephones unplugged and/or piled up on two floors of the EEOB and in four specific rooms on those floors. Two EOP staff said that they found telephones that were not working. One of those employees told us that, because many telephones were not working in a section of a floor of the EEOB, the switchboard forwarded calls from that area to other offices where telephones were working, and that she walked from office to office delivering telephone messages. In addition, one EOP employee (a different employee for each of the following observations) said that he or she observed "some" telephones that were moved to other rooms while still connected, two telephones plugged into the wrong plugs, and one telephone with an incorrect number. The EOP provided documentation that summarized telephone service orders closed from January 20, 2001, through February 20, 2001, and contained 29 service orders that cited needing or placing labels; 6 of the 29 service orders were for work in offices where telephone labels were observed missing. All of the 29 service orders mentioning labels were part of orders for other telephone services, as were four individual work orders that the EOP provided, not among the 29, that cited labeling. In discussing the telephone service requests, the OA telephone services coordinator said that the requests for labels did not necessarily mean that the telephones had been missing labels with telephone numbers. She said that a new label might have been needed for a new service, such as having two lines ring at one telephone. Documentation provided by the EOP included a work order to retrieve a telephone that was on the floor in one room and another work order that said, in part, "replace labels on all phones that removed." The documentation did not include any work orders indicating that work was performed specifically to correct the forwarding of telephone calls. Two EOP employees who helped establish telephone service for new staff said that they heard a total of 6 to 7 obscene or vulgar voice mail messages that had been left on telephones in vacated offices. These employees could not recall the specific content of the messages or the locations of the telephones. In addition, 13 EOP staff said they heard a total of 22 to 28 inappropriate or prank voice mail greetings or incoming messages that had been left. These totals include the statement of the telephone service director, who told us that he heard 10 inappropriate voice mail messages, 5 to 6 of which were vulgar, during the early morning hours of January 20. The content of the most commonly heard voice mail message that EOP staff told us about (three messages heard by four EOP staff) was that the former staff would be out of their offices for the next 4 years. Two EOP staff said they heard a voice mail greeting left by a former Clinton administration employee, who identified himself in the message, in which he said he would be out of the office for 4 years due to the Supreme Court decision and left his home telephone number. The telephone service director said that EOP staff needed to be physically present in the White House complex to record these greetings on their voice mail by using a passcode. Ten EOP staff said that they had no voice mail service when they began working in the White House complex.
The telephone service director said that they initially attempted to erase inappropriate and vulgar voice mail messages on an individual basis, but it was eventually decided to erase all of them. The OA associate director for facilities management said that no record was kept of voice mail complaints, but so many complaints were received about them that voice mail service was discontinued for a while to clear out the system. This official said that no one had access to voice mail for at least 5 days and possibly up to 2 weeks. He also said that he made the decision not to erase all voice mail messages and greetings at the end of the administration because doing so would have deleted voice mail for all EOP staff, including staff who did not leave at the end of the administration, and not just for the departing staff. The OA telephone services coordinator said that voice mail greetings and messages were not removed on a systemwide basis at the end of the Clinton administration because the EOP had not yet done an equipment upgrade, which was done later. Two EOP officials said they observed a stu3 (secure) telephone with the key left in it. We interviewed the director of operations support at the White House Communications Agency (WHCA), which coordinates the installation of secure telecommunications equipment in the White House complex. This official said that WHCA had no record of having installed a secure telephone in the office where EOP staff said they observed it and did not know whether such equipment had been used during the Clinton administration. He also said that, for the equipment to be operational in a secure mode, the key in the receiving equipment also must be engaged. The official said that, typically, this type of equipment is picked up from offices by WHCA at the end of an administration, but because the agency had no record of the equipment in that office, it was apparently left there. According to the White House, based on what it said were extremely conservative estimates and straightforward documentation, the government incurred costs of at least $6,020 to replace missing telephone labels and reroute forwarded telephones. The documentation provided included two blanket work orders and associated bills, a closed orders log for the period January 20 through February 20, 2001, eight individual work orders for telephone service, and two monthly AT&T invoices. The White House also identified, but did not provide, 19 other individual telephone service work orders that it used in its cost estimate for replacing or placing labels on telephones. Six of the 29 work orders listed on the closed orders log that cited needing or placing labels and four individual work orders that included labels were for work in offices where telephone labels were observed missing. However, both the orders listed on the closed orders log and the individual work orders, as well as the blanket work orders, cited other services besides labeling, and the documentation provided did not make clear to us the extent to which relabeling was done solely to replace missing labels or would have been necessary anyway due to changes requested by new office occupants. None of the documents provided specifically cited correcting forwarded telephones. The documentation provided included blanket work orders representing 114 hours for work done on January 20 and 78.5 hours for work on January 21.
Costs associated with individual services were not identified for the blanket work orders, but they indicated that the services were for "install, moves, relabeling, rewire, etc." The summary of work orders closed between January 20, 2001, and February 20, 2001, listed work orders for services such as installing new telephones and fax lines, replacing labels on telephones, clearing voice mail, resetting passwords, and reprogramming telephone numbers. The OA telephone services coordinator estimated that a technician could determine the numbers for 20 to 30 telephones per hour, but also indicated that a technician's hourly rate of $75.92 ($113.88 per hour on Saturdays and $151.84 per hour on Sundays) would be charged even if it took less than an hour to complete a service order. Although we do not question that costs were incurred to replace labels or reroute forwarded telephones, we do not believe the documentation provided is clear or descriptive enough to indicate what those costs were. A January 29, 2001, telecommunications service request documented a request for services including "replace labels on all phones that removed," but the orders closed log for this service request showed "install new /replace label." This service request was not made for an office where telephone labels were observed missing. A February 7, 2001, telecommunications service request documented a request, at a cost of $75.92, to remove a telephone from an office where piles of telephones were observed. Regarding observations by EOP staff that telephone cords were "ripped" from walls, one former Clinton administration employee said that cords may have been pulled out of walls as a result of moving. She said that she remembered seeing two telephone cords pulled out of walls previously, but not around the time of the transition, which she believed was the result of an office move. With respect to the observation that telephone cords were cut, another former Clinton administration employee noted that a computer cord was cut with a carpet stapler when carpet was being stretched in an office. (She said this did not occur during the transition.) The former occupant of an office suite (consisting of his office and a reception area) where an EOP employee told us she observed more than 25 cords torn out of the walls said that he did not observe any computer or telephone cords that were cut or torn out of the walls in any office when he was helping to remove hard drives from computers during the morning of January 20. He said that his office had only 5 telephone and computer cords when he worked there. Similarly, the former senior advisor for presidential transition said that he was in that office after 10:30 a.m. on January 20, and he did not see any telephone or computer cords cut or torn out of walls. The former chief of staff of an office where two EOP staff told us they observed 9 to 11 missing labels identifying the telephone numbers said she was aware that six telephones in that office suite were missing labels before the transition. She said those telephones were used by interns to invite people to events and that they were used for outgoing calls only, not to receive calls. In addition, another former employee said that a telephone in a room (a reception area) in an office where EOP staff told us they observed missing labels identifying the telephone numbers was missing such a label before the transition. She said that, while she worked there, the office staff did not know the number for that telephone.
She also said that the telephone was used only by visitors for outgoing calls. A former employee who also worked in that office suite said that other telephones in the office suite were missing labels before the transition, but he did not know how many were missing. Another former employee who worked in another office where two EOP staff told us they observed missing telephone labels said that her telephone did not have a label identifying the number when she started working there in 1997, and that someone told her what her telephone number was. The former director of another office, where an EOP official told us he observed missing telephone labels, said that staff sometimes moved to other desks and took their telephone numbers with them. The deputy assistant to the president for management and administration during the Clinton administration said that he did not know why labels identifying the telephone numbers were missing. He noted that the label for his telephone was missing when he started working in the White House complex in 1997. The former manager of an office where an EOP employee told us he observed telephones that were unplugged said that he was not aware of anyone in that office unplugging them. A former employee in another office where EOP staff told us they observed telephones that were piled up said that there were extra telephones in that office that did not work and had never been discarded. The former senior advisor for presidential transition said that, during transition meetings, EOP staff discussed a plan to erase the voice mail greetings on all of the telephones during the transition. He provided a typewritten copy of notes regarding an April 28, 2000, transition team meeting indicating “telephones—mass clearing.” However, he said that given the reports of inappropriate voice mail messages found at the beginning of the new administration, the plan apparently had not been carried out. He also said that it would have been technically possible to erase voice mail greetings for most departing EOP staff without also deleting the greetings for staff who did not leave at the end of the administration. In January 2002, he provided us with his telephone number in the White House complex during the Clinton administration; when we called it, his voice mail greeting could still be heard. This former official also said that some telephones were forwarded to other numbers for business purposes at the end of the Clinton administration. He said, for example, that some of the remaining staff forwarded their calls to locations where they could be reached when no one was available to handle their calls at their former offices. A former employee who worked in an office where three EOP staff told us they heard a prank voice mail greeting said that on his last day of work at the end of the administration, he left a voice mail greeting on his telephone indicating that he would be out of the office for the next 4 years due to a decision by the Supreme Court, and he provided his home telephone number. He said that he presumed that the message would be erased the day after he left because he would no longer be employed there. He also said that departing staff were told that they would not be able to access voice mail after they left, but could not recall who told him that or how it was communicated to him (verbally or by E-mail). 
This former employee said that he left the message in "good humor." The former manager of the office where two EOP officials told us they observed a secure telephone with the key left in it said that the telephone had not been used for 4 years and was not active. The June 2001 list indicated that "[s]ix fax machines were moved to areas other than the ones in which they had been installed, making them inoperable." One EOP official told us that he had seen 12 fax machines with the telephone lines switched and another fax machine that was disconnected. Another EOP official said that he also observed some fax machines that were swapped between rooms. Three EOP staff said that they observed a total of 5 copy machines, fax machines, and printers that did not work. Two EOP staff said they observed fax machines moved to areas where they did not appear to belong, including some in the middle of a room, unplugged. An EOP employee who helped prepare the offices for new staff said that the serial numbers for 5 to 7 copy and fax machines and 10 printers were marked out or removed, and that without the serial numbers, he was unable to determine whether the machines were subject to maintenance agreements. He also said that no one knew the access codes needed for some copy machines. Another employee said that a printer and fax machine had been emptied of paper. The EOP provided a copy of a log of broken copy and fax machines for the period from January 29, 2001, to February 28, 2001. The log indicated 18 instances of problems with copiers during this period, such as paper jamming, a feeder not working, and crooked printing, and 19 instances of fax machine problems, including machines not being able to send or receive and a request for service that had not been completed the previous week. One of the items on the log was to repair a copy machine in an office where an EOP employee said that the copy and fax machines and printer did not work, although he said that he did not believe that anything intentional caused them not to work. It was not possible to ascertain when the copier and fax machines in the log were broken and whether they were broken intentionally, and if so, who was responsible. We did not request cost information associated with preparing these fax machines, printers, and copy machines for use by the new staff. The former director of an office where an EOP official told us that fax machines were swapped between rooms said that a fax machine may have been pulled around a corner, but it was not done as a prank. Regarding a statement by an EOP employee that no one knew the access codes needed for some copy machines, the former senior advisor for presidential transition said he did not believe that any copy machines in the White House complex had access codes. The June 2001 list indicated that "[o]ffices were left in a state of general trashing," including contents of drawers dumped on the floor, desk top glass smashed and on the floor, and refrigerators unplugged with spoiled food. In addition, the list indicated that only 20 percent of the offices could be made available to incoming staff late in the afternoon of January 20. Twenty-two EOP staff and 1 GSA employee told us that they observed offices that were messy, disheveled, or dirty or contained trash or personal items left behind in specific rooms or offices. In addition, 6 EOP staff and 4 GSA staff said they observed office space in this condition on specific floors of the EEOB but could not recall the specific room or office.
Nine additional EOP staff and 2 GSA staff said that they observed office space in this condition, but they could not recall any locations. (These could be the same observations made by EOP staff in specific rooms or offices.) Included among these observations were EOP staff who described the office space as being "extremely filthy" or "trashed out" and who said that a certain room contained "a malodorous stench" or looked like there had been a party. GSA's director of the White House service center also said that numerous unopened liquor and wine bottles were found. GSA facility requests for cleaning in offices where observations were made included the following: A January 30, 2001, GSA facility request form documented a request to clean carpet, furniture, and drapes and to patch and paint walls and moldings in an office that an EOP employee said was "trashed out," including the carpet, furniture, and walls, and had three to four "sizable" holes in a wall. The facility request was made by the EOP employee who told us about this observation. Another January 30, 2001, GSA facility request form documented a request to clean carpet, furniture, and drapes in a different office that an EOP employee said was filthy and contained worn and dirty furniture. January 25, 2001, and February 17, 2001, GSA facility request forms documented requests to clean carpet, furniture, and drapes in a suite of offices that an EOP employee told us was "extremely trashed" and smelled bad. The facility requests were made by the EOP employee who told us about this observation. We interviewed 23 GSA staff who cleaned the offices during the transition and 4 GSA team leaders. None of the 23 cleaning staff said they observed any damage, vandalism, or pranks. Two of the cleaning staff said that they saw personal items left behind, such as books and an eyeglasses case; 2 employees said that they observed a lot of trash; 1 employee said that he saw empty desk drawers on tables; and 1 employee said that she saw discarded unused office supplies. Three of the 4 team leaders, who were responsible for different floors of the EEOB, said they did not observe any damage. Three of the team leaders said that they saw personal items left behind, such as unopened beer and wine bottles, a blanket, shoes, and a T-shirt with a picture of a tongue sticking out, draped over a chair. One team leader said that the space on the floor of the EEOB where she worked was "extremely filthy," and another leader said that trash was piled up because there were not enough dumpsters to handle all of the trash. EOP and GSA staff also provided specific examples of their observations regarding the condition of the office space. Four EOP staff (4 different employees for each of the following observations) said they saw food left in refrigerators and that the furniture, carpet, or drapes in their offices were dirty. Three EOP staff (3 different employees for each of the following observations) said they saw holes or unpainted areas of walls where items had been removed and a key broken off in a door leading to a balcony. Two EOP staff and 1 GSA employee said they saw drawers pulled out of desks. Two EOP staff (2 different employees for each of the following) said they saw the contents of desk drawers or filing cabinets dumped on the floor in two offices; pencil sharpener shavings on the floor of two offices; and paper hole punches arranged on a floor to spell a word.
One EOP or one GSA employee (a different employee for each of the following observations) said he or she saw the following: an unplugged refrigerator, a plant turned upside down, a room without lightbulbs, a broken safe lock, and a bolt missing from a lock on the door to the outside. The director of GSA's White House service center during the transition said that most of the cleaning began at about 7:00 a.m. or 8:00 a.m. on January 20 after OA provided a list of offices to be cleaned. He said that OA authorized GSA to clean only a few offices before January 20 and that the cleaning was completed by the morning of Monday, January 22. The OA director said that the offices were in "pretty good shape" by the evening of January 22. Of the 23 EOP and GSA staff who said they saw offices that were messy, disheveled, or dirty or contained trash or personal items left behind in specific rooms or offices, 13 staff made these observations on January 20 and 21; the remaining 10 staff made these observations on or after January 22. The OA associate director for facilities management said that there were "not a lot" of offices that could have been cleaned before January 20, and that maybe 20 such offices were on a list that was given to GSA. He also said that it took 3 to 4 days after January 20 to complete the cleaning. He said that there was more to clean during the 2001 transition than during previous transitions because (1) more staff were working in White House office space during the Clinton administration compared with previous administrations, (2) many people were messier than they should have been, and (3) it was more difficult to do routine cleaning in some offices because of their condition. This official said the amount of trash he saw was "beyond the norm" and that he observed a limited amount of "trashing" of offices. He also said that it would have taken an "astronomical" amount of resources to have cleaned all of the offices by Monday, January 22. In his opinion, departing staff should have left their offices in a condition so that only vacuuming and dusting would have been needed. A White House management office employee who said that he went into almost all of the offices on three floors of the EEOB and part of another floor said that he observed trash "everywhere" on January 21. He said that what he observed was probably a combination of some trash having been dumped intentionally and an accumulation built up over the years. Another employee said that an office that he saw looked like someone had deliberately left a mess, and that it appeared that someone was sending a message that they were going to make a mess for everyone. For example, he said that desk drawers were dumped out, lamps were on chairs, pictures were taken down from the walls, and the door was jammed with pictures leaning against it so that the door could not be easily opened. Further, the OA director said that it looked as if a large number of people had "deliberately trashed the place," which he considered to be vandalism. The EOP also provided seven photographs of two or three offices in the EEOB taken on January 21, 2001, because, according to an associate counsel to the president, they were possibly responsive to our request for any record of damage that may have been caused deliberately by former Clinton administration staff.
These photographs showed piles of empty binders and other office supplies left on the floor, empty filing trays stacked on a sofa, an empty Styrofoam coffee cup on a desk, a desk pad with writing on it, a box of empty bottles left under a desk, a Christmas wreath on a table, a string of Christmas lights on a wall, Easter decorations, and three soda cans on a shelf. A GSA facility request form indicated that $1,150 was spent on professional cleaning services in a suite of offices that included a room that an EOP employee said was "extremely trashed" and smelled bad. We did not attempt to determine the costs associated with any additional cleaning effort that may have been needed as a result of excessive trash that needed to be discarded. Former Clinton administration staff generally said the amount of trash that EOP and GSA staff said they observed during the transition was what could be expected when staff move out of office space after 8 years; many staff were working up to the end of the administration and moved out at the last minute; staff worked long hours in their offices, often eating meals at their desks; certain offices were messy throughout the administration and not only at the end of the administration; trash cans and dumpsters were full, so trash was placed next to them; and staff expected GSA to clean their offices after they left. Regarding the observations by some EOP staff who said that excessive trash had been intentionally left in vacated offices, none of the 67 former Clinton administration staff we interviewed who worked in the White House complex at the end of the administration said that trash was left behind intentionally as a prank or act of vandalism. One former employee who worked in an administrative office said that she did not observe much cleaning of offices before January 20, and she believed that GSA did not have enough supervisors and decision makers to oversee the cleaning. A former administrative head of another office that no one said was left dirty said that he had asked 25 professional staff to help clean the office before they left. In a letter sent to us in January 2002, the former deputy assistant to the president for management and administration and the former senior advisor for presidential transition said that, for months before the transition, they had been assured that additional cleaning crews would be detailed to the White House complex to assist GSA cleaning crews during the final week of the administration. However, the former officials said that they did not observe any cleaning crews during the evening of January 19 or the morning of January 20. Regarding files that an EOP official told us he observed dumped on a floor in another office during the afternoon of January 20, the former senior advisor for presidential transition said that he was in that office after 11:00 a.m. on January 20, and he did not see any files on the floor. The former director of that office also said that files could not have been found dumped on the floor on January 20 because they were archived before he left on January 19. A former official in an office where an EOP employee told us she observed dirty carpet said that, except for one room in the office suite, no money had been available for carpet cleaning throughout the administration. A former employee of an office where three EOP staff told us they observed a key to a door to a balcony broken off in the lock said that only the Secret Service had a key to that door.
The office manager for the office where an EOP employee told us it appeared that a pencil sharpener was thrown against the wall and that pencil shavings were on the floor said the sharpener in that office did not work and may have been placed on the floor with other items to be removed. Regarding items that an EOP employee told us appeared to have been "ripped" from walls, a former employee said the room had not been painted for years, and items had been put up and removed from that office several times. In addition, the former director of an office, where an EOP employee told us he observed paint missing from the walls, said that when the office was painted about a year before the transition there were air bubbles in the paint that turned into cracks and peeled. The former director of another office where an EOP employee told us she observed a broken safe lock said that it had not worked correctly for some time. The former occupant of an office, which an EOP employee told us contained an odor when he started working there, said that his former office had smelled bad since he started working there in 1999. He said the office smelled moldy every time it rained, and he believed that water seeped into his office from a balcony. In addition, regarding another office that an EOP employee told us smelled bad, the former occupant of that office said that he did not smoke in his office. Regarding the photographs of messy offices during the transition that the EOP provided, the former senior advisor for presidential transition said the photographs showed trash, but they did not show evidence of vandalism. The June 2001 list indicated that "[w]riting on the walls (graffiti) in six offices" was found. Six EOP staff said that they observed writing on the wall of a stall in a men's restroom that was derogatory to President Bush. In addition, two EOP staff and one GSA employee said that they observed messages written on an office wall. Two of those three employees said that the writing they observed in that office was on a writing board that could be erased. Two other EOP employees said that they saw pen and pencil marks on the walls of two offices, but no written words. This included one employee who said that it looked like there were cracks in the paint, but because the marks washed off, he thought it looked like someone had used a pencil on the wall. Twenty-nine EOP staff said that they observed a total of 25 to 26 prank signs, printed materials, stickers, or written messages that were affixed to walls or desks; placed in copiers, printers, desks, and cabinets; or placed on the floor in specific rooms or offices, and that there were multiple copies of these in some locations. The observers said these materials were generally uncomplimentary pictures or messages about President Bush or jokes about the names of certain offices. Six EOP staff said they saw a total of four messages that they said contained obscene words; three of the messages were observed in the same location. No one told us the pictures that they observed were obscene. Three other EOP staff and two GSA staff said that they observed a total of eight to nine prank messages and materials on certain floors of the EEOB, but they could not recall the specific rooms or offices. The messages and materials that were observed on certain floors, but not identified by specific office or room, could be the same as those that were observed in specific locations.
In June and November 2001, EOP staff provided copies of two prank signs that were found during the transition, which were derogatory jokes about the president and vice president. In August and September 2001, we were also shown a roll of political stickers that were left behind and two stickers affixed to a file cabinet and desk containing derogatory statements about the president. We did not request cost information associated with removing writing on walls and removing prank signs, stickers, and other written messages from the office space because we did not believe that such costs would be readily available. Thirteen former Clinton administration staff said they saw a total of 10 to 27 prank signs in the corridors of the EEOB. One of those former employees, who saw two signs, said she could not recall their content, but said they were "harmless jokes." The June 2001 list indicated that "six to eight 14-foot trucks were needed to recover new and usable supplies that had been thrown away." The OA associate director for the general services division, who is responsible for office supplies, said that about eight truckloads of excessed items were brought to an EOP warehouse where they were sorted into usable and nonusable materials. He said that departing staff brought excess office supplies to a room in the basement of the EEOB, which eventually became overloaded, and supplies were left in the hallway. However, he was not aware of any usable supplies being discarded. One EOP employee and one GSA employee said they saw supplies that were thrown away, but no one said that trucks were needed to recover supplies that had been thrown away. Another EOP employee said that there were no office supplies in her office when she started working in the EEOB. We did not obtain cost information concerning the value of office supplies that may have been thrown away because the statement that six to eight 14-foot trucks were needed to recover new and usable supplies that had been thrown away was generally not corroborated. The former deputy assistant to the president for management and administration said that departing staff were instructed at the end of the administration to recycle usable office supplies by bringing them to the basement of the EEOB. The former senior advisor for presidential transition said that office supplies were brought to that room so that staff could obtain them from there, rather than obtaining them from the supply center. A former EOP employee said that the room where the supplies were taken became overloaded at the end of the administration. A former office manager said that staff received E-mails indicating that any office supplies that were left in their offices would be thrown away. The OA associate director for facilities management said that he found a secure employee identification and two-way radios that were left in an office and not turned in to WHCA. Another EOP employee said that he observed materials that were not returned to the White House library. A GSA employee said that she observed a few classified documents left unsecured in closets, and the telephone service director said that he found classified documents in an unlocked safe. Another EOP employee said that he found sensitive documents in a room. No costs were associated with these additional observations.
Regarding two-way radios that an EOP official said were left in an office and not turned in to WHCA, the director of operations support at WHCA, which handles such equipment, said that the agency had no record of having provided two-way radios to the office where they were observed. The official said that this type of equipment is typically picked up from offices by WHCA at the end of an administration, but because the agency had no record of having provided equipment to that office, it was apparently left there. The former manager of the office where an EOP official told us he observed two-way radios left and not turned in to WHCA said it was possible that the radios were not turned in to that agency. We attempted to determine how the condition of the White House office space during the 2001 presidential transition compared with the conditions during previous recent transitions by interviewing 14 Executive Office of the President (EOP) staff, 2 General Services Administration (GSA) staff, 19 former Clinton administration staff, and a National Archives and Records Administration (NARA) official about their recollections of damage, vandalism, or pranks during previous transitions. In addition, we reviewed news media reports to identify any reported damage, vandalism, or pranks during the 1993, 1989, and 1981 transitions. Five EOP staff told us they observed damage, vandalism, or pranks in the White House complex when they worked there during past transitions. Regarding the 1993 transition, an EOP employee said that she observed five desks containing prank pictures of former Vice President Gore with written messages on them and a banner on a balcony. In addition, two EOP staff (a different employee for each of the following observations) said he or she observed 1 to 2 poster-sized signs and 5 to 10 missing office signs. Another EOP employee showed us writing inside a desk that was dated January 1993. Seven EOP staff who had worked in the White House complex during previous transitions compared the condition of the office space in 2001 with that during earlier transitions; six said that the condition was worse in 2001, and one said that the office space was messier in 1993 than in 2001. The director of the Office of Administration (OA), who had been present during five previous transitions, said that he was "stunned" by what he saw during the 2001 transition and had not seen anything similar during previous ones, particularly in terms of the amount of trash. The OA associate director for facilities management said that there was more to clean during the 2001 transition than during previous transitions. The telephone service director, who had worked in the White House complex since 1973, said that he did not recall seeing, in past transitions, the large amount of trash that he had seen during the 2001 transition. Further, an employee who had worked in the White House complex since 1984 said that office space in the complex was messier during the 2001 transition than all of the other transitions he had seen. The chief of staff to the president, who was in charge of the 1993 transition for the George H. W. Bush administration, said that he saw nothing comparable during prior transitions to what he saw during the 2001 transition. (He said that during the 2001 transition he saw, among other things, overturned furniture, prank signs, keyboards with missing "W" keys, and trash and telephones on the floors of vacated offices.)
The director of records management, who had worked in the White House complex since 1969, said that, over time, he had noticed more personal items being left behind by departing staff. The OA senior preservation and facilities officer, who had worked for the EOP since 1978, said she observed some evidence of vandalism or pranks during the 2001 transition, but had not seen any damage, vandalism, or pranks during previous transitions. However, a facilities employee who said that she was responsible for overseeing the custodial staff in the Eisenhower Executive Office Building (EEOB) during the 2001 transition and was involved in the cleanup effort in the EEOB during the 1993 transition said that she believed more trash was left in the building during the 1993 transition than the 2001 transition. She said that she found papers "all over the floor" and the remnants of a party during the 1993 transition. The OA associate director for facilities management said that every transition has had a problem with missing historic doorknobs. The telephone service director said that telephone cords were unplugged and office signs were missing in previous transitions and that unplugging telephones is a "standard prank." The director of GSA's White House service center during the 2001 transition said that the condition of the office space during the 2001 transition was the same as what he observed during the 1989 transition. (He said that he observed little during the 2001 transition in terms of damage, vandalism, or pranks.) Similarly, a GSA employee, who was one of the cleaning crew leaders during the 2001 transition and was the EEOB building manager when we interviewed him in July 2001, said that he had not seen any damage or pranks during any transition during his 31 years of working in the White House complex. He said there was an excessive amount of trash during the 2001 transition, but that was not unusual for a transition. Further, in a March 2, 2001, letter to Representative Barr on this matter, the acting administrator of GSA said, regarding the condition of the White House complex during the 2001 transition, that "[t]he condition of the real property was consistent with what we would expect to encounter when tenants vacate office space after an extended occupancy with limited cyclical maintenance, such as painting and carpet replacement." (Real property includes the physical structure of the building and not items such as telephones, computers, and furniture.) NARA's director of presidential materials said that she was in the White House complex during the 1993 and 2001 transitions and that she went into about 20 offices in the EEOB during the morning of January 20, 2001. She said that she saw a lot of trash in the EEOB during the 2001 transition, but that it was no more than what she observed during the 1993 transition. She said that she did not see any damage, vandalism, or pranks during the 1993 or 2001 transitions. Regarding the 1993 transition, five former employees told us they observed furniture in hallways, piled up, or in places it did not appear to belong. One of those former employees also said there was no furniture in an office.
One former employee (a different former employee for each of the following observations) said he or she observed each of the following: a person's initials carved into the front of the middle drawer of her desk, words carved into two additional desks (a former employee said one of the carved words was an obscenity; the person who observed the other carving in a desk said it was the name of the vice president during the George H. W. Bush administration), and broken chairs. Seven former employees also said that computers were not operational or were missing hard drives at the beginning of the Clinton administration. Two of those employees said that it took 1 to 2 weeks for the computers to work. Two former employees said that telephones were piled on the floors or were disconnected. (One of those former employees said she was told that staff would receive new telephones.) Another former employee said that she saw telephone lines pulled out of walls and that they appeared to have been pulled out intentionally. One former employee who started working in the White House complex in January 1993 and left in January 2001 said that the offices were messier in January 1993 than in January 2001. Another former employee said that on January 20, 1993, his office contained leftover food and that the walls needed repainting. A third former employee said the offices were still not cleaned by the afternoon of January 21, 1993. Another former employee said that there were "dusty and dirty" typewriters on desks. Three former staff said they saw a total of at least six Bush bumper stickers in different offices, on cubicle walls, in a desk, and on a telephone. One former employee said she saw one to two photocopies of political cartoons left in a copy machine, a medicine bottle with a prank note inside a desk, a banner on the balcony of the EEOB, and a tent tarp. Three former Clinton administration staff said that there were no office supplies when they started working in the White House complex in January 1993. We searched major newspapers and selected magazines for any news reports regarding the condition of the White House office space during the 1981, 1989, or 1993 presidential transitions and found only one such mention. The March 1981 issue of Washingtonian magazine indicated that incoming Reagan administration staff had some complaints about the condition of the EEOB that were similar to observations made by EOP staff in 2001. According to the article, a visitor described the EEOB as being "trashed," and the article indicated that memorandums taped to walls, lampshades torn by paper clips hung on them to hold messages, a refrigerator with thick mold, and a large coffee stain on a sofa outside the vice president's office were found. According to former Clinton administration and General Services Administration (GSA) officials, departing Executive Office of the President (EOP) staff at the end of the Clinton administration were required to follow a check-out process that involved obtaining written approval in 21 categories, including the return of library materials, government cellular telephones, pagers, and building passes. The form indicated that the employee's final paycheck and/or lump sum leave payment could not be issued until he or she had completed the form and returned it to the White House director of personnel. However, the check-out process did not include an office inspection, such as an inspection of the physical condition of the office, equipment, or furniture.
We asked former Clinton administration officials what instructions were provided to departing staff regarding vacating their offices at the end of the administration. We were provided with a January 4, 2001, memorandum sent by President Clinton's chief of staff to the office heads of the White House Office and the Office of Policy Development that encouraged staff to check out by the close of business on January 12, 2001, unless there was an operational need to be on the premises until January 19. However, this memorandum did not indicate in what condition the office space should be left or how office supplies should be handled, nor did it provide any warning about penalties for vandalism. 18 U.S.C. 1361 provides for the punishment of anyone who willfully commits or attempts to commit damage to U.S. government property. If the damage to government property exceeds $1,000, the crime is treated as a felony; if the damage does not exceed $1,000, the crime is a misdemeanor. We contacted congressional personnel to ask what procedures are followed regarding offices on Capitol Hill that are vacated by members of Congress and their staff. They included staff from the Office of the Chief Administrative Officer, House of Representatives; Office of Customer Relations; Office of the Senate Sergeant-at-Arms; and Office of the Building Superintendent, Office of the Architect of the Capitol. The staff said that House and Senate offices are inspected when members vacate their space, and members are held personally liable for any damaged or missing equipment. They also said that former members of both the House and Senate have been charged for this reason. Further, we were informed that furniture is inspected in House members' district offices. In addition, we note that landlords of privately owned office space and apartments routinely inspect the vacated space when tenants leave, and they charge for any damages.

COMMENTS OF THE OFFICE OF THE COUNSEL TO THE PRESIDENT ON THE GAO'S DRAFT REPORT: "ALLEGATIONS OF DAMAGE DURING THE 2001 PRESIDENTIAL TRANSITION" (DATED MAY 3, 2002)

The President and his Administration had no interest – and have no interest – in dwelling upon what happened during the 2001 transition. In early 2001, when the press first asked about damage found in the complex, the President said that "[i]t's time now to move forward." Members of this Administration went to great lengths to dampen public interest in the issue, hoping – as Press Secretary Ari Fleischer said at the time – "to put it all behind us" and to "focus . . . just do the job that the American people elected President Bush to do." We certainly did not instigate an investigation by the General Accounting Office (GAO), nor did we revel at the prospect of such an inquiry. However, once the GAO agreed to undertake the investigation, we agreed to cooperate fully. We have done so. And we now believe that, if there is to be a report, it is incumbent upon us to ensure that the facts are accurately and fully reported. With that goal in mind, and as a matter of comity between the legislative and executive branches, we provide the GAO with the following comments. We have now provided the GAO with two rounds of extensive comments on its draft. Our first round of comments was provided on April 26, 2002. Unfortunately, the GAO's revised draft, which we received on May 3, failed to address many of the concerns we had raised. Accordingly, we have now provided a second set of detailed comments on the May 3rd draft.
We now understand that GAO intends to publish a response to our comments as an appendix to its final report. We are disappointed that we will not have an opportunity to consider or reply to GAO's responses to our comments prior to publication of the final report. Part I of the comments describes some general concerns about the overall structure, content, and use of terminology in the draft report. Part II offers more specific comments. And Part III addresses the GAO's proposed recommendations. In preparing these comments, we have consulted with representatives of the Office of the Vice President, the Office of Administration, the United States Secret Service, and others, on issues involving those entities or their personnel. We have also identified to the GAO the source of all factual information and statements cited herein.

Part I: General Comments

1. Failure To Report Material Facts. The GAO has not included in its draft report many facts that a reader needs, in our view, to have a complete and accurate understanding of what happened during the 2001 transition. In calling for this investigation, Congressman Barr asked the GAO "to fully document the reported examples of vandalism." And section 7.51 of the Government Auditing Standards "requires that [the] report contain all information needed to satisfy the audit objectives [and] promote an adequate and correct understanding of the matters reported." In our view, neither Congressman Barr's directive nor the Government Auditing Standard has been met. For example, the GAO does not specifically identify anywhere in its report, including the appendices, each reported instance of vandalism, damage, or a prank. The GAO's omission is troubling not only because it ignores the explicit request of the sole Member of Congress who requested the investigation ("to fully document"), but also because the GAO seems willing to detail each comment made by a former staff member. Thus in many cases, the GAO has included a former staff member's comment in response to a particular observation without ever having discussed the observation itself. We believe that the GAO should treat observations by current staff members in the same manner it treats comments by former staff members. We also believe that the report should refer to each observation of damage individually. The GAO also omits from its report details about when, where, and by whom an observation was made. When an incident was observed is often relevant to determining the likely perpetrator. For example, the damage, vandalism, and pranks were often observed during the night of January 19 – before the cleaning staff began cleaning offices and before members of the Bush Administration entered the complex – thus eliminating those individuals as the possible culprits. Where damage was found is relevant, for example, because often more than one incident and type of damage was observed in the same location; a concentration of damage (such as that found in the Vice President's West Wing and EEOB offices) makes it less likely, in our view, that an innocent explanation exists. Finally, who made the observation can bear on issues of credibility; if staff who served in the White House complex during many Administrations observed the damage, as was often the case, then a reader may find the observation more credible than if a member of the incoming Bush Administration reported the same observation. The report also does not contain the content of the graffiti, messages, and signs.
We were told that the GAO thinks it is "not appropriate" to include such vulgar and disparaging statements about the President of the United States. While we agree that the statements themselves are "not appropriate," particularly when affixed to government property, and while we certainly do not wish to propagate such maledictions, we believe that including the content in the report is important for at least five reasons. First, the content of the message can – and often does – indicate who wrote the message and when. Second, the content often provides an insight into the mindset or intention of the person who wrote the message. This is important because it allows the reader to determine for himself whether the statements were "harmless jokes" or "goodwill" messages, as former Clinton Administration officials now claim (see Report at 10 and 17). Third, the content also allows the reader to infer that, if departing staff left a vulgar or derogatory message, those same individuals may also be responsible for other incidents that were observed near the location of the message. Fourth, the content of the messages and other details equip the reader to compare the 2001 transition and prior transitions. Finally, the content of the message allows the reader to assess whether the GAO's characterization of the observations is fair and objective. For instance, in its report, the GAO describes a particular message as "arguably derogatory to the President." Report at 10. That message reads, "jail to the thief." But because the report does not reveal the content of the statement, readers have no way of knowing whether the GAO is accurate in describing the message as "arguably derogatory." By disclosing the content of the messages and other important details about the reported observations, the GAO can best assure the objectivity of the entire report. Because we believe these details are important, many of our comments highlight facts that the GAO omitted. These facts are undisputed. The GAO omitted them from its report, we were told, not because it has reason to doubt their truth, but because the GAO concluded that it was "not appropriate" to include this level of detail and that the facts were not "material" to the GAO's conclusions. On this, we simply disagree. By including these facts in our comments and explaining their relevance, we hope that the GAO will recognize the deficiencies in the current draft and revise the final report accordingly. If not, the facts will be in our comments for the readers to judge for themselves.

2. The "June 2001 List." Throughout the draft report, the GAO refers to a "June 2001 list." The GAO structures its report around the list and compares the staff members' observations with the content of the list. The GAO uses the list in this manner even though the Counsel to the President cautioned the GAO, in transmitting the list, that "[t]he list is not the result of a comprehensive or systematic investigation into the issue, and should not be considered a complete record of the damage that was found. Rather, the list was prepared quickly and based on the recollections of a handful of individuals who witnessed or learned of the damage." Further, the GAO never even asked the individuals whose names appear on the list to explain how the list was prepared, who transcribed it, what its purpose was, or what each line refers to. Nonetheless, the GAO features the list prominently in its draft report as some type of benchmark or guidepost against which the observations are measured.
Worse, the GAO often misstates the contents of the list. For instance, on page 3, the draft report states that "[i]t listed . . . offices with a lot of trash." In fact, the list states that "[o]ffices were left in a state of general trashing." (And under that heading are three bullet points that read, "Contents of drawers dumped on floor," "Desk top glass smashed and on the floor," and "Refrigerators unplugged (spoiled food).") We highlighted the GAO's error – that in today's parlance saying an office was "generally trashed" is not the same as saying it had "a lot of trash" – in our April 26 comments on the GAO's preliminary findings. But for some reason, the GAO chose to ignore us. We will continue to note this type of error in this set of comments to allow the GAO another opportunity to correct the record and, in all events, to inform the reader about what the list actually says.

3. Flawed Analysis. Rather than "fully document" each observation, the GAO generally states only "a range" of the "total" number of observations for each category of damage. While we would prefer that the GAO simply provide the underlying data, if the GAO includes these ranges, they must be correct. In our opinion, they are not. The GAO materially understates the number of observations, and its methodology for calculating the ranges, in our view, is flawed. Here is the problem. The GAO said that, in calculating the "total" observations, it is crediting as true each person's observation. Yet, the GAO reports a range that takes the lowest number of observations in an office suite and then aggregates that lowest-possible number for each suite to arrive at the low end of the range. For the high end, the GAO, by and large, adds up each observation and assumes that no observer is repeating an observation reported by anyone else. Two examples – one taken from a data table which the GAO provided to us and the other a hypothetical – illustrate the flaw in this approach.

[Data table provided by the GAO: reported observations of damaged keyboards in the Advance Office (Rooms 174, 185, 185½) and in Rooms 192-198, with columns "No. observed (observer)" and "No. for report (reason)"; the recoverable entries include "2-8 (used range for different recollections)," "1-7 (used range for different recollections)," and "4 (observed by three persons)." The comments in this table were, collectively, reported by 10 separate individuals; unless otherwise indicated, each line reports an observation by one person.]

Under the GAO's methodology, and this data, the GAO would say that 10 staff members reported "a total of" 3 to 15 damaged keyboards observed in the two office suites. But that is incorrect if, as the GAO says, all observations are being treated as truthful. One person alone said that he saw 7 or 8 keyboards with missing W keys; thus it could never be the case that a total of only 3 keyboards was observed damaged. Assuming the GAO's data were correct, the appropriate statement would be that 10 staff members reported a total of 11 to 26 (i.e., 7 to 18 in the Advance Office and 4 to 8 in Rooms 192-198); here, the range properly reflects the possibility that an observer may or may not be reporting a keyboard that was observed and reported by another. A simplified and hypothetical example may further clarify the point:

Office Suite A, No. observed (observer): 1 (Washington), 25 (Adams), 100 (Jefferson)
Office Suite B, No. observed (observer): 1 (Madison), 50 (Monroe)

Under the GAO's methodology, the number of "total" observations would be 1 to 126 for Office Suite A and 1 to 51 for Office Suite B – or a total of 2 to 177 for both offices.
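To make the two aggregation rules concrete, the following is a minimal sketch in Python (ours, added purely for illustration; neither the GAO report nor these comments contain code, and the suite names and counts are the hypothetical ones above):

# Minimal sketch (illustrative only) of the two aggregation rules
# described above, applied to the hypothetical keyboard counts.
suites = {
    "Office Suite A": [1, 25, 100],  # Washington, Adams, Jefferson
    "Office Suite B": [1, 50],       # Madison, Monroe
}

def gao_range(counts):
    # Methodology attributed to the GAO: low end is the smallest single
    # observation; high end assumes no observer repeats another's sighting.
    return min(counts), sum(counts)

def proper_range(counts):
    # If every observation is credited as true, the low end can be no
    # smaller than the largest count reported by any single observer.
    return max(counts), sum(counts)

for name, counts in suites.items():
    # Office Suite A: GAO method (1, 126), proper (100, 126)
    # Office Suite B: GAO method (1, 51), proper (50, 51)
    print(name, "GAO method:", gao_range(counts), "proper:", proper_range(counts))

gao_total = [sum(x) for x in zip(*(gao_range(c) for c in suites.values()))]
proper_total = [sum(x) for x in zip(*(proper_range(c) for c in suites.values()))]
print("combined:", gao_total, "versus", proper_total)  # [2, 177] versus [150, 177]

The sketch reproduces the figures quoted in these comments: the methodology attributed to the GAO yields a combined range of 2 to 177, while crediting every observation as true forces the combined low end up to 150.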
But that would be an absurd conclusion since three people said that they each alone observed more than 2 damaged keyboards; so unless the GAO is going to simply ignore their observations, or find them not credible, the total must reflect what they said. Therefore a proper range would be 100 to 126 for Office Suite A and 50 to 51 for Office Suite B, or a combined total of 150 to 177. It appears that this flaw in the GAO's methodology infects each of the ranges presented in the GAO report. It also appears that some of the data in the data tables that the GAO has provided are inaccurate. Without being provided copies of all of the data tables for each category of damage, we cannot know – and hence cannot comment specifically on – the factual accuracy of all data, nor on how each range was calculated. Where the GAO has provided copies of the data table or has described the underlying data to us, we provide specific comments below.

4. Use of the Term "Executive Office of the President." Throughout the draft report, the GAO refers to organizational units that are housed within the White House complex – such as the White House Office (WHO), the Office of the Vice President (OVP), or the Office of Administration (OA) – individually and collectively, as the "Executive Office of the President" or "EOP." As we explained to the GAO in our April 26 comments, it is not accurate to refer to each unit individually or all units collectively as the Executive Office of the President. In this context, the term is both under- and over-inclusive. It is under-inclusive because not all offices in the complex fall within the EOP umbrella. And it is over-inclusive to the extent that it covers units that the GAO did not investigate. Thus, for example, it is not accurate to say, as the GAO does, that it "asked EOP" for information (Report at 1). The GAO is also inaccurate when it refers to the EOP units as "agencies." Report at 3 n.2, 4. They are not. We therefore again recommend that the GAO state specifically the unit being referred to – whether it be the WHO, the OVP, the OA, the NSC, etc.

5. Effort To Downplay the Damage Found in the White House Complex. It appears that the GAO has undertaken a concerted effort in its report to downplay the damage found in the White House complex. The following facts lead us to that conclusion: the GAO omits from its report a reference to each reported instance of vandalism, damage, or a prank; the GAO underreports the number of observations for nearly every category of damage; the GAO omits from its report any mention of several individuals (all but two of whom served during the Clinton Administration) who told the GAO that the damage found during this transition was worse than prior transitions; the GAO ignores documents that show requests were made to repair telephone damage; the GAO fails to quantify or estimate certain real costs incurred to remedy or repair the damage; the GAO fails to report the content of the graffiti and signs that were found in the complex; and the GAO is unwilling to conclude that the vandalism, damage, and pranks were intentional, even where the circumstances plainly indicate that they were (e.g., damaged W keys, graffiti and signs disparaging the President and the incoming Administration, damaged furniture that contained anti-Bush statements, more than 100 missing phone labels, vulgar and inappropriate voicemail greetings, etc.).

Part II: Specific Comments

1. PAGES 2-3. The GAO misstates the contents of the June 2001 list:
offices with a lot of trash.” It does not. It says that the “[o]ffices were left in a state of general trashing,” and then provides examples that the GAO omits – “[c]ontents of drawers dumped on the floor,” “[d]esk top glass smashed and on the floor,” and “[r]efrigerators unplugged (spoiled food).” The GAO says that the list “listed . . . cut telephone lines.” In fact, the list says “[t]en phone lines cut in the EOB – pulled from the wall.” The GAO says that the list “listed . . . a secure telephone left operational.” It does not. It says that “a stu-3 phone . . . was left open with the key in it.”

2. PAGE 3. The GAO misidentifies the units that comprise the EOP. As stated above, not all of the units identified by the GAO fall squarely within the EOP. See, e.g., Sweetland v. Walters, 60 F.3d 852, 854-55 (D.C. Cir. 1995) (“the Executive Residence is not a unit within the Executive Office of the President”). And none of the EOP units are “agencies,” as the GAO contends (see Report at 3 n.2 and 4).

3. PAGES 7 and 23. The GAO concludes that “[d]amage, theft, and pranks did occur in the White House complex during the 2001 presidential transition.” Congressman Barr asked the GAO to address “vandalism,” and elsewhere in the report, the GAO discusses observations of vandalism. Is the GAO unwilling to conclude that “vandalism,” as well as “damage, theft, and pranks,” occurred? Or did the GAO simply inadvertently omit the word “vandalism” in these two instances?

4. PAGE 8. The GAO writes that “[m]ultiple people said that . . . they observed (1) many offices that were messy, disheveled, or contained excessive trash or personal items.” That is an understatement, to say the least. The offices were not simply “messy” and “disheveled.” Multiple observers told the GAO that the offices, for example, had more than 20 W keys glued to the walls; at least 14 to 19 pieces of furniture overturned; computers piled up or overturned on the floor; telephones and fax machines unplugged and/or piled on the floor in 25 or more offices; at least a dozen fax lines switched; 5 or 6 glass desk tops broken; a plant dumped in the middle of the floor; drawers open and their contents dumped on the desk or the floor; food inside of desks; and beer, wine, and liquor bottles littering offices. When one knows the specific allegations, a reader can evaluate the explanation offered by “some former Clinton administration staff” that “the amount of trash that was observed during the transition was what could be expected when staff move out of their offices after 8 years.” Further, if the GAO is going to include the statement by former Clinton administration staff that the amount of trash was “what could be expected,” it should also include the statements of longtime staff members who said the opposite. For example, an individual who has worked in the White House complex since 1971 told the GAO that the amount of trash “was beyond the norm,” and a different individual, who has worked in the White House complex for 17 years, said that the trash was “worse this time” than in prior transitions and that the offices were “more messy” than what he had observed during other transitions.

5. PAGE 8. The GAO reports that some former Clinton Administration staff said that “some reported observations were false.” We are disappointed that President Clinton’s former staff would make such a reckless statement – a statement that is neither based on nor supported by a single shred of evidence.
We believe that self-serving accusations like this one illustrate why it is important to provide the reader with many of the details that the GAO omits. If, for example, the reader is told that a particular observation was made by a staff member who worked in the complex for many years (including during the Clinton Administration), or that the damage was found in a location where others observed lots of other damage, then the reader can determine for himself the credibility of the observation.

6. PAGE 9. The GAO writes: “[D]ocumentation was provided indicating that much telephone service work was done during the transition, but this information did not directly corroborate allegations of vandalism and pranks involving the telephones.” We simply do not understand how the GAO can say the documentation does not corroborate the allegations. Several staff members reported missing telephone labels, and the documentation shows, for example,

a list of closed telephone service orders that shows, among other things, at least 28 separate work-order requests for replacement of labels on one or multiple telephones;

a Telephone Service Request (TSR) that says, “NEED Button labels typed. Tech to label sets”;

a TSR that says, “Room 274, 272, 284, & 286. Program phones . . . NEED Button labels typed. Need tech to place labels on sets”;

a TSR that says, “Room 272 & 276. Program phones . . . NEED Button labels typed & placed on sets”;

a TSR that says, “Reprogram sets in Room 263, 265, 266, 267, 268, 269 and 271. NEED labels placed on each set”;

a TSR that says, “NEED TECH TO PLACE BUTTON LABELS” on sets in Room . . . ;

a TSR that says, “Replace labels on all phones that [were] removed” in Room 18;

a TSR that says, “Need label placed on set” in Room 148; and

a TSR that says, “NEED Label placed on set” in Room 100.

In addition, the GAO received two TSRs that show work – “including . . . relabeling” – performed on January 20 and 21, 2001, when individual work orders were not completed. Likewise, staff members reported that telephones were left on the floor, and the documentation shows a request for a technician to retrieve a telephone found on the floor.

7. PAGE 9. The GAO writes that “[s]eventy-nine EOP staff who worked in the White House complex on or after January 20, 2001, provided observations about the condition of the complex at the beginning of the administration.” This statement is inaccurate in two respects. First, many of these 79 staff members worked in the complex before, during, and after January 20, not simply “on or after January 20, 2001.” Second, those staff members provided observations of damage, vandalism, and pranks that occurred shortly before “the beginning of the administration” – on January 19 and the early morning of January 20, 2001.

8. PAGE 10. The GAO reports that “EOP staff . . . observed a total of about two dozen prank signs, printed materials, stickers, or written messages that were affixed to walls or desks; placed in copiers, desks, and cabinets; or placed on the floor.” We believe the GAO has substantially underreported the number of signs and messages.
The GAO was informed of, and has not disputed, the following observations:

MESSAGES AND SIGNS WRITTEN ON OR AFFIXED (NOT SIMPLY TAPED) TO FURNITURE AND OTHER GOVERNMENT PROPERTY

Sticker affixed to filing cabinet that reads “jail to the thief”; shown to GAO
Writing on a pull-out tray on desk that reads “W happens”; shown to GAO
Writing in top left drawer of desk that reads “GET OUT”; shown to GAO
Writing in middle drawer of desk that reads “Hail to the Thief”
Key broken off in file cabinet with Gore bumper sticker with the words “Bush Sucks” stuck to the inside of the cabinet (observed by two persons)
Writing in middle drawer of desk that wishes all “who work here” “good luck”; shown to GAO
Gore bumper sticker stuck to the bottom of paper tray in the copier

SIGNS AND MESSAGES (not including messages and signs written on or permanently affixed to property)

“Vulgar words” on white board*
Sign comparing President Bush to a chimpanzee found “in a number of printers”; “laced” throughout the reams of paper**
Three copies of the same sign taped to wall (observed by two persons)*, ***
15-20 copies of the same sign laced throughout ream of paper in fax machine and copier (observed by two persons)
Same sign shuffled throughout the paper tray in copy machine outside the Chief of Staff’s office
20-30 copies of same sign interspersed throughout ream of paper in printer in office that is adjacent to the Oval Office
8” x 10” color piece of paper that said “see you in four, Al Gore” in drawer of the copy machine
Same President Bush/chimpanzee sign found in a printer*
In location where people “dumped” supplies, a sign read “Gifts for the New President” (Head Telephone Operator)+
Sign taped to a desk – a mock MasterCard ad that includes a picture of President Bush and reads, “NEW BONG: $50, COCAINE HABIT: $300, FINDING OUT THAT THE GOOD-OLD-BOY NETWORK CAN STILL RIG AN ELECTION IN THE DEEP SOUTH: PRICELESS. For the rest of us there’s honesty.” The GAO was provided with a copy of this sign.
T-shirt with tongue sticking out draped over chair*
Sign that read “just laugh” taped to the wall
“Inappropriate” message in printer or fax tray
“Quite a few signs”
Picture of former First Lady taped to the inside of cabinet
Photo in safe that had the word “chad” spelled out in paper punch holes (observed by two persons)
Notes in the desk drawers
Sign addressed to and disparaging of “Bush staffer” on wall
Sign of a mock Time magazine cover that read “WE’RE ******” on wall (observed by five persons)
Desk drawer had 2 Gore/Lieberman stickers displayed inside
Picture of Bush with something drawn on it on the 2d floor^
Sign reading “VP’s cardiac unit” (observed by two persons)++, +++. The GAO was shown a copy of this sign.
Pictures of President Clinton and notes about President Bush “were . . .”^^
Signs inserted into office nameplates, including signs outside of the former First Lady’s Office (Room 100-104), the OMB, and the Office of Faith-Based and Community Initiatives (observed by four persons; three of these (two OA employees and one GSA employee) had worked in the White House complex during the Clinton Administration)

* OA employee who worked in the White House complex during the Clinton Administration
** OA employee who worked in the White House complex during the Clinton Administration
*** OA employee who worked in the White House complex during the Clinton Administration
+ OA employee who worked in the White House complex during the Clinton Administration
++ OA employee who worked in the White House complex during the Clinton Administration
+++ OA employee who worked in the White House complex during the Clinton Administration
^ GSA employee who worked in the White House complex during the Clinton Administration
^^ GSA employee who worked in the White House complex during the Clinton Administration

9. PAGE 10. While, in some cases, the signs listed above were easily removed and, in a few cases, were probably meant as a joke, we believe the GAO should describe the signs more fully and with greater detail for the reasons stated in General Comment No. 1. Two statements on page 10 illustrate why. First, the GAO reports that “one former employee . . . said that the prank signs that she saw were harmless jokes.” The reader is unable to determine whether the signs were truly “harmless jokes” in some, many, or all of the cases, unless the content is included. Second, the GAO reports that it was shown “2 stickers affixed to a file cabinet and desk containing arguably derogatory statements about the [P]resident.” The GAO is referring to a sticker that reads “jail to the thief.” We do not think that statement is “arguably derogatory,” and we believe that many people would agree with us. Yet, since the report does not reveal the content of the statement, the reader cannot determine whether the GAO is accurate in saying the statement is “arguably derogatory.”

10. PAGE 10. The GAO reports that “[t]wenty-six EOP staff said that they observed a total of 30 to 64 computer keyboards with missing or damaged ‘W’ keys” where a specific room or office was identified. Again, we believe the range provided by the GAO (“30 to 64”) does not accurately reflect the number of observations reported. According to our records, which we earlier provided to the GAO and the GAO did not dispute, staff members observed a total of 58 to 70 computer keyboards with missing or damaged W keys where a specific office or room was identified. In addition, staff members reported 150 keyboards with missing or damaged W keys, where the staff member did not associate the observation with a particular room or office. The data are set forth below:

MISSING OR OTHERWISE DAMAGED W KEYS (where a specific room or office was identified)

Location: “offices in suite”
No. observed (observer): Approx. 18 (observer “B”); Approx.
18 (observer “C”); 2 (observer “D”); 1 (observer “E”)
No. for report (reason): Approx. 18 (C’s observation likely includes those of B, D, and E)

Location: Rooms . . . -189
No. observed (observer): “at least” 7-8

Location: Rooms 196, 197, 197A, 197B
No. observed (observer): 1-2 (observer “V”); 5 (4 missing, 1 defaced) (observer “W,” in 197, 197A, 197B, and/or . . .); 1 (observers “X” and “Y”); 1 (observers “Y” and “Z”)
No. for report (reason): 5-7 (W’s observation likely seen by V, X, Y and Z)

Location: West Wing*
No. observed (observer): “heavy concentration”**

*Although no specific room was identified in the West Wing, we have included this observation in this table because, as stated in footnote 19 of the Report, the GAO places it in this category.
** OA employee, worked in the White House complex during the Clinton Administration.
*** OA employee, worked in the White House complex during the Clinton Administration.
**** OA employee, worked in the White House complex during the Clinton Administration.

MISSING OR OTHERWISE DAMAGED W KEYS (where NO specific room or office was identified)

No. for report (reason): 0-1 (observation likely counted above)
First Floor, East Hall – No. for report (reason): 0-2 (observation likely counted above)

**OA employee, worked in the White House complex during Clinton Administration
*** OA employee, worked in the White House complex during the Clinton Administration.

11. PAGE 11. The GAO repeats its statement (found on page 8 of the Report) that staff “told us that they saw offices that were messy, disheveled, dirty or contained excessive trash or personal items left behind” and that “[f]ormer Clinton administration staff said that the amount of trash that was observed during the transition was what could be expected when staff move out of their offices after 8 years.” Please refer to the comments we provided in Specific Comment No. 4.

12. PAGES 11-12. The report states that the “EOP provided seven pictures that . . . showed piles of binders and office supplies, empty beverage containers, and other items left behind. However, a Clinton administration transition official said that the pictures showed trash, and not vandalism.” The GAO’s description of the photographs is, in our view, incomplete. Any description of the photos should also say that the pictures show, among other things, binders, folders, papers, and other trash piled in the middle of the floor; framed pictures and bulletin boards removed from the walls and placed on the ground and on furniture; Christmas lights and strands of tinsel hung from the walls; desk drawers and cabinets left open and containing Easter decorations and personal products; and office supplies piled on sofas.

13. PAGE 12. The report describes two facility request forms that document requests for cleaning in particular offices where the GAO was told by current staff that the offices were “trashed” or extremely “filthy.” The GAO, however, fails to mention three additional and similar facility request forms that we provided:

A January 30, 2001, facility request form (Form No. 56990) shows that an employee asked for the following services in the Advance suite: “Walls/moldings need patching and paint. . . . 1 – Need carpet vacuumed – is awful! 2 – Furniture cleaned and drawers need vacuuming out. 3 – Drapery needs cleaning or replacement.” Facility Request No. 56990. During her interview, this employee told the GAO that the Advance suite was “still trashed out” even after the GSA crew went through the offices for the first time and that it took approximately three weeks before things were “back to standard.”

A January 25, 2001, facility request form (Form No. 56662) shows that a different employee asked that GSA clean the carpet, furniture, and drapes in Room 160A. Facility Request No. 56662.
This employee had to repeat that request on February 17, by submitting another form (which the GAO does describe) to clean a room that the employee said was “extremely trashed.”

A February 21, 2001, facility request form (Form No. 58369) shows a request to clean the carpet in the former First Lady’s suite (Rooms 100-104). At least four current staff members told the GAO that this office suite was trashed, including reports of pencil shavings, dirt, and trash covering the floor.

In addition, in describing the January 30, 2001, facility request form, the GAO writes that the form “documented a request to clean carpet, furniture, and drapes in an office that an EOP employee said was ‘filthy’ and contained worn and dirty furniture.” This description is incomplete. The same employee, as well as others from her office suite, also told the GAO about significant damage to furniture in those offices, including a desk with its drawer fronts removed, chairs without legs, and a chair with its entire back broken off.

14.

100-104 (former First Lady’s office)
Desk drawers kicked in – “clearly” intentional; “not just wear and tear”
Desk drawers locked; pried open the drawers and found 2 pieces of paper that had anti-Bush statements
2 broken chairs – arms lifted off (observed by two persons) (The GAO apparently believes that one of the two observers said that 1 or 2 chairs had broken arms. That is incorrect; he told the GAO that 2 chairs had broken arms, and indeed showed the GAO the chairs.)
“Number of the desks” appeared to have been scratched with knives; multiple “big scratches with a sharp object”; other furniture had red pen marks and other stains
Desk covered with 5-6 black, circular burn marks; appeared to be . . .

160-164 (Cabinet Aff.)
1 or 2 chairs with broken legs (observed by three persons)
1 chair with its entire back broken out (observed by two persons)
1 chair with ripped seat
Desk with 2 or 3 of the drawer fronts removed (observed by four persons, and witnessed by GAO)

177-189 (Advance)
Glass top shattered on floor; appeared that someone stomped on it
Lock to the cabinet in desk had been jammed inward so that it would . . .
Desk had a key broken off in the lock
Key broken off in file cabinet; key hanging in lock by metal thread, and Gore bumper sticker found inside (observed by four persons)

Sofa with broken legs and other broken furniture – probably in Counsel’s office, the Scheduling office, and in the Advance offices
Some broken pieces of furniture; upholstered pieces of furniture were “filthy” and had spills on them in same offices, where months and weeks earlier, things looked “pretty good”**
Broken glass tops in 5 or 6 offices

**OA employee, worked in the White House complex during Clinton Administration

15. PAGE 12. The GAO reports that “[f]ormer Clinton administration staff said that some furniture was broken before the transition and could have been the result of normal wear and tear, and little money was spent on repairs and upkeep during the administration.” This explanation cannot be squared with the circumstances surrounding the reported damage. For example:

With respect to the key broken off in a file cabinet in Room 197B, the key was found still hanging in lock by a metal thread (suggesting that the damage occurred not long before the transition) and, when the locksmith opened the cabinet, a Gore bumper sticker with the words “Bush Sucks” was prominently displayed inside (suggesting that the damage was intentional and done by a member of the former Administration).
Similarly, when the locked desk drawers were pried open in Room 103, two pieces of paper with anti-Bush statements were found displayed inside. Again, in our view, these facts indicate that the damage was intentional, occurred shortly before the transition, and was done by a member of the former Administration.

One employee told the GAO that the drawers on her desk “clearly” had been kicked in intentionally and that it was “not just wear and tear”;

A second employee told the GAO that it was unlikely that the slit seats were the result of wear and tear because “the fabric otherwise looked new,” and “it looked like someone had taken a knife or sharp object to the seat”; and

A third employee told the GAO that she saw damaged furniture in offices where things had looked “pretty good” weeks or months earlier.

Finally, in still other cases, the nature of the damage suggests that it occurred shortly before the Inauguration because the offices’ prior occupants and cleaning staff would not have let the damage remain in the office for long. For example, it is hard to believe that occupants would not fix or remove a bookcase with broken glass (with shards of glass still in the cabinet) or would allow chairs with broken legs and no backs to remain in an office suite for very long.

16. PAGES 12-13. The GAO lists four facility request forms that show that staff requested repairs of furniture that they told GAO was damaged. The GAO, however, fails to include in its list a second facility request form (Form No. 56695) submitted by a staff member on January 29, 2001, to obtain “a key to lateral file cabinet,” which was “locked.”

17. PAGE 13. We believe that the GAO has underreported the pieces of furniture that were observed overturned. Our notes show (notes that were provided to the GAO and the GAO did not dispute) that five White House employees, one OA employee, and one GSA employee reported seeing at least 14 to 19 pieces of furniture that were on their sides or overturned, as follows.

(Counsel’s Off.) In each of the three offices and the secretary’s space, almost every desk was overturned – at least one desk or table in [each room]
At least 2 “desks turned over”
Coffee table standing on end
3-4 pieces of furniture turned over; “couple desks on side,” “couple of chairs”**
Desks and credenzas turned on their sides*
0-1 (may or may not be same one seen by a different person)
0-2 (may or may not [be same as] others observed)

*GSA employee, worked in the White House complex during Clinton Administration
**OA employee, worked in the White House complex during Clinton Administration

18. PAGE 13. The report reads: “Six EOP staff said they observed a total of four to five desks with a sticky substance or glue on the top or on drawers.” That is inaccurate and incomplete. The GAO was told that a thick layer of an oily glue-like substance was smeared on the bottom of the middle drawer of the desks and smeared all over the top of the right pull-out trays of at least two desks. In addition, three separate employees said that the desk-drawer handle on at least one of the desks was missing, and one of the three said that the handle was found inside the drawer along with more of the glue substance.

19.

EEOB – 1st floor, closet at top of Navy steps
EEOB – Room 288, exterior door to hall
EEOB – 4th floor: 0-2 (may or may not be accounted for in the . . .)

**OA employee, worked in the White House complex during Clinton Administration
***OA employee, worked in the White House complex during Clinton Administration

20. PAGE 13.
The GAO is incorrect when it states that “two EOP staff said they observed a total of 9 to 10 missing television remote controls.” An employee of the OVP said that five or six television remote controls were missing from the OVP offices, and a second employee said that “approximately five remote controls” disappeared from various offices throughout the correspondence suite. (The second employee had worked in the same offices before the transition.) Thus, there were reports of 10 to 11 missing remote controls.

21. PAGE 13. The report states that “two EOP officials said that about 20 cellular telephones could not be located in the office suite where they belonged” and that “[t]he former occupants of offices during the Clinton administration where items were observed missing said that they did not take them.” The GAO is referring here to cellular phones that were missing from the OVP, and should so state. The second clause suggests that the GAO interviewed all former employees of the OVP, and that all former OVP employees said they did not take them. But that is not true.

22. PAGE 14. The GAO refers to a February 7, 2001, facility request form that asks the GSA to “put doorknob on” interoffice door. We ask the GAO to quote from – rather than paraphrase – this request since the form shows that the requesting employee is incorrect in his recollection that the doorknob was simply repaired (not replaced). Also, if the GAO includes this employee’s recollection, we ask that it state that his recollection is inconsistent with the facility request form and with the recollections of at least three current staff members, including the employee who prepared the form.

23. PAGE 15. The report states that “[s]ix staff said that they observed writing on the walls of two rooms.” In fact, the GAO was told about writing on the walls of four rooms, as follows:

Graffiti in the men’s restroom read, “What W did to democracy, you are about to do in here” (observed by five persons)
Writing on the wall that said something like “Republicans, don’t get comfortable, we’ll be back”
EEOB – wall on or near . . . was covered in pencil and pen marks, which was described as “slasher marks” and “beyond normal” wear and tear
Entire wall in one office was covered in lines that appeared at a distance to be cracks

24. PAGE 15. The GAO underreports the number of telephones found with missing labels and the number of observers when it states that “[f]our EOP staff said that they observed a total of 99 to 108 telephones that had no labels identifying the telephone numbers.” Our records show the following observations:

“at least 3 missing labels, possibly 5” (observed by two individuals); “additional labels missing in rooms on the corridor”
(“South corridor”) “at least 3 phones” were missing labels
8 phones; “all phones were missing their labels” – both the large paper panel that lists the lines that are in use and the small label that lists the number of the phone
“phones were missing labels”
1 phone was missing label
“lot missing” in Public Liaison space**
“all stations” in the Public Liaison offices were missing labels; personally saw roughly . . .
“phones were missing labels”
2 or 3 phones were missing labels
“couple missing phone labels”
177-189 (Advance): “couple missing phone labels”
“some missing in Advance”**
“some missing in center corridor” on 1st floor**
“labels on phones were all gone” in all OVP [offices]

**OA employee, worked in the White House complex during Clinton Administration

25. PAGE 15.
The draft report states that “seven EOP staff said they saw telephones unplugged or piled up.” This statement provides the reader with no information regarding how many phones or how many offices were affected. Our records show that 25 or more offices in the EEOB had phones piled up or unplugged.

26. PAGE 16. In its summary of the reported damage, the GAO fails to mention the telephones that were forwarded and reforwarded throughout the complex. According to our records, roughly 100 telephones were forwarded to ring at other numbers, as follows:

“couldn’t answer phone because, as soon as it rang, it would bounce to another phone in the suite, and then went straight into a voice-mail system that could not be accessed”
“phones were forwarded and then reforwarded so we could not figure out what number would ring the phone” on desk
Phone number in office (187½) did not ring if [one] dialed the number on the phone
“called someone and reached a different and unrelated person”
West Wing – Chief of Staff’s office: “the Chief of Staff’s phone had been forwarded to ring at a phone in a closet”
“majority of the phones did not ring” at the assigned phone number; “roughly 100” phones had been forwarded to ring at a different number; “phones [in the West Wing] were forwarded from the first floor to the second floor” and “phones from the West Wing were forwarded to the EEOB”
Found at least 7-10 forwarded phones

27. PAGE 16. The draft report states that “[t]wo EOP staff said that they saw a total of 5 to 7 telephone lines ‘ripped’ (not simply disconnected) or pulled from the walls, and another EOP employee said that she saw at least 25 cords torn out of walls in two rooms. Former Clinton administration staff said that cords were probably torn by moving or carpet repairs.” The GAO has failed to provide the reader with important information – information needed to promote “an adequate and correct understanding of the matters reported.” Government Auditing Standard 7.51. The GAO fails to explain that the “two EOP staff” were the White House Director of Telephone Services and the OA’s Associate Director for Facilities Management, who together began touring offices and checking phone lines in the EEOB at approximately 1 a.m. on January 20 – before any moving or carpet repairs began in these offices. Thus, this is an instance where information that the GAO omits would have allowed the reader to test the credibility of the explanation provided by the Clinton administration staff.

28. PAGE 17. The GAO writes that, “with three exceptions,” it “w[as] generally unable to determine when the observed incidents occurred and who was responsible for them because no one said he or she saw people carrying out what was observed or said that he or she was responsible for what was observed.” We respectfully disagree. In many cases, the undisputed facts indicate when the incidents occurred and the likely perpetrators. For example:

With respect to the key broken off in a file cabinet in Room 197B, the key was found still hanging in lock by a metal thread (suggesting that the damage occurred not long before the transition) and, when the locksmith opened the cabinet, a Gore bumper sticker with the words “Bush Sucks” was prominently displayed inside (suggesting that the damage was intentional and done by a member of the former Administration). Similarly, when the locked desk drawers were pried open in Room 103, two pieces of paper with anti-Bush statements were found displayed inside.
Again, in our view, these facts indicate that the damage was intentional, occurred shortly before the transition, and was done by a member of the former Administration.

All of the obscene, inappropriate, and prank voicemail greetings must have been recorded shortly before the Inauguration (since many of the messages referred to the change of Administration and one presumes that former staff would not have left such vulgar or inappropriate messages on their phones during the Clinton Administration) and must have been recorded by the person who was assigned that telephone during the Clinton Administration (since a personal identification code is needed to change the voicemail greeting).

According to an individual who worked as White House Director of Telephone Services from 1973 to 2001, some of the missing telephone labels “were replaced early on January 20 – before noon”; but the labels were found “missing again later that day.” These facts show that the removal of at least some of the labels was an intentional act, occurred early on January 20, and that outgoing staff members were almost certainly responsible.

The oily glue-like substance that was smeared on desks in the Vice President’s West Wing office; prank signs that were on walls and interspersed in reams of paper in printer trays and copy machines in the Vice President’s West Wing office; and the “vulgar words” on a white board in that office were all discovered between midnight on January 19 and noon on January 20 by three different individuals. Since we presume that Vice President Gore’s staff did not generally work under these conditions, we can reasonably conclude that this damage occurred shortly before the Inauguration and that, again, members of the former Administration were the likely perpetrators. Similarly, it is unlikely that Clinton Administration staff worked for long without having W keys on their keyboards, again suggesting that the vandalism occurred shortly before the Inauguration.

In other cases, the person who observed the damage firsthand told the GAO that the nature of the damage itself, and the surrounding conditions, suggested that the damage was done shortly before the transition weekend. For example, one employee told the GAO that she saw damaged furniture in offices where things had looked “pretty good” weeks or months earlier.

In still other cases, the nature of the damage suggests that it occurred shortly before the Inauguration because the offices’ prior occupants and cleaning staff would not have let the damage remain in the office for long. For example, it is hard to believe that occupants would not fix or remove a bookcase with broken glass (with shards of glass still in the cabinet) or would allow chairs with broken legs and no backs to remain in an office suite for very long.

In addition, and with all due respect, it is not true that the GAO “was generally unable to determine who was responsible.” The GAO simply failed to determine who was responsible. The GAO was able to identify the “former Clinton administration employee who said he wrote a ‘goodwill’ message inside the drawer of his former desk” because the GAO called that individual. The GAO failed, however, to try to contact the occupants of the offices where other written messages – expressing things other than “goodwill” – were left. Similarly, the GAO could have contacted – but failed to contact – several former Clinton administration staffers who left inappropriate voicemail messages.
And the GAO did not contact all the former staff members who occupied offices where missing or damaged W keys, missing telephone labels, or other damage was found. Therefore, it is inaccurate, in our view, to say that the GAO was “generally unable to determine who was responsible.” Respectfully, in our judgment, the GAO simply decided not to pursue the inquiry in many cases.

Finally, the GAO’s suggestion (at page 17) that “contractor staff, such as movers and cleaners” were responsible for the vandalism, damage, and pranks is, in our view, preposterous. It is an insult to the men and women who worked so hard during the weekend of January 20 to clean up the conditions left by the prior Administration and prepare the complex for the new staff.

29. PAGE 18. The GAO writes that, for certain categories of damage, “the observations of EOP staff differed from the list in terms of total numbers of incidents or alleged extent of damage.” The GAO then provides, as an example, the statement included in the list that furniture in six offices was damaged severely enough to require a complete refurbishment or destruction. But the GAO learned of at least 28 to 31 pieces of damaged furniture, including 5 or 6 chairs with broken legs (reported by four employees), 1 chair with its entire back broken out (reported by two employees), and a desk with its drawers kicked in (reported by one employee). These pieces of furniture, at the very least, would have required a complete refurbishment or destruction; they simply could not have been used in their current condition. In addition, when the GAO asked the Director of the Office of Administration what happened to the damaged furniture, he said that some of it was “thrown in the dumpster.” Thus the observations of staff members did not, as the GAO suggests, differ from the June 2001 list.

30. PAGES 19-20. The GAO omits the following documented costs from its list of “Costs Associated with the Observations”:

A January 30, 2001, facility request form (Form No. 56713) shows that Cabinet Affairs asked for someone to clean the carpet, furniture, and drapes in Rooms 160, 162, and 164. GSA charged $2,905.70 for that service. As the GAO acknowledged earlier in its report (at page 12), this request was for an office suite that a White House Office employee said was “filthy” and contained worn and dirty furniture. As noted above, that same employee, as well as others from her office, also told the GAO about significant damage to furniture in those offices, including a desk with its drawer fronts removed, chairs without legs, and a chair with its entire back broken off.

The GAO’s discussion of the “costs” associated with telephone problems is both inaccurate and incomplete. Based on extremely conservative estimates and straightforward documentation, the government incurred at least $6,020 just replacing removed labels and rerouting the forwarded telephones. The evidence shows:

First, the GAO received, but fails to mention, a blanket work order and bill for work – including “relabeling” work – performed on Saturday, January 20, 2001. The techs billed 114 hours at a rate of $113.88 per hour for each hour or fraction of an hour spent on a particular job.
Consequently, if technicians spent only ten percent of their time relabeling phones and correcting forwarded telephones on Saturday (a conservative estimate given that there were between 112 and 133 specifically identified missing labels and roughly 100 forwarded phones), that means it cost the taxpayer $1,298 for one day’s work replacing the removed labels and fixing the forwarded phones.

Second, and similarly, the GAO acknowledges that it received a work order and bill for work – including “replacing labels on telephones” – performed on Sunday, January 21, 2001. But the GAO fails to estimate any costs associated with that work. The bill shows that the techs worked 78.5 hours that day at a rate of $151.84 per hour for each hour or fraction of an hour spent on a particular job. That means that, if technicians again spent only ten percent of their time relabeling phones and correcting forwarded telephones, the taxpayer incurred an additional cost of $1,192 for that day’s work replacing the removed labels and fixing the forwarded phones.

Third, the GAO fails to estimate the costs associated with replacing labels even where it was provided both individual work orders and a summary of orders that specifically identify the relabeling work performed and the amount of time spent on the job. Specifically, we provided the GAO with a document entitled “Orders Closed 1/20/01 Thru 2/20/01” that lists many orders (some of which are highlighted above) where a tech was asked to place one or more labels on the telephone sets. For each of those orders, a “T&M” charge (time and materials) is identified in terms of hours and minutes. Those charges can be computed in dollars by multiplying the total number of hours of T&M charged times $75.92. We do not understand why the GAO failed to perform this simple exercise, particularly given its willingness to provide cost estimates in the context of missing and damaged W keys. Had the GAO done the calculation, the reader would know that approximately $2,201.68 was spent to replace labels on telephone sets, as set forth below:

On Monday, January 22, 2001, a telephone tech was asked by the OVP to “PROGRM PHNS PER MATT, NEED BTN LABELS, TECH TO LABEL SETS.” The tech billed “4HRS” (4 hours) on this order, for an estimated total cost of $303.68. TSR No. 01010183.

On January 31, 2001, a tech was called to Room 273 of the OVP because, among other things, the phones “NEED BTN LABELS TYPED, PLACED.” The tech billed “2HRS” on this order, for an estimated total cost of $151.84. TSR No. 01010386.

On February 5, 2001, a tech was called to Room 200 because the phones “NEED LABELS PLACED ON SETS.” The tech billed “2HRS” on this order, for an estimated total cost of $151.84. TSR No. 01020071.

On February 9, 2001, a tech was asked to “REPROGRAM IN ROOM 276 EEOB, PLACE BUTTON LABEL ON SET.” The tech billed “1HR” on this order, for an estimated total cost of $75.92. TSR No. . . .

On January 29, 2001, a tech was called to Room 18 to, among other things, “REPLACE LABEL.” The tech billed “1HR” to this order, for an estimated total cost of $75.92. TSR No. 01010306.

On January 30, 2001, a tech was called to Room 113 because the occupants “NEED LABEL PLACED ON SET BY TECH.” The tech billed “1HR” to this order, for an estimated total cost of $75.92. TSR No. 01010342.

On February 3, 2001, a tech was called to Room 100 to “PLACE BTN LABEL.” The tech billed “1HR,” for an estimated total cost of $75.92. TSR No. 01020154.
Also on February 3, 2001, a tech was called to Room 100 because the occupants “NEED BTN LABELS FOR SET.” The tech billed “1 HR,” for an estimated total cost of $75.92. TSR No. 01020156.

In six additional and separate service orders on February 3, 2001, a tech was asked to “REPROGRAM” phones in the Room 100 suite and “TO PLACE LABEL ON SET.” TSR No. 1020330; see also TSR Nos. 1020325 (“NEED LABELS PLACED ON SET”), 1020328 (“NEED BTN LABELS”), 1020329 (“NEED LABELS”), 1020331 (“NEED LABELS PLACED ON SET”), 1020340 (“NEED LABELS PLACED ON SET”). The tech billed “1HR” on each of the six service orders, for an estimated total cost of $455.52.

On February 5, 2001, a tech was told that the occupants of Room 135 “NEED LABEL PLACED ON SET.” The tech billed “1HR” for this order, for an estimated total cost of $75.92. TSR No. 01020075.

On February 3, 2001, a tech was asked to “REPROGRAM SET ROOM 137” and “PLACE LABEL ON SET.” The tech billed “2HRS,” for an estimated total cost of $151.84. TSR No. 01020099.

On February 3, 2001, someone in Room 131 asked a tech to “PLACE LABEL ON SET.” The tech billed “1HR,” for an estimated total cost of $75.92. TSR No. 01020055.

In a separate service request on February 3, 2001, a tech was asked to “REPROGRAM IN ROOM 137 EEOB” and “PLACE LABELS ON SET.” The tech billed “1HR,” for an estimated total cost of $75.92. TSR No. 01020168.

On February 3, 2001, a tech was told that the occupants of Room 154 “NEED BUTTON LABEL,” among other things. The tech billed “1HR” to this order, for an estimated total cost of $75.92. TSR No. 01020327.

On February 5, 2001, a tech was told that “LABELS ALSO NEEDED” in a Presidential Personnel Office. The tech billed “1HR” for this order, for an estimated total cost of $75.92. TSR No. 01020360.

On February 3, 2001, a tech was asked to “REPROGRAM IN RM 131” and “PLACE LABEL ON SET.” The tech billed “1HR,” for an estimated total cost of $75.92. TSR No. 01020363.

On February 2, 2001, a tech was asked to “REPROGRAM IN ROOM 184 EEOB” and “PLACE LABEL ON SET.” The tech billed “1HR,” for an estimated total cost of $75.92. TSR No. 01020132.

On February 8, 2001, a tech was told that the occupants of Room 87 “NEED LABELS PLACED ON SET.” The tech billed “1HR” on this order, for an estimated total cost of $75.92. TSR No. 01020160.

Fourth, and even more perplexing, the GAO ignores the AT&T invoices (“Activity Reports”) and individual work orders (TSRs) that we provided that show the actual charges incurred on particular orders. We have not attempted in preparing these comments to review all such invoices, but a sampling shows $1,328.60 in charges in addition to those listed above:

TSR No. 01010184 (request to “program phones” and “place labels on sets” in Rooms 272, 274, 284, and 286): $341.64.
TSR No. 01010185 (request to program phones and place labels on sets in Rooms 272 and 276): $341.64.
TSR No. 01010195 (request for, among other things, labels for sets in Rooms 263, 265, 266, 267, 268, 269, and 271): $341.64.
TSR No. 01010206 (request for, among other things, “tech to place button labels”): $303.68.

Fifth, the GAO also can and should estimate, based on this data, how much it would cost to replace labels on 112-133 telephones (or, at least, on the 99 to 108 that the GAO concedes were observed missing) by estimating how much was charged per telephone and extrapolating that amount to account for the total number of missing labels.
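The arithmetic behind these figures is simple enough to reproduce mechanically. What follows is a minimal sketch in Python; the rates and hours are taken from the billing records described in these comments, the ten-percent allocation is (as stated above) our own conservative assumption, and the per-phone extrapolation at the end is purely illustrative:

    # Minimum labor charges per hour (or fraction of an hour) for a tech
    # dispatch, from the AT&T billing records described above.
    WEEKDAY_RATE = 75.92
    SATURDAY_RATE = 113.88
    SUNDAY_RATE = 151.84

    # Blanket work orders for the transition weekend; ten percent of the billed
    # hours is our conservative assumption for relabeling and rerouting work.
    saturday_cost = 0.10 * 114.0 * SATURDAY_RATE  # approx. $1,298
    sunday_cost = 0.10 * 78.5 * SUNDAY_RATE       # approx. $1,192

    # Hours billed on the itemized relabeling orders listed above, from
    # "Orders Closed 1/20/01 Thru 2/20/01," all charged at the weekday rate.
    itemized_hours = [4, 2, 2, 1, 1, 1, 1, 1, 6, 1, 2, 1, 1, 1, 1, 1, 1, 1]
    itemized_cost = sum(itemized_hours) * WEEKDAY_RATE  # 29 hours = $2,201.68

    # Sampled AT&T invoice ("Activity Report") charges not counted above.
    sampled_cost = sum([341.64, 341.64, 341.64, 303.68])  # $1,328.60

    total = saturday_cost + sunday_cost + itemized_cost + sampled_cost
    print(f"Documented relabeling/rerouting cost: at least ${total:,.2f}")  # ~$6,020

    # Illustrative extrapolation only (the fifth point above): assume the
    # one-hour minimum charge per relabeled telephone (an assumption for
    # demonstration, not a documented per-phone figure).
    low, high = 112 * WEEKDAY_RATE, 133 * WEEKDAY_RATE
    print(f"Extrapolated cost for 112-133 missing labels: ${low:,.2f} to ${high:,.2f}")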
Sixth, the GAO suggests that it is unable to provide any estimate of the costs to repair the damaged phones because “the extent to which the service order[s] that mentioned labels involved missing labels was not clear and all of the service order[s] involving labels were part of order[s] for other service[s].” That is incorrect. As we explained to the GAO, when a System Analyst (SA) performs work that does not require a technician to be dispatched to the office (e.g., reprogramming a phone), there is no separate charge. If work requires a tech dispatch (e.g., replacing a label), then there is a minimum charge of $75.92 for each hour or portion of an hour ($113.88 on Saturdays and $151.84 on Sundays), even if it takes only minutes to perform the work. Therefore, for service orders that requested, for example, both a telephone to be reprogrammed and its label to be replaced, the entire charge is attributable to replacing the label. This is clear from the AT&T billing invoices (or “Activity Reports”) that show that the cost associated with the work orders is for “LABOR CHARGES FOR EQUIP. MOVES/CHGS,” and not for reprogramming expenses. In addition, for the service orders where the minimum charge of $75.92 was assessed, it is immaterial whether work in addition to replacing the label was performed; a charge of $75.92 would have been incurred for replacing the label(s) regardless of whether other work was performed within that first hour. Finally, the closed order list and the service orders do far more than “mention labels,” as the GAO suggests. See Specific Comment No. 79.

31. PAGE 20 n.9. In estimating the cost to replace missing doorknobs, the GAO has “deducted the value of replacing one historic doorknob from the total number observed missing because . . . a GSA planner/estimator said that a facility request to install a doorknob in an office . . . was to perform maintenance on a doorknob with a worn-out part, not to replace a missing one.” We are puzzled that the GAO would decide to credit the recollection of the GSA planner/estimator, even though his recollection is inconsistent with both a contemporaneous facility request form that asks GSA to “put doorknob on” interoffice door and the recollection of at least three current staff members who recall that no doorknob was on the door. The GAO’s decision simply makes no sense to us. But if the GAO persists with that decision, we ask that the GAO also state in footnote 9 that the statement by the GSA planner/estimator is contrary to the documentation and the recollection of at least three other witnesses.

32. PAGE 21.
The GAO concedes that it has not even attempted to quantify additional costs that were incurred as a result of the damage, including:

To pay computer staff and contractors who spent time replacing keyboards with missing and damaged W keys;
To pay staff who devoted extra hours to removing W keys and prank signs affixed to walls and to clean up trash and dirt that exceeded reasonable amounts or amounts seen in prior transitions;
To pay staff who devoted time to placing overturned furniture upright;
To pay telephone personnel and technicians to remove inappropriate or obscene voice-mail greetings and to correct phones that had been forwarded to unidentified [numbers];
To pay telephone personnel and technicians to repair cables, phone jacks, and/or electrical cords pulled from the wall;
To pay personnel to investigate the theft of a Presidential seal;
To pay movers to remove damaged furniture;
To replace damaged furniture that was not repaired;
To remove and replace broken glass tops; and
To hire repairmen to repair broken cabinets and copy machines.

While it may not be possible to associate precise amounts with these costs, the GAO could have generated a range of estimates, but chose not to do so. We believe that this shortcoming in the investigation results in a substantial underreporting of the very real costs associated with the damage, vandalism, and pranks that occurred during the 2001 transition.

33. PAGES 21-22. In describing how the 2001 presidential transition compared with previous transitions in terms of damage, vandalism, or pranks, the GAO fails to include the statements of several current staff members – all of whom served during prior administrations and many of whom served during the Clinton Administration – who told the GAO that the damage observed during the 2001 transition was worse than prior transitions. The following statements are representative:

“This was unusual. . . . Every administration has pranks,” but this was “worse.” (An employee who oversaw White House telephone services from 1973 to 2001)
“Never remember seeing anything like this before.” (same employee as above)
“I never encountered any problems with telephones” when President George H.W. Bush left office. (same employee as above)
Although he had been through many transitions, he “never thought [he] would find things like this.” (same employee as above)
One employee was “stunned” by the condition of the EEOB; he had “[n]ever seen anything like it” in prior transitions. (An employee who has observed five prior transitions)
The amount of trash “was beyond the norm”; it was “cleaner in some other transitions.” (An employee who has worked in the White House complex since 1971)
The damage “was more than [he]’d seen in other transitions”; in the 1993 transition, this official saw “nothing comparable” to what he saw during this transition. (This Bush Administration official, who worked in the White House complex during the Reagan Administration and the prior Bush Administration, personally toured four floors of the EEOB and West Wing on January 20, 1993)
The trash was “worse this time” than in prior transitions; “more messy than other” transitions. (An employee who has worked in the White House complex for 17 years)

In addition, while pranks and damage may have been observed in prior administrations, the reported observations are not the same in number or kind as those observed during the 2001 transition. Yet the GAO does not mention this in its report.
The reader, moreover, is hampered in drawing his own conclusion because the GAO fails to include details about how much damage was reported by current staff. In addition, the GAO seems to overstate the extent of the damage reported during prior transitions. For example, while the GAO writes that the “observations included missing building fixtures like office signs and doorknobs,” we understand there were no observations of “missing building fixtures” other than office signs and doorknobs, and those observations were few in number. A more accurate statement therefore might read “observations included ‘no more than’ 10 missing office signs and 1 or 2 missing doorknobs.” Similarly, the GAO writes that the “observations included . . . messages written inside and carved into desks.” We understand that there was only one observation of a message written inside a desk – the same observation that the GAO repeats, for some reason, in the sentence that follows. And apparently there were only three observations of carving in desks by staff who served only during the Clinton Administration. Finally, while the GAO refers to “piles of . . . equipment” (apparently referring to only one observation by a Clinton staffer of piles of telephones), the GAO fails to explain that the individual who has overseen telephone services since 1973 said that he “never encountered any problems with telephones” during the 1993 transition; he said that “perhaps some were unplugged, but that would be it.” This employee also told the GAO that, as the Clinton Administration entered office in 1993, he was instructed to “get rid of [the] Republican phone system,” which apparently resulted in the replacement of all the phones.

34. PAGE 22. The GAO says that “[f]ormer Clinton administration officials told [us] that departing EOP staff were required to follow a check-out procedure that involved turning in such items as building passes, library materials, [and] government cellular telephones at the end of the administration.” We have repeatedly told the GAO that some current staff members who served during the prior administration believe that the check-out procedures were often not followed and, in particular, building passes were not returned. The GAO apparently did not ask the Clinton staff or the National Archives to produce copies of the check-out forms, so there is no documentation to shed light on the issue. Consequently, we asked the GAO to include in its report the understanding of current staff – that some or all of the check-out procedures were not followed – and that there was no documentation to support or refute their claim. Or, alternatively, we asked that the GAO delete from its report the description of the “check-out procedures.” For reasons that were not explained to us, the GAO has chosen not to do so.

35. PAGE 23. The GAO writes, “Incidents such as the removal of keys from computer keyboards; the theft of various items; the leaving of certain voice mail messages, signs, and written messages; and the placing of glue on desk drawers clearly were done intentionally.” We believe that this list of incidents is incomplete. The GAO should also include on its list at least the following observations – all of which appear, based on their timing, recurrence, and/or content, to have been done deliberately by former staff leaving the complex.
Damage to computer keys (primarily W keys);
W keys glued to walls and placed in drawers;
Missing phone labels (some of which were replaced on January 19, only to have them removed again before noon on January 20);
Forwarded telephones (including the Chief of Staff’s phone, which was forwarded to ring in a closet);
“Crank” calls;
Phones piled on floor (observed before cleaning staff and telephone technicians entered offices);
Most if not all printers and fax machines emptied of paper in vacated offices in the [complex];
Removal of an office sign that was witnessed by current staff member;
Overturned furniture (observed before cleaning staff entered offices);
Key broken off in file cabinet that, when opened, displayed Gore bumper sticker with the words “Bush Sucks” on it;
Desk drawers locked that, when opened, contained messages disparaging President [Bush];
Gore bumper sticker stuck to the inside of copy machine;
Writing on and in desks that reads “W happens,” “Hail to the Thief,” and “GET OUT”;
Sticker inside a filing cabinet that reads “jail to the thief”;
Lamp placed on chair (observed before cleaning staff entered office);
Desk drawers turned over on the desk and on the floor (observed before cleaning staff entered office); and
Pictures and other objects placed in front of doors (observed before cleaning staff entered offices).

36. PAGE 23. The GAO states that “it was unknown whether other observations, such as broken furniture, were the result of intentional acts and when and how they occurred.” While that may be true with respect to a few pieces of the furniture, that is not a reasonable conclusion with respect to other items. For example, in our view, it is not plausible that a key was broken off accidentally in the lock of a cabinet, the key was left hanging by a thread in the lock, and, when opened, a Gore bumper sticker with the words “Bush Sucks” on it was prominently displayed. Nor, in our view, is it reasonable to conclude that desk drawers were accidentally locked and just happened to contain two pieces of paper with anti-Bush statements displayed inside. It is also not plausible to think the cleaning staff completely broke off the backs and legs of multiple chairs within the same office, and then left that furniture in the offices for the new occupants. And it would certainly be odd behavior, in our view, for occupants of these offices to have broken those chairs through normal wear and tear and to have left those chairs in the office – unrepaired – for some period of time. Likewise, the nature of some of the damage – e.g., two seat cushions slit in an identical manner on apparently new upholstery – indicates that it was not accidental. And the GAO’s conclusion that the furniture damage could have been accidental fails to take into account the testimony of one employee who served during the Clinton Administration and told the GAO that some of the upholstered furniture that she saw damaged during the transition looked “pretty good” when she visited the same offices weeks and months earlier. Similarly, it is not reasonable, in our view, to conclude that the furniture was overturned unintentionally. First, most of the witnesses observed the overturned furniture before the cleaning staff or new occupants entered the rooms.
Second, it is not plausible to think that cleaning staff would have upended extremely heavy furniture in the manner described by the witnesses:

At least two “desks turned over” in the Advance Office (observed by employee with 29 years of service in the White House)
Desks and credenzas turned on their sides (observed by two witnesses)
Coffee table standing on end, sofa upside down, and tables turned over in the . . .
In the Counsel’s Office, in each of the three offices and the secretarial space, almost every desk was overturned – “at least one desk or table in each room”
“Couple desks on side” and a “couple of chairs” turned over on the first floor of the EEOB (observed by employee with 31 years of service in the White House)
Sofa overturned with broken legs

In fact, the GAO was told by two employees of the GSA that cleaning staff would “not move” large pieces of furniture in this fashion, and none of these things would happen in the normal course of “moving” out of an office. Likewise, we know that the removal of at least some of the labels was an intentional act, occurred early on January 20, and that outgoing staff members were almost certainly responsible. The employee who oversaw White House telephone services from 1973 to 2001 told the GAO that some of the missing telephone labels “were replaced early on January 20 – before noon,” but were found “missing again later that day.”

37. PAGE 28. The GAO writes: “Staff we interviewed told us that they saw evidence of damage, vandalism, or pranks on or after January 20, 2001, when they started working in the White House complex.” This statement is misleading for two reasons. First, it suggests that all observations were made by staff who “started working in the White House complex” “on or after January 20, 2001”; in fact, many, if not most, of the observations were made by employees who worked in the complex long before Inauguration Day. Second, the statement suggests that the staff members saw evidence of damage only “on or after January 20, 2001”; in fact, many observations were made on January 19, 2001. Therefore, to be accurate, this sentence should read: “The staff we interviewed, many of whom worked here during the Clinton Administration, told us that they saw evidence of damage, vandalism, or pranks shortly before, on, and shortly after January 20, 2001.”

38. PAGE 28. The GAO repeats a statement made on page 23 that, although “[i]ncidents such as the removal of keys from computer keyboard[s], the theft of various items, the leaving of certain voice mail messages, signs, and written messages, and the placing of glue on desk drawers clearly were done intentionally,” the GAO “generally could not make judgments about whether [they] were acts of vandalism because [it] did not have information regarding who was responsible for them, when they occurred, or why they occurred.” Again, we respectfully disagree. The GAO’s statement is categorical and speaks of an unwillingness to make any “judgments” about the observations. But the GAO certainly “could” make a judgment about whether at least some – if not most – of the observations were acts of vandalism. As explained in Specific Comment Nos. 35 and 36, the GAO’s list of “clearly intentional” acts is under-inclusive, and the GAO had considerable “information regarding who was responsible for [them], when they occurred, or why they occurred.” The GAO, it seems, has simply decided to ignore that evidence.
It is simply not credible, in our view, for the GAO to claim that it cannot make a judgment about the incidents listed in Specific Comment No. 35. In addition, we believe the GAO should report the views of many current staffers (including employees who served during the Clinton Administration) who said that, based on their firsthand observations, the damage appeared to have been “deliberate,” “purposeful,” and “intentional.” For example, one employee who has worked in the White House since June 1998 told the GAO that the missing phone labels “must have been intentional,” and another employee said that the damage done to a desk in Room 102 was “clearly” intentional and “not just wear and tear.” A third person told the GAO that the broken file cabinet looked “deliberate.” And two others (one of whom has observed five White House transitions, the other of whom has worked at the White House since 1998) said that, in their view, people had “deliberately” trashed their offices. An employee who worked at the White House from August 1999 to August 2001 likewise told the GAO that the condition of 30-40 NSC rooms “was intentional, not accidental.” Two other employees (one of whom has worked at the White House since 1971) also told the GAO that some of the “trashing” was “intentional.” A Bush Administration official said that the conditions he observed were “more than wear and tear.” And an employee who has worked in the White House since 1973 said it looked like the prior occupants had “purposely trashed the place.” By including these sorts of statements, the GAO would not only be providing the reader with “information needed to . . . promote an adequate and correct understanding of the matters reported,” the GAO would also then be treating statements made by current and former staff alike. As drafted, the report contains the views of “[f]ormer Clinton administration staff” on whether the observed acts were intentional. See, e.g., Report at 8 (Former Clinton administration staff said that some furniture was broken, “but not intentionally”); Report at 46 (“The former senior advisor for presidential transition questioned whether as many as 60 keyboards could have been intentionally damaged. . . .”); Report at 83 (“[A f]ormer employee said that she saw telephone lines pulled out of walls and that they appeared to have been pulled out intentionally.”). But the GAO fails to report the views of the current staff members regarding precisely the same issue.

39. PAGE 29. We disagree with the GAO’s statement that, “[i]n the overwhelming majority of cases, one person said that he or she observed an incident in a particular location.” According to our records, in many (if not most) cases, more than one person reported seeing the same incident in the same location. Indeed, the GAO reached that conclusion in its April 2002 preliminary draft report, where it stated (on page 22) that “[s]everal people observed most incidents; however, in a few cases, only one person observed them.” The observations have not changed; we do not know why the GAO’s conclusion has.

40. PAGE 29.
The GAO states that, “[i]n some cases, people said that they observed damage, vandalism, or pranks in the same areas where others said they observed none, sometimes only hours apart.” In our April 26 comments on the GAO’s preliminary draft, we explained that, without a description of the specific instances where one current staff member recalled seeing something and another expressly disavowed seeing the same thing, it was impossible to know whether the apparent conflict in testimony could be reconciled or whether the GAO’s statement is factually accurate. We also complained that this vague sentence provides no indication of how many such conflicts existed or what types of incidents are involved. The GAO provided us with only two specific instances to which this sentence refers. The first example was an observation by two individuals – a Bush Administration official and an employee who has observed five prior transitions – of overturned furniture in the Counsel’s Office suite (Room 128), which another person claimed could not be reconciled with a third person’s alleged statement that he observed no overturned furniture in the same office. First, according to our interview notes, when the GAO asked the third person (who has worked in the White House for 33 years) specifically about Room 128 and whether he had observed overturned furniture in that office, he told the GAO that he had “no specific recollection of going into that room.” Second, this person told the GAO, during both interviews with him, that he entered rooms in the EEOB between approximately midnight and 2:30 a.m. on January 20, at which time his attention was diverted to the West Wing. This person also told the GAO, during his first interview, that when he entered the Counsel’s Office, “there were still people working” there. (This is consistent with the testimony provided by the prior occupants of that office, who said they left the EEOB close to noon on January 20.) Consequently, there is no conflict between this person’s recollection and that of the other two individuals, who said that they did not enter Room 128 until after noon on January 20. This person had no specific recollection of entering that office and, even if he had recalled seeing no overturned furniture, he would have made that observation roughly 12 or more hours before the observations of the two other individuals, leaving plenty of time for someone to overturn furniture. The second example that the GAO provided was an observation, by an employee who has observed five prior transitions, of a broken glass top and files on the floor in the Advance Office suite, which the GAO claims is inconsistent with “other staff,” who “said they didn’t see that.” While, again, the GAO has not identified who offered conflicting testimony, this employee’s observations, which he made around 12:15 p.m. on January 20, are entirely consistent with another employee’s recollection that he saw 5 or 6 broken glass tops when he surveyed the first few floors of the EEOB shortly after noon on January 20. While current staff who occupy the Advance Office may not have seen the broken glass top or dumped files, that would not be surprising since they did not enter the building until much later, allowing time for the broken glass and files to have been removed. Thus, we are aware of no instance of a direct conflict in which one person said that he or she observed damage in a location where others observed none.

41. PAGE 31.
The GAO writes: “Six EOP staff told us that they observed a total of 5 to 11 missing office signs. . . .” Four of the “[s]ix EOP staff” members are employees of the OA and served here during the Clinton Administration. A fifth employee, who worked for the White House Office, also served during the Clinton Administration. One of the employees told the GAO that a former member of the Counsel’s Office during the Clinton Administration told her that he too observed two missing brackets on the morning of January 20, 2001.

42. These observations included an office sign that an EOP employee said that she saw someone remove on January 19 outside an office in the EEOB. The EOP employee said that the person who removed the sign said that he planned to take a photograph with it and that she reported the incident to an Office of Administration (OA) employee. Further, the EOP employee said that the person attempted to put the sign back on the wall, but it was loose. This statement implies that the individual who pried the sign off the wall intended all along to put the sign back. In fact, it was only when he was confronted by an OA employee that the individual claimed that he wanted to take a photograph with it and tried to put the sign back. This employee does not believe that the volunteer intended all along to return the sign, as the GAO’s sentence suggests. The GAO fails to mention that the same employee also said that a former member of the Clinton Counsel’s Office told her that he saw that the sign was missing at some point during the night of January 19, 2001.

43. PAGE 31. The GAO fails to mention in its discussion of missing office signs that a facility request form, dated April 19, 2001, requests the “replacement of frames & medallions” on four rooms.

44. PAGE 31. We disagree with the GAO’s statement that “[f]our EOP staff said they saw a total of 10 to 11 doorknobs, which may have been historic originals, were missing in different locations.” As explained above (in Specific Comment No. 19), the GAO was told that 11 to 13 doorknobs were missing.

45. A GSA planner/estimator who said he was in charge of repairing and replacing building fixtures in the EEOB, including office signs, medallions, and doorknobs, said he received no written facility requests made to GSA for replacing missing office signs, medallions, or doorknobs during the transition. He said that the February 7, 2001, GSA facility request was not to replace a missing doorknob, but to repair one that had a worn-out part. He also said that over the past 20 years, doorknobs have been found missing about a half-dozen times in the EEOB, and not only during transitions. In addition, he said that medallions are difficult to remove and that a special wrench is needed to remove them from an office sign. First, if the GAO says that this GSA employee “said he received no written facility requests made to GSA for replacing missing office signs, medallions, or doorknobs during the transition,” it is important that the GAO also say that:
- there is, in fact, a work request, dated April 19, 2001, for “replacement of frames & medallions” on 4 rooms, as well as the February 7 work request to “put . . . on” a door;
- An employee of the OA said he provided a written request (although perhaps not on a facility request form) to the GSA for the replacement of name brackets and medallions;
- An OA manager who has worked at the White House since 1971 recalled telling the GSA to replace missing knobs, brackets, and medallions and asking the GSA to check all signs and to take corrective actions; and
- A WHO employee told the GAO that the GSA noted that the office sign on Room 457 was missing when the GSA did a survey of the rooms.

Second, we again ask that the GAO note that the employee’s recollection that the doorknob was repaired (not replaced) is inconsistent with the facility request form and the recollection of at least three current staff members, including the individual who prepared the facility request form.

46. PAGE 33. The GAO states that “[t]wo EOP staff told us that 9 to 10 television remote control devices were missing from two offices.” Here, the GAO conflates two separate reports – one, the disappearance of five or six television remote controls from the OVP; the other, the disappearance of approximately five remote controls from various offices throughout the correspondence suite – for a total of 10 to 11 missing remote controls. We believe that the GAO should discuss these incidents separately. The employee who reported the remote controls missing in the Correspondence Office worked for the Correspondence Office during the Clinton Administration. This is an important fact because this employee’s prior tenure with the Clinton Administration placed her in a position to know whether remote controls were in the rooms before the transition.

47. PAGE 35. The GAO says that “the OA associate director for facilities management estimated it will cost about $350 to make a replica of the presidential seal that was reported stolen. . . . We did not obtain any information about the possible historic value of the seal that was stolen.” That is untrue. The GAO was told, in writing, that the $350 purchase price would not purchase an exact replica of the brass seal that was stolen; that seal was purchased in the mid-1970s and is no longer available. Rather, the $350 would purchase a plastic-type casting.

48. PAGES 35-36. The GAO begins its section on “Comments by Former Clinton Administration Staff” with the following statement: The former director of an office where an EOP employee told us that she saw someone remove an office sign said that an elderly volunteer in her office removed the sign from the wall on January 19, 2001. She said that she did not know why he had removed the sign. She said that she attempted to put the sign back on the wall, but it would not stay, so she contacted OA and was told to leave it on the floor next to the door. The former office director said that she left the sign on the floor, and it was still there when she left between 8 p.m. and 10 p.m. on January 19. The GAO’s report omits the fact that another employee, who also worked here during the Clinton Administration, told the GAO that she confronted the volunteer while he was removing the sign and that she contacted the OA immediately. We believe that it was the confrontation by this employee that explains why the volunteer ultimately did not take the sign, and hence that information should be included in the report. The GAO also fails to mention that a former member of the Counsel’s Office said that the sign was missing during the night of January 19, 2001.

49. PAGE 36.
The GAO writes: “The former director of an office where an EOP employee told us that he observed two pairs of missing doorknobs said that the office had several doors to the hallway that at some time had been made inoperable, and he was not sure whether the interior sides of those doors had doorknobs.” Even if it were true that the doorknob on the interior side of the door was missing, that fact would not explain this employee’s observation that the door was missing both an interior and an exterior knob.

50. PAGE 38. It is noteworthy that the GAO describes one individual as “[a]nother EOP employee who worked in that office during the Clinton administration and continued working there during the Bush administration for 5 months,” but the GAO fails to note when and for how long a current staff member worked for the Clinton Administration. If tenure during both Administrations is relevant for the individual referred to above, wouldn’t it also be relevant for current employees? Again, we simply ask that the GAO treat statements made by staff serving during this Administration just as the GAO treats the statements made by members of the former Administration – with the same kind of characterization and level of detail.

51. PAGE 40. We believe the range provided by the GAO (“30 to 64 computer keyboards with missing or damaged . . . ‘W’ keys”) understates the actual number of observations. According to our records, which we earlier provided to the GAO and which the GAO did not dispute, staff members observed a total of 58 to 70 computer keyboards with missing or damaged W keys where a specific office or room was identified. In addition, staff members reported 150 keyboards with missing or damaged W keys where the staff member did not associate the observation with a particular room or office. The detailed data are set forth in Specific Comment No. 10.

52. PAGE 40. The GAO states that “[o]ne EOP employee said that she observed 18 keyboards with missing ‘W’ keys in an office suite. However, the manager of that office during the Clinton administration said that there were 12 keyboards in that office suite at the end of the administration.” We do not understand why the GAO includes the second sentence in its section on “Observations of EOP and GSA Staff,” instead of the section on “Comments By Former Clinton Administration Staff,” where it would appear to belong.

53. PAGE 40 n.19. In calculating its range of missing or damaged W keys where the observer identified a specific office or room, the GAO “included the observation of one EOP employee who said that she saw 6 to 10 keyboards missing ‘W’ keys in the West Wing.” The GAO is referring to an individual who was an employee of the Office of Administration. We ask that the GAO use her title – Branch Chief for Program Management and Strategic Planning in the OA Information Systems and Technology Division – and note (as the GAO did in identifying the person referred to in Specific Comment 50) that this individual worked in that position during the Clinton Administration and during the first four months of the Bush Administration.

54. PAGE 41.
The GAO continues its discussion of damaged keyboards on page 41: “Five other EOP staff said that they saw a total of four keyboards with inoperable, missing, or switched keys; they said they were not the ‘W’ keys or could not remember which keys were affected.” The GAO fails to mention that, in addition to these five additional observations, the OA’s Associate Director for Information Systems and Technology Division reported that she observed “some glued down space bars.” Also, for clarity, we recommend rewriting that sentence to read: “Five other current staff members said that they saw, in other rooms or offices, an additional four keyboards that had damaged keys (e.g., a key or keys that were inoperable, switched, or missing). In these cases, either it was not the ‘W’ key that was affected, or the observer could not specifically recall the key or keys that were damaged.”

55. W KEYS TAPED OR GLUED ON WALLS

- W key “stuck over doorway” – EEOB, OVP 2nd floor*
- “some” W keys on walls**
- 10-12 Ws glued on the wall, over the doors
- “some keys” were taped above doorways – press secretary’s office; for example, key was taped above door to press secretary’s office suite***

*GSA employee, worked in the White House complex during Clinton Administration
**OA employee, worked in the White House complex during Clinton Administration
***OA employee, worked in the White House complex during Clinton Administration

Second, the GAO fails to mention that two other staff members also reported that they found W keys sitting next to keyboards and computers. Third, five (not four) staff members “observed piles of keyboards or computers or a computer monitor overturned” – including two WHO employees and three OVP employees – in multiple locations in the EEOB.

56. PAGES 41-42. The GAO’s two paragraphs on the observations of computer personnel regarding keyboards fail, in our view, to present the information that the GAO received in a fair and objective manner. These paragraphs (like the entire discussion of damaged keyboards) appear to be designed to downplay the extent of the damage reported. The GAO writes: In addition to the EOP staff we interviewed about their observations regarding the keyboards, we met with EOP personnel who worked with computers during the transition. The OA associate director for information systems and technology provided us with documentation indicating that on January 23 and 24, 2001, the EOP purchased 62 new keyboards. The January 23, 2001, purchase order for 31 keyboards indicated that “[k]eyboards are needed to support the transition.” The January 24, 2001, purchase order for another 31 keyboards indicated “[s]econd request for the letter ‘W’ problem.” The OA associate director for information systems and technology said that some of the replacement keyboards were taken out of inventory for the new administration staff, but she did not know how many. In an interview in June 2001, this official said that 57 keyboards were missing keys during the transition and 7 other keyboards were not working because of other reasons, such as inoperable space bars. After later obtaining an estimate from the branch chief for program management and strategic planning in the information systems and technology division, who worked with computers during the transition, that 150 keyboards had to be replaced because of missing or damaged ‘W’ keys, we conducted a follow-up with the OA associate director for information systems and technology.
In February 2002, the OA associate director for information systems and technology said that her memory regarding this matter was not as good as when we interviewed her in June 2001, but estimated that 100 keyboards had to be replaced at the end of the administration and that one-third of them were missing ‘W’ keys or were intentionally damaged in some way. She also said that of those 100 keyboards, about one-third to one-half would have been replaced anyway because of their age. This official said that she took notes regarding computers during the transition, but she was unable to locate them. We offer the following specific comments: The GAO basically ignores the comments of the IS&T Branch Chief by relegating her observation to the passing phrase, “[a]fter later obtaining an estimate from the branch chief . . . who worked with computers during the transition, that 150 keyboards had to be replaced because of missing or damaged ‘W’ keys . . . .” While the report dismisses her observations, this employee may, in truth, have been the one person in the best position to assess the total damage. This employee worked during the transition as the person with the cart who continually moved equipment around. She moved the broken and old items out of offices and made deliveries of replacement equipment. She thus personally saw many of the damaged keyboards, which she transported to a temporary workroom in the EEOB. She did this throughout the Inaugural weekend and into the following week. She specifically recalls that, on one of her last deliveries of broken items to the temporary workroom, someone said that the count of damaged keyboards was up to 150. Contrast the GAO’s treatment of the IS&T Branch Chief’s observations with its discussion of another individual, the IS&T Associate Director. The latter individual told the GAO (but the GAO fails to mention) that she was “not focused on keyboards” during the transition and that she “personally saw” only about “10 keyboards” with missing W keys and only heard about others. Her estimates of the total number of keyboards damaged were based purely on inferences drawn from what others may have said. The GAO nonetheless details the IS&T Associate Director’s statements, but not those of the IS&T Branch Chief. Even then, the GAO’s reporting of the IS&T Associate Director’s statements is incomplete. The GAO fails to mention, for instance, that the IS&T Associate Director said that she “saw personally” a concentration of missing W keys in the former First Lady’s Office and in the OVP; that there were “some keyboards” where the space bar had been glued down; and that she was “very upset at the condition” in which some of the keyboards were left. In describing her second interview, the GAO fails to mention that it asked her to estimate the number of keyboards with missing W keys, even though the GAO had asked the same question during her first interview (seven months earlier) and the GAO did not remind her about the earlier inquiry. Nor did the GAO ask her whether she had any reason in February 2002 to question the accuracy of what she had said in June 2001.
The GAO also fails to say that the IS&T Associate Director recounted what the contractor who packed the damaged keyboards had said – namely, that there were “6 boxes of 20 keyboards or more with ‘W’ problems or space-bar problems.” The GAO pressed the IS&T Associate Director to give her own estimate of damaged keyboards (again, even though she had told the GAO that she did not have personal knowledge about the keyboards), and she said that she “thinks around 100 were damaged,” and “if there were 100,” then roughly one-third might have had a “W” missing “or looked like something intentional.” The GAO says that it “met EOP personnel who worked with computers during the transition.” The GAO actually did not “meet” the IS&T Branch Chief; the GAO interviewed her by telephone. So we would recommend rephrasing the report to say that the GAO “spoke to” computer personnel. Also, the IS&T Associate Director and the IS&T Branch Chief are both former employees of the OA, and both served during the prior Administration. The individual referred to in the paragraph immediately above is employed by a contractor, Northrop Grumman. Finally, the GAO misquotes the IS&T Associate Director when it states that she “also said that of those 100 keyboards, about one-third to one-half would have been replaced anyway because of their age.” The IS&T Associate Director told the GAO that the keyboards would have been replaced “if they had not been changed out in 4 or 8 years.” It is not clear how many (if any) of the damaged keyboards were four years old or older. Therefore, it is not fair to say, and the IS&T Associate Director did not say, that “about one-third to one-half would have been replaced anyway”; at most, they may have been.

57. PAGE 43. The GAO says that “12 boxes of keyboards, speakers, cords, and soundcards were discarded,” and “the contract employee who prepared that report said that she did not know how many keyboards were discarded, but that each box could have contained 10 to 20 keyboards, depending on the size of the box.” We believe that the GAO should also explain that the contractor personally packed some of the boxes and that, for those, she filled the box with keyboards and then used excessed speakers, cords, and soundcards to fill in gaps and ensure that the keyboards would not shift in the box.

58. PAGE 44. The GAO discusses the “costs” associated with the damaged keyboards: “[W]e are providing cost estimates for each of the various totals provided by EOP staff. In reviewing the costs, it must be recognized that according to the OA associate director for information systems and technology, one-third to one-half of the keyboards for EOP staff, including the ones provided to EOP staff at the beginning of the Bush administration, may have been replaced every 3 or 4 years because of their age. Therefore, some of the damaged keyboards would have been replaced anyway. Below is a table showing the different costs that could have been incurred on the basis of different estimates that we were provided regarding the number of damaged keyboards replaced. The cost estimates were calculated on the basis of the per-unit cost of the 62 keyboards that the White House purchased in late January 2001 for $4,850, or $75 per keyboard.” This paragraph is followed by a table entitled “Estimated costs of replacing damaged keyboards.” The table lists four estimates.
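Before turning to those estimates, we note that the arithmetic is easy to reproduce. The following sketch is purely illustrative (it is not part of the GAO’s report): it applies the GAO’s stated $75 unit price to the GAO’s 30-to-64 range, to one-third of the 100 keyboards estimated in February 2002, and, for comparison, to the 58-to-70 count set out in Specific Comment No. 51.

```python
# Illustrative recomputation of the keyboard-replacement cost estimates.
# The $75 unit price is the GAO's figure (62 keyboards purchased in late
# January 2001 for $4,850); the counts come from the GAO's table and from
# our Specific Comment No. 51.
UNIT_COST = 75  # dollars per keyboard

ranges = {
    "GAO range of observed damaged keyboards (30 to 64)": (30, 64),
    "One-third of the 100 keyboards estimated in February 2002": (33, 33),
    "Our count where a specific room or office was identified": (58, 70),
}

for label, (low, high) in ranges.items():
    if low == high:
        print(f"{label}: ${low * UNIT_COST:,}")
    else:
        print(f"{label}: ${low * UNIT_COST:,} to ${high * UNIT_COST:,}")

# Prints $2,250 to $4,800; $2,475; and $4,350 to $5,250, respectively.
```

As the output shows, applying the GAO’s own unit price to our count yields a noticeably higher cost estimate than either figure the GAO tabulated.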
The first estimate, for $2,250-$4,800, is based on the GAO’s “range of 30 to 64 keyboards that were observed by EOP staff with missing or damaged keys.” The second estimate, for $2,475, is based on a statement that the IS&T Associate Director made that she “thinks around 100 were damaged,” and “if there were 100,” then roughly one-third might have had a W key missing “or looked like something intentional.” The GAO’s first estimate is simply wrong, in our view, because there were a total of 58 to 70 (not 30 to 64) keyboards with missing or damaged W keys where the witness specified the room or office where the keyboard was located. In addition, contrary to the GAO’s statement in the table, that range does not represent “keyboards that were observed by EOP staff with missing and damaged keys.” It represents only those where a room or office was specifically identified; it does not account for the observations of other “EOP staff” (including the IS&T Branch Chief) who told the GAO about additional damaged keyboards. It is remarkable to us that the GAO would include the second cost estimate when the GAO itself acknowledges that the IS&T Associate Director’s February 2002 estimate of missing and damaged keyboards was unreliable. See Report at 42 (“[the IS&T Associate Director] said that her memory regarding this matter was not as good as when we interviewed her in June 2001.”). It is all the more peculiar given that the GAO is unwilling to engage in the same sort of cost estimation when it comes to estimating the cost of missing telephone labels, the repair and replacement cost for damaged furniture, and many of the other categories of reported damage. Also, as stated earlier, it is not accurate to represent that the IS&T Associate Director said “one-third to one-half of the keyboards for EOP staff, including the ones provided to EOP staff at the beginning of the Bush administration, may have been replaced every 3 or 4 years because of their age.” The IS&T Associate Director told the GAO that the keyboards would have been replaced “if they had not been changed out in 4 or 8 years.” Again, it is not clear how many (if any) of the damaged keyboards were four years old or older. Therefore, it is not fair to say, as the GAO does, that “some of the damaged keyboards would have been replaced anyway”; at most, they may have been.

59. PAGES 46-47. We believe that the GAO has underreported the extent of the damaged furniture. As set forth in the table that appears above (Specific Comment No. 14), 17 current staff members reported a minimum of 31 to 33 pieces of damaged furniture – not counting the furniture that was defaced with writing and stickers.

60. PAGE 47. The GAO writes that “[s]ix EOP staff . . . said that the locks on four desks or cabinet drawers were damaged or the keys were missing or broken off in the locks.” We do not recall anyone complaining simply because “keys were missing” – which, in the ordinary case, would hardly be called damage, vandalism, or a prank. Rather, current staff members observed situations where it appeared that keys may have been purposefully broken off in the locks or drawers were left locked intentionally and the keys taken or discarded. For instance:

- Four individuals told the GAO that a key was broken off inside the lock on a file cabinet in Room 197B; that the key was still there, hanging in the lock by a metal thread; and that, when a locksmith opened the cabinet, a Gore bumper sticker with the words “Bush Sucks” was displayed inside.
- A different employee told the GAO that his desk drawers were locked and no key was found; when the drawers were pried open, there were two pieces of paper inside that had “anti-Bush” statements.

This is another instance where the GAO’s lack of detail prevents the reader from having a complete and accurate understanding of the damage that was found.

61. PAGE 47. The GAO is mistaken when it says that “[f]ive EOP staff . . . said that they observed writing inside drawers in five desks. . . . We were shown the writing in four of the five desks.” Again, the GAO has underreported the number of observations. The GAO has told us the names of the “[f]ive EOP staff” to whom it refers, each of whom, according to the GAO, observed only one desk with writing inside drawers. The GAO omits, however, that one of these employees showed the GAO a second desk in another room with writing on the pull-out tray that reads “W happens.” Thus, five current staff members observed writing in or on six desks; not all the writing was “inside drawers”; and the GAO was shown the writing in five of the six cases. We also believe that the content of the messages is important because it indicates when and by whom the writings were made:

MESSAGES WRITTEN ON OR IN DESKS

- Writing in desk drawer reads “Take care of this place. We will be back in four (4) years! (1/93)”; shown to GAO
- Writing on a pull-out tray on desk that reads “W happens”; shown to GAO
- Writing in top left drawer of desk that reads “GET OUT”; shown to GAO
- Writing in top middle drawer of desk that reads “Hail to the Thief”
- Writing in middle drawer of desk that wishes all “who work here” “good luck”; shown to GAO

62. PAGE 47. The GAO has underreported the number of pieces of furniture that were observed overturned. Our notes show (notes that were provided to the GAO and that the GAO did not dispute) that five White House employees, one OA employee, and one GSA employee reported seeing at least 14 to 19 pieces of furniture that were on their sides or overturned, not the “8 to 10 pieces” that the GAO reports. The table detailing each observation of overturned furniture is found above in Specific Comment No. 17.

63. PAGE 47. The GAO writes that “four EOP staff said they saw furniture that appeared to have been moved from areas where they did not appear to belong, such as desks moved up against doors.” There were actually five such individuals – specifically, three WHO employees, one OVP employee, and one NSC employee.

64. PAGES 47-48. We believe that the GAO is mistaken when it reports that “[t]he director of GSA’s White House service center said that furniture could have been overturned for a variety of reasons other than vandalism, such as to reach electrical or computer connections.” Indeed, according to our notes, just the opposite is true: two GSA managers told the GAO that cleaning staff would “not move” large pieces of furniture in this fashion, and that none of these things would happen in the normal course of “moving” out of an office.

65. PAGE 48. The GAO’s description of the “four to five desks found with a sticky substance on them” is incomplete. First, it is unclear from the GAO’s description that the vandalized desks were in the Vice President’s West Wing office area and included the Vice President’s own desk. Second, the “sticky substance” was a thick layer of an oily glue-like substance (which one observer described as something like a mixture of Vaseline and glue). Third, the substance was smeared on the bottom of the middle drawer of the desks.
Consequently, when someone sat at the desk, the substance would get on the person’s legs or, when the person tried to open the drawer (which had no handles), on his or her hands. (In fact, one employee of the Office of the Vice President told the GAO that the substance got on her pants.) Fourth, this OVP employee also told the GAO that, on her desk, the substance was smeared all over the top of the right pull-out tray of the desk, as well as under her middle desk drawer. A second OVP employee likewise told the GAO that the substance was on her desk’s pull-out tray, as well as under her middle desk drawer. Fifth, an OVP employee and two OA employees said that the desk-drawer handle on at least one of the desks was missing, and one of the OA employees said that the handle was found inside the drawer along with more of the glue substance. Finally, the substance on some of the desks was first discovered between midnight on January 19 and noon on January 20, 2001. We believe this additional information is relevant and should be included in the GAO report in order to promote an adequate and correct understanding of the matters reported. See Government Auditing Standard 7.51.

66. PAGE 48. The GAO’s list of “[d]ocumentation relating to the observations” of damaged furniture is incomplete. A facility request form states that one named employee “[n]eeds key to lateral file cabinet. Cabinet is locked.” Facility Request No. 56695 (Jan. 29, 2001).

67. PAGE 49. The GAO states that “[d]efinitive information was not available regarding when the furniture damage occurred; whether it was intentional and, if so, who caused it.” While “definitive” proof may be lacking in some cases, that does not mean that the GAO (or the reader) must ignore both common sense and the overwhelming circumstantial evidence that does, in fact, indicate when the damage occurred, whether it was intentional, and who the likely perpetrators are. In some cases, the circumstances indicate that the damage was intentional, occurred shortly before the Inauguration, and the most likely perpetrators were members of the former Administration. For example:

- With respect to the key broken off in a file cabinet in Room 197B, the key was found still hanging in the lock by a metal thread (suggesting that the damage occurred not long before the transition) and, when the locksmith opened the cabinet, a Gore bumper sticker with the words “Bush Sucks” was prominently displayed inside (suggesting that the damage was intentional and done by a member of the former Administration).
- Similarly, when the locked desk drawers were pried open in Room 103, two pieces of paper with anti-Bush statements were found displayed inside. Again, in our view, these facts indicate that the damage was intentional, occurred shortly before the transition, and was done by a member of the former Administration.

In other cases, the person who observed the damage firsthand told the GAO that the nature of the damage itself, and the surrounding conditions, suggested that the damage was intentional and/or was done shortly before the transition weekend.
For example:

- One person told the GAO that the drawers on her desk “clearly” had been kicked in intentionally and that it was “not just wear and tear”;
- A second person told the GAO that it was unlikely that the slit seats were the result of wear and tear because “the fabric otherwise looked new,” and “it looked like someone had taken a knife or sharp object to the seat”; and
- A third person told the GAO that she saw damaged furniture in offices where things had looked “pretty good” weeks or months earlier.

In still other cases, the nature of the damage suggests that it occurred shortly before the Inauguration because the offices’ prior occupants and cleaning staff would not have let the damage remain in the office for long. For example, it is hard to believe that occupants would not fix or remove a bookcase with broken glass (with shards of glass still in the cabinet) or would allow chairs with broken legs and no backs to remain in an office suite for very long.

68. PAGES 49-50. The GAO includes in its report statements from two employees – one who said that the damaged furniture that she observed was “not something intentional” and a second individual who said, according to the GAO, that the four chairs with broken legs in her office were “not necessarily intentional.” First, the second employee told the GAO that, while it was possible that the legs were broken through wear and tear, she thought it “unlikely that you’d keep a broken chair in your office” in that condition. Second, and more important, it is remarkable to us that the GAO includes in its report the two statements by current employees who noted that particular damage was “not necessarily intentional,” when the GAO has refused, despite our requests, to include statements from individuals (in some cases, the same individuals) who stated that damage which they observed appeared to be intentional. For instance:

- One person told the GAO that the desk drawers were clearly damaged intentionally and not just wear and tear.
- A second person said that “it was intentional, not accidental” with respect to the damage he observed in dozens of rooms.
- A third person said that the broken key in the file cabinet looked “deliberate” to him.
- A fourth person said that the missing phone labels “must have been intentional.”
- A fifth person said that the rooms he observed were “deliberately made to look like someone was communicating a message.”
- A sixth person said that some of the conditions he saw looked “intentional.”
- A Bush Administration official who has observed a prior transition said the conditions of the offices were “more than wear and tear.”
- An employee who has observed five prior transitions said the offices looked like a “[l]arge number of people . . . deliberately trashed the place.”
- A seventh person told the GAO that the repairman who fixed the broken copy machine found a pornographic or inappropriate message when he pulled out the copier’s paper drawer and that the repairman thought the paper drawers had been “intentionally realigned” so that the paper supply would jam.
- An OA manager who has worked at the White House since 1971 said that some of the damage was the result of “intentional trashing.”
- An employee with over 30 years of service in the White House said it looked like the prior occupants had “purposely trashed the place.”

69. PAGE 51.
The GAO’s discussion of the “costs” attributable to the damaged furniture fails to mention, or make any attempt to estimate, the costs incurred in replacing the furniture that was discarded because it was beyond repair. For instance, the GAO places no value on replacing the four chairs that an employee said had broken legs or the conference room chair that two other employees said had its back broken out. Likewise, the GAO made no attempt to determine how much it costs to reupholster chairs like the three that one employee told the GAO had slit seats. Nor did the GAO seek estimates on the cost of new glass tops for desks or the cost to replace or repair a desk that had its drawers kicked in. The GAO has simply ignored these costs. Similarly, the GAO has made no attempt to quantify the very real costs incurred in, for example, having movers remove damaged furniture and return with replacement furniture; having movers upright overturned furniture; having personnel (like the employees who found it, or the cleaning staff) clean the glue-like substance; or having personnel divert their time and attention to removing or fixing furniture that should have been found in working condition.

70. The former manager of an office where two EOP staff told us they observed one or two chairs with broken or missing arms said that arms on two chairs in that suite of offices had become detached a year or two before the transition and that carpenters had tried to glue them back, but the glue did not hold. We understand that the GAO is referring here to the former First Lady’s offices – now the suite occupied by the Political Affairs office. At least six pieces of furniture were found damaged in that suite – some under circumstances that indicate the damage was intentional – in addition to the two broken armchairs. These additional reports of damaged furniture, as well as other damage found in the same suite, undermine the former manager’s innocent explanation for the two chairs. And the former manager of the office apparently provided no explanation for the additional damage. However, because the GAO is unwilling to specify the locations where damage was found, and has not included in its report the details that indicate that the damage was intentional, readers are unable to assess for themselves the credibility of the former manager’s explanation.

71. PAGE 53. The GAO reports that “[t]hree former staff” of the Vice President’s West Wing Office said they “were not aware of glue being left on desks” and that one of those employees “said that her desk was missing handles when she started working at that desk in 1998, and it was still missing them at the end of the administration.” First, this explanation is inconsistent with one employee’s observation that a handle was found inside the desk with more of the oily glue-like substance on top of it. Second, the reader again is unable to evaluate the credibility of the comments made by the former staff members because the report does not say where these vandalized desks were located or describe the various other damage and pranks that were found in the same location.
For example, it is hard to believe the former staff members’ claim of ignorance when one also knows that longtime OA employees found, in the Vice President’s West Wing office, “vulgar words” on a board; signs comparing the President to a chimpanzee on the walls and interspersed in the reams of paper in the printers, copy machines, and fax machines (observed by three employees); empty champagne bottles; and a basketball stuck on a lighted ledge (each observed by one employee).

72. PAGES 53-54 and n.32. The GAO is just plain wrong when it says that “[d]uring [the] initial interview with [the] employee, she said that the desks with burn marks and scratches were in a particular office” and that “[d]uring a follow-up interview . . . she said her observations pertained to an office suite, rather than a single office.” She said no such thing. During both interviews, this employee explained, in no uncertain terms, that her observations were with regard to a suite of offices. Indeed, there can be no doubt, because this employee personally took the two GAO investigators into the two offices that she was referring to. Thus, this employee’s observations referred to multiple offices, and she did not say that the desks (and there was more than one) that she observed with scratch marks were in Room 160A, as the GAO apparently told the former occupant. Consequently, the former occupant’s statement that “he did not recall seeing any scratches . . . in his office” is somewhat beside the point because it does not address the condition of desks in the other office. Unfortunately, the GAO’s report leaves the impression that the former occupant’s statement has directly rebutted an allegation that was made by a member of the current staff, when it does not.

73. PAGE 54. The GAO’s report details at length the testimonials of former staff members who said that they observed no overturned furniture: Three former occupants of a suite of three rooms where two EOP officials told us they observed a table and two desks overturned in the afternoon of January 20 said that no furniture was overturned in their offices when they left on January 20 and that their desks would have been difficult or impossible to move because of the weight of the desks. One of the three former occupants said that he was in his office until 3:30 a.m. or 4:30 a.m. on January 20, the second former employee said he was in his office until 10:00 a.m. or 11:00 a.m. on January 20, and the third former employee said that she was in her office until 11:50 a.m. or 11:55 a.m. on January 20. Regarding another office where two EOP officials told us that they observed overturned furniture, the former senior advisor for presidential transition said that he was in that office after 11:00 a.m. on January 20, and he did not see any overturned furniture. Similarly, the former head of that office, who said that he left the office around 1:00 a.m. on January 20, said that he did not observe any overturned furniture. If the GAO is willing to include this detailed response by members of the former staff, we ask that the GAO also explain that two of the individuals who observed the overturned furniture have worked in the White House complex for 29 and 31 years, respectively (including during the Clinton Administration), and that they both observed overturned furniture between approximately 1 a.m. and 5 a.m. on January 20. Likewise, a GSA employee, who served during the Clinton Administration, reported seeing overturned furniture.
The GAO’s report should also say that two other individuals observed overturned furniture at approximately 12:15 p.m. on January 20.

74. PAGES 55-56. We believe that the GAO’s data on cut and pulled cords are not accurate. Our records show that 5 staff members (4 White House and 1 OA) told the GAO that they saw a minimum total of 32 to 35 telephone lines or other cords either cut or pulled from the wall, as follows:

TELEPHONE AND OTHER CORDS CUT OR PULLED FROM WALL

- “total of 2 or 3 cords ripped from the walls” so that the “cables behind the jack were showing”
- “phone cable ripped from wall” – 182 suite (Scheduling)
- “phone line pulled out – jack and all”
- “some plugs” damaged
- “1 or 2” pulled cables or broken jacks that had been “yanked”**
- “couple” pulled cables or broken jacks that had been “yanked”**
- Wires torn out of the wall

**OA employee, worked in the White House complex during Clinton Administration

In addition, a facility request form shows that, on January 24, 2001, an employee asked for “electrical services” in her offices, and specifically asked for someone to “organize all loose wires.” Facility Request No. 56662.

75. PAGE 56. We believe that the GAO has again underreported the observations of phones with missing labels. Based on conservative estimates and calculations, 5 (not 4) staff members (2 White House employees, 2 OA employees, and 1 OVP employee) recalled observing, in specific offices or rooms, at least 112-133 telephones that had no labels identifying the telephone numbers (not “99 to 108”). A table setting forth our data appears above in Specific Comment No. 24. Oddly, in calculating the number of missing labels in the OVP’s second floor offices, the GAO states (at fn. 36) that it “included a range of 62 to 82,” even though the GAO concedes that the “EOP indicated that there were 82 telephones in that office suite in January 2001.” Why then would the GAO use a range of 62 to 82, particularly since we provided the GAO with an OA document showing that, as a conservative estimate, 82 telephones were in that suite? In addition to the 112-133 missing labels where the observers identified specific rooms or offices, an employee with over 30 years of service in the White House told the GAO that he personally saw “more than 20” phones with missing labels; an OA manager who has worked at the White House since 1971 said that there were “many instances of missing labels on the phones”; and a third person (a new employee who coordinated telephone services during the first month of the Administration) said that the labels on the “majority of the phones” – or “roughly 85 percent” of the phones – in the EEOB and the White House had been removed or contained incorrect numbers. If the GAO is willing to include the OA telephone services coordinator’s personal observation that “she . . . observed 18 telephones that were missing number labels,” we believe the observations of these other telephone and facility officials should also be included, and described accurately, in the report. The GAO says that the new employee who coordinated telephone services during the first month of the Administration “estimated that 85 percent of the telephones in the EEOB and the White House were missing identifying templates or did not ring at the correct numbers.” She actually said that she found that labels on the “majority of the phones” – or “roughly 85 percent” of the phones – in the EEOB and the White House had been removed or contained incorrect numbers.
The GAO also downplays a critical fact about the missing phone labels. An employee who worked as White House Director of Telephone Services for 29 years told the GAO that “[c]ertain labels were replaced early on Jan. 20 – before noon,” but the labels were found “missing again later that day.” In our view, this fact shows that no innocent explanation exists for at least some of the missing labels; their removal was an intentional act, apparently by members of the former Administration.

76. PAGE . We believe that the GAO has underreported the number of telephones that were forwarded and reforwarded to ring at different telephones throughout and between the EEOB and West Wing. As set forth in the table (see Specific Comment No. 26), seven White House staff reported that roughly 100 telephones were forwarded to ring at other numbers. We do not understand why the GAO treats the observations of the employee who coordinated telephone services during the first month of the Administration differently from those of the other observers. As the GAO concedes, this employee’s sole responsibility during the first month of the administration was to address telecommunications problems and, in particular, to work as the “middleman” between the incoming staff who reported the problems and the telephone contractors and personnel who repaired them. This employee told the GAO that she “tried to go into every physical space” in the West Wing and the EEOB “to survey phones.” Thus, her observations are at least as competent as those of the other observers. See Government Auditing Standard 6.54(f) (“Testimonial evidence obtained from an individual who . . . has complete knowledge about the area is more competent than testimonial evidence obtained from an individual who . . . has only partial knowledge about an area.”). Finally, the GAO fails to mention that this employee told the GAO that the Chief of Staff’s phone was forwarded to ring in a closet. This is, in our view, another important (but omitted) fact because it shows that the phones were not forwarded for legitimate business purposes.

77. PAGE 57. In reporting on telephones that were unplugged and/or piled up, the GAO fails to state that 25 or more offices in the EEOB had phones piled up or unplugged. Nor does the GAO explain that one of the observers was an employee who has supervised White House telephone services for more than 30 years. Given his more than 30 years of experience managing telephone services in the White House complex, this individual’s observation is particularly noteworthy. In addition, since this individual identified the unplugged phones as an example of the vandalism, damage, or pranks that he observed while surveying the EEOB on January 19 and the early morning of January 20, it is clear that the phones were not unplugged by the telephone services personnel or by the cleaning staff, who had not yet entered these rooms. We believe that this information is important and that, in its absence, the report is incomplete. See Government Auditing Standard 7.51 (“Being complete requires that the report contain all information needed to satisfy the audit objectives, promote an adequate and correct understanding of the matters reported, and meet the report content requirement.”).
The information is particularly important because the GAO states on page 63 that “[t]he former manager of an office where an EOP employee told us he observed telephones that were unplugged said that no one in that office unplugged them” and “[a] former Clinton administration employee in another office where EOP staff told us they observed telephones that were piled up said that there were extra telephones in that office that did not work and had never been discarded.” Since the GAO never mentions that there were observations of unplugged and piled phones in 25 or more offices, the reader does not know that the comments of the former Clinton administration employees, even if true, explain what happened in only 2 of 25 (or more) offices. Thus, the reader has no basis for placing the comments of the former employees in context, nor for understanding that the former employees apparently have no explanation for the remaining observations.

78. PAGE 57. In one of its more dramatic understatements, the GAO writes: “Two EOP staff said that they found telephones that were not working.” Again, because of the GAO’s failure to include important details, it has dramatically downplayed the extent of the problems observed. For instance, an individual who is employed by the OA and worked here during the Clinton Administration told the GAO that there was “no working phone on south side of building.” Since there are a minimum of 26 offices on the south side of the first floor of the EEOB, each of which would contain at least one phone – and likely many more than that – the problem with non-working phones was extensive.

79. PAGE 58. The GAO writes: “The EOP provided documentation summarizing telephone service orders closed from January 20, 2001, through February 20, 2001, containing 29 service orders that mention labels; 6 of the 29 service orders were for work in offices where telephone labels were observed missing. All of the 29 service orders mentioning labels were part of orders for other telephone services. In discussing these documents, the OA telephone service coordinator said that the requests for labels did not necessarily mean that the telephones had been missing labels with telephone numbers. She said that a new label might have been needed for a new service, such as having two lines ring at one telephone.” With all due respect, that statement is false. First, the GAO never “discuss[ed]” the closed order list with the OA telephone services coordinator. The GAO never showed her the document, nor expressly discussed its contents with her. While the GAO did ask her whether a request to label a telephone always meant that the label was missing (and she rightly said that it did not), the GAO did not ask her about the document, any particular order on that list, or the labeling that occurred during the first few days of the Administration. Second, the GAO’s suggestion that something other than missing labels precipitated the request for new labels might be plausible if the GAO had nothing to consider except the closed order list. But that is not the case. Here, the GAO concedes that there were observations of more than 100 missing labels during the first days of the Administration. Under those undisputed circumstances, it is beyond doubt that the requests to “PLACE BUTTON LABEL ON SET” were to replace the missing labels.
Third, the closed order list does more than “mention labels.” If the GAO provided adequate detail in its report, the reader would learn that the document shows, for example:

- On Monday, January 22, 2001, a telephone tech was called by the OVP because the phones “NEED BTN LABELS, TECH TO LABEL SETS.” The tech billed “4HRS” (4 hours) on this order. TSR No. 01010195.
- On January 31, 2001, a tech was called to Room 273 of the OVP because, among other things, the phones “NEED BTN LABELS TYPED, PLACED.” The tech billed “2HRS” on this order.
- On February 5, 2001, a tech was called to Room 200 because the phones “NEED LABELS PLACED ON SETS.” The tech billed “2HRS” on this order.
- On February 9, 2001, a tech was asked to “REPROGRAM IN ROOM 276 EEOB, PLACE BUTTON LABEL ON SET.” The tech billed “1HR” on this order.
- Also on February 9, a tech was asked to “REPRGRM in RM 279 EEOB, . . . PLACE LABEL ON SET.” The tech billed “30MINS” to this order.
- On January 29, 2001, a tech was called to Room 18 to, among other things, “REPLACE LABEL.” The tech billed “1HR” to this order.
- On February 8, 2001, a tech was asked to “REPRGM RM 148 . . . NEED LABEL PLACE.” The tech billed “30MINS” to this order.
- On January 30, 2001, a tech was called to Room 113 because the occupants “NEED LABEL PLACED ON SET BY TECH.” The tech billed “1HR.”
- On February 3, 2001, a tech was called to Room 100 to “PLACE BTN LABEL.” The tech billed “1HR.”
- In six separate service orders on February 3, 2001, a tech was asked to “REPROGRAM” phones in the Room 100 suite and “TO PLACE LABEL ON SET.” TSR No. 1020330; see also TSR Nos. 1020325 (“NEED LABELS PLACED ON SET”), 1020328 (“NEED BTN LABELS”), 1020329 (“NEED LABELS”), 1020331 (“NEED LABELS PLACED ON SET”), 1020340 (“NEED LABELS PLACED ON SET”). The tech billed “1HR” on each service order.
- On February 5, 2001, a tech was told that the occupants of Room 135 “NEED LABEL PLACED ON SET.” The tech billed “1HR” for this order.
- Also on February 5, 2001, a tech was asked to “REPROGRAM SET ROOM 137” and “PLACE LABEL ON SET.” The tech billed “2HRS.”
- On February 3, 2001, someone in Room 131 asked a tech to “PLACE LABEL ON SET.” The tech billed “1HR.”
- In a separate service request on February 3, 2001, a tech was asked to “REPROGRAM IN ROOM 137 EEOB” and “PLACE LABELS ON SET.” The tech billed “1HR.”
- On February 3, 2001, a tech was told that the occupants of Room 154 “NEED BUTTON LABEL,” among other things. The tech billed “1HR” to this order.
- On February 5, 2001, a tech was told that “LABELS ALSO NEEDED” in a Presidential Personnel Office. The tech billed “1HR” for this order.
- On February 3, 2001, a tech was asked to “REPROGRAM IN RM 131” and “PLACE LABEL ON SET.” The tech billed “1HR.”
- On February 2, 2001, a tech was asked to “REPROGRAM IN ROOM 184 EEOB” and “PLACE LABEL ON SET.” The tech billed “1HR.”
- On February 8, 2001, a tech was told that the occupants of Room 87 “NEED LABELS PLACED ON SET.” The tech billed “1HR” on this order.

Fourth, the GAO was provided – but ignores – many of the individual work orders (so-called Telecommunications Service Requests (TSRs)) that are summarized on the closed order list. The TSRs are important because they provide additional information about the need to label the telephones and because, in some cases, they identify additional requests to place labels on telephones that are not referenced on the closed order list. A sampling shows:

- TSR No. 01010183: “NEED Button labels typed. Tech to label sets.”
TSR No. 01010184: “Room 274, 272, 284, & 286. Program phones . . . NEED Button labels typed. Need tech to place labels on sets.”

TSR No. 01010185: “Room 272 & 276. Program phones . . . NEED Button labels typed & placed on sets.”

TSR No. 01010195: “Reprogram sets in Room 263, 265, 266, 267, 268, 269 and 271. NEED labels placed on each set.”

TSR No. 01010206: Among other things, “NEED TECH TO PLACE BUTTON LABELS” on sets in Room 270.

TSR No. 01010306: Among other things, “Replace labels on all phones that removed” in Room 18.

TSR No. 01020463: “Need label placed on set” in Room 148.

TSR No. 01010342: “NEED Label placed on set” in Room 100.

Similarly, the TSRs indicate, in some cases, where a staff member has reported a phone that is not ringing when the number on the phone is dialed – that is, it has been forwarded. TSR No. 01020225, for example, says line “does not ring on set 6-7453.”

Finally, TSRs exist for work – “including . . . relabeling” – performed on January 20 and 21, where individual work orders were often not completed. TSR No. 01010382 shows that, on Saturday, January 20, 2001, the techs worked 114 hours, at $113.88 per hour (time and a half), for a total of $12,982.32. On Sunday, January 21, 2001, the techs worked 78.5 hours, at $151.84 (double time), for a total of $11,919.44.

80. PAGES 58-59. The GAO has failed in its discussion of obscene and inappropriate voicemail greetings to include important information – information needed to promote “an adequate and correct understanding of the matters reported.” Government Auditing Standard 7.51. The GAO fails to explain, for example, that the “[t]wo EOP employees” who heard the obscene voicemail messages were the White House Director of Telephone Services and the OA’s Associate Director for Facilities Management, who together began touring offices and checking phones in the EEOB at approximately 1 a.m. on January 20. The first of these individuals estimated that he listened to “roughly 30 greetings,” approximately 10 of which (or one-third) were “inappropriate.” Of the 10 inappropriate messages, “approximately 5 or 6” (or roughly half) “were vulgar.” (He also said that the White House telephone operators notified him that there were “obscene messages” on some of the voice-mail greetings.) This employee told the GAO that, after encountering this high ratio of inappropriate and vulgar messages, and because of these messages, a decision was made to take the entire system down. He also explained that he erased some messages around 1 a.m. on January 20, and they were re-recorded later that day. These are, in our view, important facts regarding the extent of the problem and the consequences thereof – namely, no one had voice-mail service for the first days and weeks of the Administration.

81. PAGES 60-61. The GAO’s section on the “costs” associated with telephone problems is both inaccurate and incomplete. Based on extremely conservative estimates and straightforward documentation, the government incurred at least $6,020 just replacing removed labels and rerouting the forwarded telephones. The evidence shows: First, the GAO received, but fails to mention, a blanket work order and bill for work – including “relabeling” work – performed on Saturday, January 20, 2001. The techs billed 114 hours at a rate of $113.88 per hour for each hour or fraction of an hour spent on a particular job.
Consequently, if technicians spent only ten percent of their time relabeling phones and correcting forwarded telephones on Saturday (a conservative estimate given that there were between 112 and 133 specifically identified missing labels and roughly 100 forwarded phones), that means it cost the taxpayer $1,298 for one day’s work replacing the removed labels and fixing the forwarded phones.

Second, and similarly, the GAO acknowledges that it received a work order and bill for work – including “replacing labels on telephones” – performed on Sunday, January 21, 2001. But the GAO fails to estimate any costs associated with that work. The bill shows that the techs worked 78.5 hours that day at a rate of $151.84 per hour for each hour or fraction of an hour spent on a particular job. That means that, if technicians again spent only ten percent of their time relabeling phones and correcting forwarded telephones, the taxpayer incurred an additional cost of $1,192 for that day’s work replacing the removed labels and fixing the forwarded phones.

Third, the GAO fails to estimate the costs associated with replacing labels even where it was provided both individual work orders and a summary of orders that specifically identify the relabeling work performed and the amount of time spent on the job. Specifically, we provided the GAO with a document entitled “Orders Closed 1/20/01 Thru 2/20/01” that lists many orders (some of which are highlighted above) where a tech was asked to place one or more labels on telephone sets. For each of those orders, a “T&M” charge (time and materials) is identified in terms of hours and minutes. Those charges can be computed in dollars by multiplying the total number of hours of T&M charged times $75.92. We do not understand why the GAO failed to perform this simple exercise, particularly given its willingness to provide cost estimates in the context of missing and damaged W keys. Had the GAO done the calculation, the reader would know that approximately $2,201.68 was spent to replace labels on telephone sets, as set forth below:

On Monday, January 22, 2001, a telephone tech was asked by the OVP to “PROGRM PHNS PER MATT, NEED BTN LABELS, TECH TO LABEL SETS.” The tech billed “4HRS” (4 hours) on this order, for an estimated total cost of $303.68. TSR No. 01010183.

On January 31, 2001, a tech was called to Room 273 of the OVP because, among other things, the phones “NEED BTN LABELS TYPED, PLACED.” The tech billed “2HRS” on this order, for an estimated total cost of $151.84. TSR No. 01010386.

On February 5, 2001, a tech was called to Room 200 because the phones “NEED LABELS PLACED ON SETS.” The tech billed “2HRS” on this order, for an estimated total cost of $151.84. TSR No. 01020071.

On February 9, 2001, a tech was asked to “REPROGRAM IN ROOM 276 EEOB, PLACE BUTTON LABEL ON SET.” The tech billed “1HR” on this order, for an estimated total cost of $75.92. TSR No. 01020225.

On January 29, 2001, a tech was called to Room 18 to, among other things, “REPLACE LABEL.” The tech billed “1HR” to this order, for an estimated total cost of $75.92. TSR No. 01010306.

On January 30, 2001, a tech was called to Room 113 because the occupants “NEED LABEL PLACED ON SET BY TECH.” The tech billed “1HR” to this order, for an estimated total cost of $75.92. TSR No. 01010342.

On February 3, 2001, a tech was called to Room 100 to “PLACE BTN LABEL.” The tech billed “1HR,” for an estimated total cost of $75.92. TSR No. 01020154.
Also on February 3, 2001, a tech was called to Room 100 because the occupants “NEED BTN LABELS FOR SET.” The tech billed “1 HR,” for an estimated total cost of $75.92. TSR No. 01020156.

In six additional and separate service orders on February 3, 2001, a tech was asked to “REPROGRAM” phones in the Room 100 suite and “TO PLACE LABEL ON SET.” TSR No. 1020330; see also TSR Nos. 1020325 (“NEED LABELS PLACED ON SET”), 1020328 (“NEED BTN LABELS”), 1020329 (“NEED LABELS”), 1020331 (“NEED LABELS PLACED ON SET”), 1020340 (“NEED LABELS PLACED ON SET”). The tech billed “1HR” on each of the six service orders, for an estimated total cost of $455.52.

On February 5, 2001, a tech was told that the occupants of Room 135 “NEED LABEL PLACED ON SET.” The tech billed “1HR” for this order, for an estimated total cost of $75.92. TSR No. 01020075.

On February 3, 2001, a tech was asked to “REPROGRAM SET ROOM 137” and “PLACE LABEL ON SET.” The tech billed “2HRS,” for an estimated total cost of $151.84. TSR No. 01020099.

On February 3, 2001, someone in Room 131 asked a tech to “PLACE LABEL ON SET.” The tech billed “1HR,” for an estimated total cost of $75.92. TSR No. 01020055.

In a separate service request on February 3, 2001, a tech was asked to “REPROGRAM IN ROOM 137 EEOB” and “PLACE LABELS ON SET.” The tech billed “1HR,” for an estimated total cost of $75.92. TSR No. 01020168.

On February 3, 2001, a tech was told that the occupants of Room 154 “NEED BUTTON LABEL,” among other things. The tech billed “1HR” to this order, for an estimated total cost of $75.92. TSR No. 01020327.

On February 5, 2001, a tech was told that “LABELS ALSO NEEDED” in a Presidential Personnel Office. The tech billed “1HR” for this order, for an estimated total cost of $75.92. TSR No. 01020360.

On February 3, 2001, a tech was asked to “REPROGRAM IN RM 131” and “PLACE LABEL ON SET.” The tech billed “1HR,” for an estimated total cost of $75.92. TSR No. 01020363.

On February 2, 2001, a tech was asked to “REPROGRAM IN ROOM 184 EEOB” and “PLACE LABEL ON SET.” The tech billed “1HR,” for an estimated total cost of $75.92. TSR No. 01020132.

On February 8, 2001, a tech was told that the occupants of Room 87 “NEED LABELS PLACED ON SET.” The tech billed “1HR” on this order, for an estimated total cost of $75.92. TSR No. 01020160.

Fourth, and even more perplexing, the GAO ignores the AT&T invoices (“Activity Reports”) and individual work orders (TSRs) that we provided that show the actual charges incurred on particular orders. We have not attempted in preparing these comments to review all such invoices, but a sampling shows $1,328.60 in charges in addition to those listed above:

TSR No. 01010184 (request to “program phones” and “place labels on sets” in Rooms 272, 274, 284, and 286): $341.64.

TSR No. 01010185 (request to program phones and place labels on sets in Rooms 272 and 276): $341.64.

TSR No. 01010195 (request for, among other things, labels for sets in Rooms 263, 265, 266, 267, 268, 269, and 271): $341.64.

TSR No. 01010206 (request for, among other things, “tech to place button labels”): $303.68.

Fifth, the GAO also can and should estimate, based on this data, how much it would cost to replace labels on 112-133 telephones (or, at least, on the 99 to 108 that the GAO concedes were observed missing) by estimating how much was charged per telephone and extrapolating that amount to account for the total number of missing labels.
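The arithmetic underlying these figures can be verified directly from the rates and hours in the records cited above. The following minimal sketch (expressed in Python purely for the reader’s convenience; the rates, hours, and counts are drawn from the documents quoted above, while the assumption in the final step that roughly one billable hour corresponds to each affected set is ours, offered only for illustration) reproduces the calculations:

    # A sketch of the cost arithmetic set out in this comment. Rates and
    # hours come from the billing records discussed above; the per-set
    # extrapolation at the end is our illustrative assumption only.
    WEEKDAY_RATE = 75.92    # minimum T&M charge per hour (or fraction thereof)
    SATURDAY_RATE = 113.88  # time and a half
    SUNDAY_RATE = 151.84    # double time

    # Blanket Inaugural-weekend work orders (hours from TSR No. 01010382).
    saturday_total = 114 * SATURDAY_RATE   # $12,982.32
    sunday_total = 78.5 * SUNDAY_RATE      # $11,919.44

    # Conservative assumption: ten percent of weekend time was spent
    # relabeling phones and correcting forwarded lines.
    saturday_relabel = 0.10 * saturday_total   # approx. $1,298
    sunday_relabel = 0.10 * sunday_total       # approx. $1,192

    # Closed order list: the label-related orders itemized above total
    # 29 hours of T&M (4+2+2+1+1+1+1 + 1+6+1+2+1+1+1+1+1+1+1).
    closed_order_total = 29 * WEEKDAY_RATE     # $2,201.68

    # AT&T invoice sampling quoted above.
    invoice_sampling = 3 * 341.64 + 303.68     # $1,328.60

    # Sum of the conservative figures above -- the "at least $6,020" cited.
    total = saturday_relabel + sunday_relabel + closed_order_total + invoice_sampling
    print(round(total, 2))  # 6020.46

    # Extrapolation (our assumption, not a figure from the records): at
    # roughly one billable hour per affected set, relabeling the 112 to
    # 133 phones identified with missing labels would cost:
    print(112 * WEEKDAY_RATE, 133 * WEEKDAY_RATE)  # 8503.04 10097.36

Even on these conservative assumptions, the documented telephone-related charges alone reach the $6,020 figure cited above, before any extrapolation to the full number of missing labels.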
Sixth, the GAO suggests that it is unable to provide any estimate on the costs to repair the damaged phones because “the extent to which the service order[s] that mentioned labels involved missing labels was not clear and all of the service order[s] involving labels were part of order[s] for other service[s].” That is incorrect. As we explained to the GAO, when a System Analyst (SA) performs work that does not require a technician to be dispatched to the office (e.g., reprogramming a phone), there is no separate charge. If work requires a tech dispatch (e.g., replacing a label), then there is a minimum charge of $75.92 for each hour or portion of an hour ($113.88 on Saturdays and $151.84 on Sundays), even if it takes only minutes to perform the work. Therefore, for service orders that requested, for example, both a telephone to be reprogrammed and its label to be replaced, the entire charge is attributable to replacing the label. This is clear from the AT&T billing invoices (or “Activity Reports”) that show that the cost associated with the work orders is for “LABOR CHARGES FOR EQUIP. MOVES/CHGS,” and not for reprogramming expenses. In addition, for the service orders where the minimum charge of $75.92 was assessed, it is immaterial whether work in addition to replacing the label was performed; a charge of $75.92 would have been incurred for replacing the label(s) regardless of whether other work was performed within that first hour. Finally, the closed order list and the service orders do far more than “mention labels.” See Specific Comment No. 79.

82. PAGE 62 n.42. A footnote reads: “The director of GSA’s White House service center said that there were ‘any number’ of reasons why problems could have been observed with telephone and computer wires besides having people cut them deliberately. He said, for example, that the cleaning staff could have hit the wires with the vacuum cleaners or computer staff could have been working with the wires.” This statement would be relevant only if the cut and pulled wires were observed after the cleaning staff and the computer staff had entered the offices. But, in this case, the two staff members who reported the cords pulled from the walls observed the damage during the early morning hours of January 20, before any cleaning staff had entered the rooms and before the computer staff entered the rooms to archive computer data. Unfortunately, the readers of the GAO’s report would not know this important fact – and therefore may have been misled by the GAO’s footnote – because the GAO fails to include that detail in its report.

83. PAGE 64. The GAO reports that “[the former senior adviser] also said that it would have been technically possible to erase voice mail greetings for most departing EOP staff without also deleting greetings for staff who did not leave at the end of the administration.” We believe that, to present a fair and balanced report, the GAO must explain here that two current OA staff members – both of whom served during the Clinton Administration – disagree with the former senior adviser. One of the OA staff members, who has worked at the White House since 1971 and who worked closely with the former senior adviser and the transition team, told the GAO that a proposal to delete all voicemail greetings at the end of the Clinton Administration “was discussed,” but they had decided not to do it “because it would have erased the greetings of all staff members,” including the roughly 1,700 staff members who were not vacating the building.
This OA employee further explained that it was his “‘call’ not to go ahead with the proposal,” although the staff, which included the former senior adviser, was “aware of the decision.” OA’s Telephone Service Coordinator likewise told the GAO that, until November 2001, the EOP’s phone system did not have the capability to erase voicemails en masse; she explained that it was not until November 2001 that the EOP both had purchased the software and had performed upgrades to the switch that were necessary to allow voicemails to be deleted on other than a manual basis.

84. PAGE 64. The GAO continues with the former senior adviser’s comments: “This former official also said that some telephones were forwarded to other numbers for business purposes at the end of the Clinton administration. He said, for example, that some of the remaining staff forwarded their calls to locations where they could be reached when no one was available to handle their calls at the former offices.” This explanation may sound plausible until you learn how and where the phones were forwarded. The Chief of Staff’s telephone, for example, was forwarded to a closet. There could hardly be a legitimate “business purpose” for that. Yet, because the GAO has not provided the reader with details, like this one, about the current staff’s observations, the reader does not have the facts to judge for herself the credibility of the former staff’s explanations. These omissions, in our view, result in a report that is woefully incomplete, and, as a consequence, a report that is arguably misleading and lacking in objectivity. See Government Auditing Standard 7.57 (“Objectivity requires that the presentation of the entire report be balanced in content and tone. A report’s credibility is significantly enhanced when it presents evidence in an unbiased manner so that readers can be persuaded by the facts.”).

85. PAGE 65. The heading of the next section of the report reads “Fax Machines,” even though the GAO discusses in that section damaged and tampered-with fax machines, printers, and copiers. We believe that the heading should be revised to accurately reflect the content of the section.

86. PAGE 65. The GAO is mistaken when it reports “one EOP official told us that he had seen 12 fax machines with the telephone lines switched and another fax machine that was disconnected.” Our notes show that two employees told the GAO that they had observed fax machines that were “switched.” An employee of the OA with over 30 years’ service in the White House told the GAO that he saw “at least a dozen switched fax lines,” and a different employee (who has almost 30 years’ service) said that he too saw “faxes switched between offices.” Thus, the GAO’s sentence should read: “One OA employee and one White House employee told us that, during the night of January 19, they saw at least 12 to 14 switched fax lines.”

87. PAGE 65. The GAO reports on observations that “5 copy machines, printers, and copiers . . . did not work.” But the GAO fails to include the details that show that it was not simply a case of an innocently broken machine. For instance, one individual told the GAO that the repairman who fixed the broken copy machine found a pornographic or inappropriate message when he pulled out the copier’s paper drawer and that the repairman told the individual that he thought the paper drawers had been “intentionally realigned” so that the paper supply would jam.

88. PAGES 65 and 66.
The GAO states that “[t]wo EOP staff said they observed fax machines moved to areas where they did not appear to belong.” This is another example where we think that the GAO should simply report what the staff member said – and not recharacterize it. One employee said that she saw some fax machines sitting in the middle of the floor, unplugged. In our opinion, unplugged fax machines do not “belong” in the middle of the floor, and thus the GAO’s characterization that the fax machines were moved to areas “where they did not appear to belong” is overly charitable. Moreover, even if the GAO disagrees and believes that a fax machine could belong in the middle of the floor, that is a judgment that the reader should be allowed to make. More important, by recharacterizing the observation, the GAO deprives the reader of the facts that he or she needs to judge the relevance and credibility of the comments made by former staff members. On page 66, the GAO reports that “[t]he former director of an office where fax machines were moved to areas other than where they had been installed said that a fax machine may have been pulled around a corner, but it was not done as a prank.” But this explanation does not answer the charge: that multiple fax machines were placed in the middle of the floor, unplugged. Unfortunately, the reader would not know that because the GAO fails to provide the details needed to have a complete and accurate understanding of the matters reported.

89. PAGE 65. The GAO fails to mention in its discussion of fax machines that an employee told the GAO that all printers and fax machines that she observed had been emptied of paper.

90. PAGE 67. The heading of the next section is “trash,” which the GAO apparently equates with the statement on the June 2001 list that the “offices were left in a state of general trashing.” As noted above, in today’s parlance, saying an office was “generally trashed” is not the same as saying it had “trash” in it. See General Comment No. 3. The existence of trash in offices was not, in our view, the problem; the problem was that many offices were trashed – and, as the observers told the GAO, it appeared that they were deliberately left in that condition. The GAO therefore should, in our view, revise its heading to read “Trashing of Offices.”

91. PAGE 67. The GAO reports that “[t]wenty-two EOP staff and 1 GSA employee told us that they saw offices that were messy, disheveled, or dirty or contained trash and personal belongings left behind in specific offices or rooms.” With all due respect, it is a gross understatement to say that the GAO was told that the “offices . . . were messy, disheveled, or dirty.” We asked the GAO to accurately report what it was told, and not to recharacterize it.
Had the GAO done so in this case, the reader would have learned about the following observations, among others (not including observations of damaged and overturned furniture or signs):

Plant dumped in the middle of the floor *

Two pencil sharpeners thrown against wall: in Room 100, mark on wall where hit, shavings on floor, and broken sharpener lay on ground; in Room 102, shavings on floor and broken sharpener lay on ground (observed by two persons)

Files and papers everywhere on the floor – not just overflowing trash

“Trash everywhere”

Office was “filthy”; had to replace all furniture except one table and

Very dirty; “more than wear and tear”

“Lots of trash”; small pieces of office equipment stacked one on top

“Lots” of beer bottles and beer cans

Offices “trashed out,” even after GSA had been through once; “sizeable” holes in the walls

Beer cans thrown on top of 10-foot high filing cabinets and stuffed animal and a shoe lodged in the rafters

Contents of large file cabinet units (measuring approx. 10’ x 6’ x 10’) appeared to have been dumped on floor

“Extremely filthy” **

Lots of trash on the floors, food in desk drawers, pizza boxes in corner office, desks moved against doors

“Trashed”; supplies dumped on floor; “looked like people threw everything”

Soil spread across carpet

Looked like office was “deliberately made to look like someone was communicating a message”; things in the desk dumped on top of desks; lamps were on chairs; pictures stacked on floor so you could not enter the room; etc.; “looked like when someone trashes a dorm”

“Clutter and mess over and beyond what you’d expect”; “would not have expected this under ordinary circumstances” ****

In 25% of the spaces vacated in NSC (30-40 rooms), saw “something that didn’t expect.” E.g., someone had spread holes from a hole punch all over the floor; a desk lamp was placed on a chair in the middle of the office; “papers strewn everywhere” *****

Trash was “dumped everywhere”; pictures were pulled off the walls; “most of the rooms were trashed” and “filthy” °

Binders thrown everywhere and piles of paper

“Very unclean; trash strewn about; refrigerators full of mold.” °°

“Tons and tons of trash”; binders piled over a copier; old food boxes

“Trash was everywhere”; “filth”; food and trash in desks – pizza, sandwiches, tuna fish, chips

Offices were “trashed”; supplies and garbage all over; drawers open and on the floors °°°

Lots of beer and wine bottles ***

Looked like there were a “large number of people who deliberately trashed the place”

“Amount of trash was beyond the norm” for transitions °°°°

Empty wine and beer bottles

* Employee of the President’s Foreign Intelligence Advisory Board; worked here during the Clinton Administration
** GSA employee; worked here during the Clinton Administration
*** GSA employee; worked here during the Clinton Administration
**** NSC employee; worked here during the Clinton Administration
***** NSC employee; worked here during the Clinton Administration
° OA employee; worked here during the Clinton Administration
°° OA employee; worked here during the Clinton Administration
°°° OA employee; worked here during the Clinton Administration
°°°° OA employee; worked here during the Clinton Administration

92. PAGES 67-68. The GAO’s list of facility request forms that document the condition of the offices is incomplete. The documents that were provided include:

A January 30, 2001, facility request form shows that Cabinet Affairs asked for someone to clean the carpet, furniture, and drapes in Rooms 160, 162, and 164.
GSA charged $2,905.70 for that service. Facility Request No. 56713.

A January 30, 2001, facility request form shows that an employee asked for the following services in the Advance suite: “Walls/moldings need patching and paint. . . . 1 – Need carpet vacuumed – is awful! 2 – Furniture cleaned and drawers need vacuuming out. 3 – Drapery needs cleaning or replacement.” Facility Request No. 56990.

A January 25, 2001, facility request form shows that an employee asked that GSA clean the carpet, furniture, and drapes in Room 160A. Facility Request No. 56662.

A February 17, 2001, facility request form shows that an employee asked for a “prof cleaning” in Rooms 154, 156, 157, 159, and 160½ (or 160A). For that service, GSA charged $1,150.00. Facility Request No. 58355.

A February 21, 2001, facility request form shows a request to clean the carpet in the former First Lady’s suite (Rooms 100-104). Facility Request No. 58369.

93. PAGE 70. Although the GAO reports that “[t]he OA director said that the offices were in ‘pretty good shape’ by the evening of January 22,” the GAO has refused, despite our request, to include others’ observations on how long it took to get the offices in shape. Had the GAO done so, the reader would learn:

The GAO asked the Director of White House Telephone Services when things were corrected, and was told that most things were cleaned up within 2 weeks, but “all the mess” was “not squared away until February.”

In response to the GAO’s question regarding how long it took to get problems fixed, the on-site manager for AT&T explained that the problems “lasted at least a month.”

When the GAO asked an OA staff member with over 30 years’ experience at the White House when the place was “cleaned up,” he responded that “just the cleaning” was done “3 to 5 days” after January 20th.

When the GAO asked an employee how long it took to get the phones operational, she answered “[a]bout a week and a half. Three or 4 days to get people a working phone. To get people phone numbers took a week and a half.”

An employee told the GAO it took approximately “3 weeks” before things were “back to standard.”

94. PAGE 70. The GAO states that “The OA associate director for facilities management said that about 20 offices were vacant before January 20. He said that it took 3 to 4 days after January 20 to complete the cleaning.” That is not what this individual said. He said that there was “some list of offices that could have been cleaned before the [20]th,” and the list was given to a GSA manager. He further explained that there were “not a lot of offices on the list” – “maybe 20.” He also said that it took “3 to 5 days” to complete “just the cleaning.”

95. PAGE 70. The GAO also misquotes the same individual when it writes: “This official said that he saw a limited amount of trash that appeared to have been left intentionally.” The GAO asked this individual, “Was there intentional trashing?” And he responded yes, a “limited amount.” Therefore, the GAO has again mistakenly equated “trash” that was left behind with the “trashing” of offices.

96. PAGE 70.
We believe that the GAO has again misquoted this individual when it reports that “[h]e also said that it would have taken an ‘astronomical’ amount of resources to have cleaned all of the offices by Monday, January 22.” Rather, he said that they “could not have had enough people to clean it by the 22nd because [it was] dirtier than past transitions.” Indeed, when the GAO asked him expressly, “Is it legitimate to think people could start working on Sunday,” January 21, he replied, “yes, in my opinion, people should leave their offices in an orderly fashion.” He explained that it was “realistic” to expect offices to be cleaned by Monday night, January 22.

97. PAGES 70-71. Again the GAO improperly redefines the observations to simply a discussion of excessive “trash.” But the observations were not so limited. The GAO reports that this employee “said that what he observed was probably a combination of some trash having been dumped intentionally and an accumulation built up over the years.” We believe this employee’s statement was far more direct and covered more than just “trash.” The GAO asked whether the condition of the offices – which included, among other things, “filth” and trash – was “intentional or neglect,” and the employee responded, “a combination.”

98. PAGES 71 and 72. In addition, the GAO should add similar statements by an employee who has worked at the White House since 1998, a second employee who has observed five prior transitions, a third employee (a Bush Administration official), and others who likewise told the GAO that it appeared that the offices were “intentionally” or “deliberately” trashed. The first of these individuals said that the NSC office was “deliberately made to look like someone was communicating a message.” The second said that it looked like there were a “large number of people who deliberately trashed the place.” And the Bush Administration official said the conditions he observed were “more than wear and tear.” The fact that many observers concluded that the acts were intentional is important, because, if many people reached the same conclusion, it is more likely that the conclusion was correct and a reader will perceive the conclusion to be correct. In addition, since the GAO reports on page 72 that “none of the 67 former Clinton Administration staff we interviewed who worked in the White House complex at [the] end of the administration said that trash was left behind intentionally as a prank or act of vandalism,” it is only appropriate that the GAO also report that many current staff members – including staff who worked for the Clinton Administration – believe otherwise.

99. PAGE 71. The GAO’s discussion of the costs associated with cleaning the “trashed” offices is incomplete. The GAO fails to mention the January 30, 2001, facility request form (No. 56713), which shows that Cabinet Affairs asked for someone to clean the carpet, furniture, and drapes in Rooms 160, 162, and 164. GSA charged $2,905.70 for that service. As the GAO acknowledged earlier in its report (at page 12), this request involved an office that a White House Office employee said was “filthy” and had worn and dirty furniture. That same employee, as well as others from her office, also told the GAO about significant damage to furniture in those offices, including a desk with its drawer-fronts removed, chairs without legs, and a chair with its entire back broken off.
The GAO could – but did not – determine how much time and money was actually spent paying the cleaning staff and how much time and money should reasonably have been spent (based on the amounts spent during past transitions or estimates provided by administrative staff). The difference in those amounts would provide a rough estimate of the costs attributable to the poor condition of the offices. We already know that the costs exceeded what was expected because the OA manager responsible for facilities management told the GAO that there was “lots of money that was spent that shouldn’t have to be spent.” Nor did the GAO include in its estimate of costs all of the facility request forms that show that the new staff had to request that carpets, furniture, and draperies be cleaned. While in some cases the GSA pays for the costs associated with such cleaning (and hence no dollar amount appears on the form), actual costs exist and presumably could be estimated. If the GAO is unwilling to estimate these costs, we believe that it should at least say that additional costs exist, and that the GAO did not attempt to quantify them. And again, the problem was far more than simply “excessive trash that needed to be discarded,” as the GAO reports.

100. PAGE 72. Although the GAO is willing to report that “[f]ormer Clinton administration staff generally said the amount of trash that was observed during the transition was what could be expected when staff move out of office space after 8 years,” the GAO fails to mention that one employee, who also served during the Clinton Administration, told the GAO that what she observed “was way beyond what you’d expect to see in a large move”; that she was “surprised” and “embarrassed” by the condition of the offices on Inaugural weekend; and that she knew that the same offices were in pretty good shape during the weeks and months before the transition.

101. PAGE 72. The GAO states that “[o]ne former employee who worked in an administrative office said that she did not observe much cleaning of offices before January 20, and she believed that GSA did not have enough supervisors and decision makers to oversee the cleaning.” We previously told the GAO that, if the report was going to include this comment, it should also state (either here or elsewhere in the report) how many cleaning staff were on duty and the hours they worked. Without that information, we believe the reader has no basis for evaluating the comments made by the former staff. In a letter sent to us in January 2002, the former director of the Office of Management and Administration and the former senior advisor for presidential transition said that, for months before the transition, they had been assured that additional cleaning crews would be detailed to the White House complex to assist GSA cleaning crews during the final week of the administration. However, the former officials said that they did not observe any cleaning crews during the evening of January 19 or the morning of January 20. Again, we believe that if the GAO is going to include this criticism of the cleaning staff, it must also provide the reader with an estimate – based on the GAO’s review of GSA’s work and payroll records (records that the GAO already has) – of the number of cleaning staff and contractors who worked that weekend and the number of hours worked. Otherwise, the reader has no means of evaluating the comment – either its credibility or its relevance.

104. PAGE 73.
The GAO reports that “[t]he office manager for the office where an EOP employee told us that it appeared that a pencil sharpener was thrown against the wall and that pencil shavings were on the floor said the sharpener in that office did not work and may have been placed on the floor with other items to be removed.” The employee told the GAO that two pencil sharpeners were found broken and on the floor along with shavings. In addition, with respect to one of the two sharpeners, there was a distinct mark where the pencil sharpener struck the wall. The comment of the former office manager thus does not rebut the employee’s observations.

Six EOP staff reported observing writing on the wall of a stall in a men’s restroom that was derogatory to President Bush. In addition, two EOP staff and one GSA employee said that they observed messages written on an office wall. Two of those three employees said that the writing was on a writing board that could be erased. Two other White House employees said that they saw pen and pencil marks on walls, but no written words. The graffiti in the men’s restroom was vulgar, in addition to being derogatory to the President. It said, “What W did to democracy, you are about to do in here.” It was an act that was plainly intentional and, given its content, the GAO could reasonably conclude that it was written shortly before the transition. The writing on the wall in the Scheduling Office, while not profane in nature, said something like “Republicans, don’t get comfortable, we’ll be back,” thus again indicating that it was written shortly before the transition and by a member of the outgoing staff. One of the three observers who saw the room shortly after noon on January 20 told the GAO that he was certain that the writing was directly on the wall. The GAO’s final sentence – that “[t]wo other White House employees said that they saw pen and pencil marks on walls, but no written words” – does not, in our view, adequately describe what the GAO was told. These were not observations of a stray pen mark, as the sentence suggests. Rather, one White House employee said that an entire wall in one office was covered in lines that appeared at a distance to be cracks. That observation was confirmed by an OA employee, who said that she too had heard that someone had etched a wall like marble. A second White House employee said that a wall in or near Room 158 was covered in pencil and pen marks, which she described as “slasher marks” and “beyond normal” wear and tear.

106. PAGES 75-76. We believe that the GAO has downplayed the number of the signs, the number of locations where they were observed, and their content. While in some cases such signs are easily removed and, in a few cases, were probably meant as a joke, we believe the GAO should describe the signs more fully and with greater detail for at least three reasons. First, the number, tone, and location of the signs may indicate the mindset of certain former staff members in offices where other damage was found. Second, these details allow the reader to compare the 2001 transition and prior transitions. Notably, the GAO has included considerable detail about the number and content of signs found by former members of the Clinton Administration during the 1993 transition. Yet the same level of detail is lacking when the GAO discusses the 2001 transition.
Third, and similarly, if the report is going to include a former staff member’s comment that the signs were “harmless” (Report at 76) or not “obscene” (Report at 75), we believe that the GAO should provide the signs’ contents, or how the observer described the signs (e.g., “vulgar”), so that the reader can decide whether the characterizations are accurate. We also believe that stickers that were permanently affixed to government property (copiers and cabinets) are not the same as prank signs or messages that were simply taped on a wall or placed in copy machines and printers. Yet the GAO treats these things as equivalent. The tables below detail the number, location, and content of some of the signs that were observed.

SIGNS AFFIXED TO FURNITURE AND OTHER GOVERNMENT PROPERTY

Sticker affixed to filing cabinet that reads “jail to the thief”; shown to

Key broken off in file cabinet with Gore bumper sticker with the words “Bush Sucks” stuck to the inside of the cabinet (observed by two persons)

Gore bumper sticker stuck to the bottom of paper tray in the copier

(not including signs affixed to property)

“Vulgar words” on white board **

Sign comparing President Bush to a chimpanzee found “in a number of printers”; “laced” throughout the reams of paper ***

Three copies of the same sign taped to wall (observed by two persons) **, ****

15-20 copies of the same sign laced throughout ream of paper in fax machine and copier (observed by two persons)

In location where people “dumped” supplies, a sign read “Gifts for the New President” (Head Telephone Operator *****)

Sign taped to a desk of a mock MasterCard ad that includes a picture of President Bush and reads, “NEW BONG: $50, COCAINE HABIT: $300, FINDING OUT THAT THE GOOD-OLD-BOY NETWORK CAN STILL RIG AN ELECTION IN THE DEEP SOUTH: PRICELESS. For the rest of us there’s honesty.” The GAO was provided with a copy of this sign.

T-shirt with tongue sticking out draped over chair **

Sign that read “just laugh” taped to the wall

“Inappropriate” message in printer or fax tray

“Quite a few signs”

Picture of former First Lady taped to cabinet

Photo in safe that had the word “chad” spelled out in paper punch holes (observed by two persons)

Notes in the desk drawers

Sign addressed to and disparaging of “Bush staffer” on wall

Sign of a mock Time magazine cover that read “WE’RE ******” on wall (observed by five persons)

Pictures of President Clinton and notes about President Bush “were

Signs inserted into office nameplates, including signs outside of the former First Lady’s Office (Rooms 100-104), the OMB, and the Office of Faith-Based and Community Initiatives (observed by four persons) °°, °°°, ++

* GSA employee, worked in the White House complex during Clinton Administration
** OA employee, worked in the White House complex during Clinton Administration
*** OA employee, worked in the White House complex during Clinton Administration
**** OA employee, worked in the White House complex during Clinton Administration
***** OA employee, worked in the White House complex during Clinton Administration
° OA employee, worked in the White House complex during Clinton Administration
°° OA employee, worked in the White House complex during Clinton Administration
°°° OA employee, worked in the White House complex during Clinton Administration
+ GSA employee, worked in the White House complex during Clinton Administration
++ GSA employee, worked in the White House complex during Clinton Administration

107. PAGE 77.
It is not accurate, in our view, for the GAO to say that the statement that trucks were needed to recover new and usable supplies “generally was not corroborated.” OA’s Associate Director for the General Services Division told the GAO that, because the excess supplies had been “dumped” in the basement hall and were piling up down there – leaving “much of it unusable” – he instructed his staff to take the supplies to the off-site warehouse where the staff could re-sort the supplies and salvage what was still reusable. As the GAO itself reports, eight truckloads were needed to recover these new and usable supplies from the basement. Had these trucks not been dispatched, all of the supplies (instead of just a portion) would have been rendered unusable. Thus the statement in the June 2001 list was “corroborated.”

108. PAGE 78. Two employees (not one) told the GAO that they had found classified materials left unsecured in multiple locations. An employee with more than 30 years of service in the White House complex told the GAO that he found classified materials in an unlocked safe during the night of January 19, when he toured the offices. In addition, a GSA employee said she found “classified information” in “quite a few rooms.” It is understandable if the Director of Records Management did not find these documents himself, since he toured offices looking for documents for less than two-and-one-half hours before his attention was diverted to the West Wing at approximately 2:30 a.m. on January 20. Also, as the GAO notes, a White House employee reported that he found a selection of sensitive documents, including some pardon-related materials and some fundraising materials, in the Counsel’s Office in the EEOB. It is not surprising that the Director of Records Management did not find these documents since the occupants of the Counsel’s Office did not depart their offices until long after he stopped checking rooms in the EEOB at approximately 2:30 a.m.

109. PAGE 80. Appendix II addresses the condition of the White House complex during previous presidential transitions and compares that to the 2001 transition, where the GAO states that an “EOP employee showed us writing inside a desk that was dated January 1993.” The writing in the desk is neither profane nor disparaging of the incoming President or his administration. It reads: “Take care of this place. We will be back in four (4) years! (1/93).”

110. PAGE 81. The GAO has included only some of the statements made by current staff members about past transitions. The GAO, for instance, fails to mention that several employees, including longtime staff members, said that the 2001 transition was “worse” (and not only with respect to the amount of trash) than what they had seen during past transitions. Omitted statements include the following:

After an individual employed at the White House since 1973 described problems found with the phones, the GAO asked, “Is this sort of thing unusual?” This employee responded yes, “this was unusual”; “every administration has pranks,” but this was “worse.”

When the GAO asked the same individual whether it looked like the prior occupants had “purposely trashed the place,” he replied that it was “not sloppiness, it looked like one big party” had been there and that he “never remembers seeing anything like this before.”

The same employee told the GAO explicitly that the offices “shined” during the Reagan Administration and that, when President George H.W.
Bush left office, “[he] never encountered any problems with telephones”; perhaps “unplugging of phones, but that was it.”

An individual who observed the transitions from Nixon to Ford, Ford to Carter, Reagan to Bush, Bush to Clinton, and Clinton to Bush said that he had “never seen anything like it” and had “never seen this building in such bad condition.”

Another individual, an OA employee for roughly 17 years, said that the trash was worse this time than in prior transitions; in addition, he told the GAO that the condition in which the building was left “was a bit juvenile” and suggested the prior occupants were “not cognizant of responsibilities of people coming behind [them].”

A GSA manager told the GAO that there were “far more” personal belongings left behind during the 2001 transition than during the 1989 transition.

In addition to telling the GAO that the offices were “dirtier than past transitions,” an OA employee with more than 30 years of service said that the amount of trash “was beyond the norm.”

A Bush Administration official, who was in charge of the transition out of government in 1992, told the GAO that he personally took a tour of four floors of the OEOB and West Wing on January 20, 1993, and he saw “nothing comparable” to what he saw during this transition. He twice told the GAO that the damage during this transition was “more than [he]’d seen in other transitions.”

An OA employee who has worked in the complex for 23 years and observed problems during the 2001 transition told the GAO that she “didn’t notice anything at all” during the Bush-to-Clinton transition; nor did she recall anything when the Carter Administration left office.

The OA associate director for facilities management said that every transition had had a problem with missing historic doorknobs. Similarly, the director of GSA’s White House service center said that doorknobs are favorite souvenirs of departing staff. The telephone service director said that telephone cords were unplugged and office signs were missing in previous transitions and that unplugging telephones is a “standard prank.” The GAO fails to mention that the GSA director has observed only two transitions – the 2001 transition and the 1989 transition. He said that he had only heard that doorknobs went missing during the 1989 transition; he did not observe anything himself. The Director of White House Telephone Services did not say that office signs were missing in previous transitions. He recalled that occurring in one prior transition. He recalled that, when the Carter Administration left office, “door signs were missing and cords unplugged.”

112. PAGE 82. The GAO states that “[t]he director of GSA’s White House service center during the 2001 transition said that the condition of the office space during the 2001 transition was the same as what he observed during the 1989 transition.” But the GSA employee observed little in the way of pranks, damage, or vandalism during the 2001 transition; saying that he “saw much the same thing” during the 1989 transition means that he claims not to have observed much in either transition.

113. PAGE 82. The GAO’s reference to what the GSA Acting Administrator said in his March 2, 2001, letter may be misleading to the reader. The GSA’s letter references only “the condition of the real property” – and not the telephones, the computers, the furniture, the office signs, etc., which were the focus of the damage, vandalism, and pranks that occurred during the 2001 transition.

114. PAGE 83.
The GAO reports that “[s]even former employees . . . said that computers were not operational or were missing hard drives at the beginning of the Clinton administration. Two of those employees said it took 1 to 2 weeks for the computers to work.” The GAO was told that computers were not working and hard drives were missing because the prior Bush Administration was required to remove the hard drives in connection with a case captioned Armstrong v. Bush. The GAO obliquely refers to the case in footnote 64, but a reader will not understand the relevance without further explanation.

115. PAGE 83. The GAO reports that “[t]wo former employees said that telephones were piled on the floors or were disconnected. (One of those former employees said she was told that staff would receive new telephones.)” An employee with over 30 years of service told the GAO that, when the Clinton Administration came into office, he was instructed to “get rid of Republican phone system.” This would explain why the former employees found phones disconnected and were “told that staff would receive new telephones.”

116. PAGE 83. We again note the GAO’s willingness to include a characterization by a former staff member who says that damage “appeared to have been . . . intentional,” but the GAO omitted from its report similar statements made by members of the current staff. The White House telephone services coordinator told the GAO that the missing phone labels “must have been intentional.” An employee who has worked at the White House since 1998 told the GAO that the rooms he observed were “deliberately made to look like someone was communicating a message.” A former White House manager told the GAO that some of the conditions he saw looked “intentional.” An individual who has observed five prior transitions said the offices looked like a “[l]arge number of people . . . deliberately trashed the place.” A current employee told the GAO that the desk drawers were clearly damaged intentionally and not just wear and tear. An employee who worked at the White House from 1999-2001 told the GAO that “it was intentional, not accidental” with respect to the damage he observed in dozens of rooms. A Bush Administration official who has participated in a prior transition told the GAO that the conditions he observed were “more than wear and tear.” A current employee said that the broken key in the file cabinet looked “deliberate” to him. An OA employee responsible for facilities management said that some of the damage was the result of “intentional trashing.” An employee with over 30 years of service in the White House said it looked like the prior occupants had “purposely trashed the place.”

One former employee who started working at the White House in January 1993 and left in January 2001 said that the offices were messier in January 1993 compared with January 2001. Another former employee said that on January 20, 1993, his office contained leftover food and that the walls needed repainting. A third former employee said the offices were still not cleaned by the afternoon of January 21, 1993. Another former employee said that there were “dusty and dirty” typewriters on desks. Three former staff said they saw a total of at least six Bush bumper stickers in different offices, on cubicle walls, in a desk, on a telephone.
One former employee said she saw one to two photocopies of political cartoons left in a copy machine, a bottle of aspirin with a prank note inside a desk, a large banner on the balcony of the EEOB, and a tarp for a tent left behind. Again, we note that the same level of detail – for precisely the same sort of allegations – is lacking when the GAO describes observations made during the 2001 transition. By not including this information for the 2001 transition, the GAO has failed, in our view, to include all information needed to satisfy the audit objective to compare the 2001 transition with past transitions. See Government Auditing Standards 7.50 and 7.51 (“The report should be complete. . . . Being complete requires that the report contain all information to satisfy the audit objectives, promote an adequate and correct understanding of the matters reported, and meet the report content requirements. It also means including appropriate background information.”).

118. PAGES 84-85. The GAO was able to find only one news report that mentions the condition of the White House complex during previous transitions. The GAO claims that “the Washingtonian magazine indicated that incoming Reagan administration staff had some complaints about the condition of the EEOB that were similar to observations made by EOP staff in 2001.” The Reagan administration staff complaints were, according to the article, finding memoranda taped to the walls; lampshades torn by paperclips hung on them to hold messages; a refrigerator with thick mold; and a large coffee stain on a sofa outside the vice president’s office. These allegations are hardly “similar,” as the GAO maintains, to what was found in the 2001 transition. By analogizing the circumstances, the GAO trivializes what was observed in 2001.

Part III: Comments on Recommendations

119. PAGES 86-87. Although Appendix III is entitled “Steps to Help Prevent Damage to Government Property during Future Presidential Transitions,” the draft report does not actually contain any “steps” or recommendations in this section. It simply discusses the check-out process used during the Clinton Administration and the procedures followed on Capitol Hill when offices are vacated.

120. The GAO fails to include anywhere in its report two of the factors that OA officials, who have been through many transitions, identified as contributing to the problems found in the January 2001 transition. First, an employee who has worked at the White House for over 30 years told the GAO that he felt “hampered” in doing his job because he was “not allowed to have any contact with the incoming Administration.” He indicated that, in the past, he was allowed to confer with incoming staff regarding their telephone needs and expectations; but this was not permitted during the 2001 transition. Likewise, an employee who has observed five prior transitions told the GAO that this transition was unusual because, for other transitions, there was a transition team from the new Administration on-site in the complex. This time, the person said, the incoming administration did not get access to the space until three days before the Inauguration and did not get “legacy books” – books that explain how things work within the complex and within particular offices – until after the Inauguration. Second, a number of longtime employees told the GAO that problems could have been averted or remedied sooner if members of the Clinton Administration had vacated their offices earlier.
By way of example, one OA manager recalled seeing a woman simply watching television in her office; precisely at noon, she turned her TV off and left. Documents that we provided the GAO show that 325 passes of White House Office employees were terminated on January 19 and January 20, 2001. We believe that the points made by these employees are valid ones, and deserve to be addressed in the GAO report.

GAO’s responses to the White House’s specific comments follow. We have grouped the comments in the categories listed below.

The White House said that we had underreported the number of observations in various categories, including the signs and messages, computer keyboards, missing items, furniture, offices with trash, telephones, writing on walls, and classified documents.

In comment 8, the White House said it believed that we had substantially underreported the number of signs and messages observed in the letter portion of the report. However, as indicated in the results section, the letter portion of the report only contains observations made in specific locations, and additional observations that staff identified by floor or building, but not by room or office, are provided in appendix I. Moreover, we reported some observations of signs and messages differently from the White House. For example, we reported observations of writing in desks in the section regarding furniture-related problems. In addition, we reported two observations that the White House included in the category of signs and messages (observations of paper hole punches arranged on a floor to spell a word and a T-shirt draped over a chair with a picture of a tongue sticking out) in a different category relating to observations of trash and personal items left behind. We also added to our count two Gore stickers that staff told us were found in a file cabinet, which we had not included in our draft report. The White House also said in comments 9 and 106 that we should have reported the specific content of all of the signs and messages. We addressed these comments in the White House’s general comments about the amount of detail provided.

In comment 23, the White House said that writing was found on the walls of four rooms, rather than two rooms, as the report indicated. The statement concerning writing on the walls in the letter portion of our report summarized additional details provided in appendix I. Further, by “writing,” the report referred to observations of actual words written on walls. As explained in appendix I, other staff observed pen and pencil marks on the walls of two other rooms, but no words. For the purposes of clarification, we revised the statement to indicate that staff observed writing “(words)” on the walls of two offices.

The White House said in comments 68 and 87 that we failed to include the statement of an EOP employee who told us about statements made by a repairman who, while fixing a broken copy machine, said that he found a pornographic or inappropriate message when he pulled out the copier’s paper drawer. We did not include the repairman’s statement because we did not include information people relayed to us from third parties, which is generally not regarded as competent evidence.

The White House disagreed with the range of keyboards that were observed with missing or damaged “W” keys in comments 10 and 51. We previously explained how we calculated the range of observations in response to the White House’s general comment regarding the number of observations reported.
In comment 54, the White House also said that we did not report that the Office of Administration (OA) associate director for information systems and technology saw some glued-down space bars. Although we modified our report, we note that this official first told us that the problem was inoperable space bars and subsequently said it was glued-down space bars. In comment 55, the White House said that we underreported the number of “W” keys taped or glued to walls; that we failed to mention that other staff reported that they found “W” keys sitting next to keyboards and computers; and that an additional employee saw piles of keyboards or computers or a computer monitor overturned that we did not report. Our range of “W” keys taped or glued to walls differed from what the White House had indicated in its comments. Further, the White House counted at least two keys when people said they saw “some” keys taped or glued to walls, but did not specify a number. However, we did not estimate numbers in those cases and disclosed that in the report. We did not report the observations of “W” keys sitting next to keyboards or on computers because we believed that reporting the number of keys glued or taped to walls provided sufficient detail to support the observation of keyboards with missing or damaged keys. We revised the report to indicate that five, rather than four, employees observed piles of keyboards or computers or a computer monitor overturned. In comment 56, the White House said that we did not consider the statement of the OA branch chief for program management and strategic planning in the information systems and technology division. The White House pointed out that, on one of the branch chief’s last deliveries of broken items to the temporary workroom, someone had told her that the count of damaged keyboards was up to 150. We did consider her statement. Our report contained a statement attributed to the branch chief that 150 keyboards had to be replaced. The White House also said that, by contrast, we provided more details regarding the observations made by the OA associate director for information systems and technology, but had omitted the fact that this official said that she was not focused on the keyboards during the transition, but that she personally saw only about 10 keyboards with missing “W” keys, a concentration of keyboards with missing “W” keys in certain offices, and some keyboards with glued-down space bars, and that she was very upset at the condition in which some of the keyboards were left. In addition, the White House said that during our second interview with the OA associate director for information systems and technology, we had asked this official to estimate the number of keyboards with missing “W” keys without reminding her that we had asked her the same question during our first interview with her. To address the White House’s comments, we added to the report statements contained in our interview record with the OA associate director for information systems and technology indicating that she said that she was not focused on the keyboards during the transition, but that she saw about 10 keyboards with missing “W” keys, some with glued-down space bars, and a lot of keyboards that were “filthy.” We also added, on the basis of our interview record, that she believed that more of the keyboards with problems were found in the offices of the first lady and vice president than in other offices.
However, contrary to the White House’s assertion, our record regarding the follow-up interview with this official indicated that we did remind her about her earlier statement about the number of keyboards with missing “W” keys when we asked her that question again. As indicated in the report, we asked to conduct a follow-up interview with this official after obtaining an estimate from the branch chief for program management and strategic planning in the information systems and technology division that about 150 keyboards had to be replaced because of missing or damaged “W” keys. Also in comment 56, the White House said that we did not report what the OA associate director for information systems and technology said the contract employee who packed the keyboards told her regarding the number of damaged keyboards. However, we did not include observations people relayed to us from third parties. Further, the statements that the contract employee provided to us during an interview were included in the report. The White House also noted that we did not meet with the branch chief, but interviewed her by telephone; we made the appropriate change. Finally, the White House said that we had misquoted the OA associate director for information systems and technology when we indicated that she said that of the 100 keyboards that had to be replaced, about one-third to one-half would have been replaced anyway because of their age. The White House said that this official told us that one-third to one-half of the keyboards would have been replaced if they had not been changed out in 4 to 8 years. Although our interview records indicated that this official said that one-third to one-half of the keyboards would have been replaced anyway, they did not indicate that she also said “if they had not been changed out in 4 to 8 years,” as the White House indicated, so we did not change the report. In comment 57, the White House said that, regarding the 12 boxes of computer equipment that were discarded, we should have explained that the contract employee personally packed some of the boxes; and that for those, she filled the boxes with keyboards and then used excessed speakers, cords, and soundcards to fill in gaps and ensure that the keyboards would not shift in the box. We did not believe these details to be relevant. The White House said in comments 19 and 44 that 11 to 13 doorknobs were observed missing, compared to the 10 to 11 contained in the report. Our total differed from the White House’s because (1) the White House counted 0 to 2 missing doorknobs in its range when an Executive Office of the President (EOP) employee said a doorknob was missing in the Eisenhower Executive Office Building (EEOB) but did not specify any location (room, office, or floor); however, we did not include it to prevent possible double counting of missing doorknobs where specific locations were identified; and (2) the White House counted two missing doorknobs when an EOP employee said that a doorknob was missing on a certain floor of the EEOB, but did not identify the room. However, because we did not know whether a doorknob was missing on both sides of the door in that case, we used a range of one to two. (Although this employee did not specify the room or office where the doorknob was observed missing, we counted this because it was on a floor of the EEOB where no other doorknobs were observed missing.)
In comment 41, the White House noted that four of the six EOP staff who told us that they observed a total of 5 to 11 missing office signs were OA employees and worked in the White House complex during the Clinton administration, and that the fifth employee, who worked for the White House Office, also served during the Clinton administration. We did not believe these details were needed and did not revise the report in response to this comment because we generally did not differentiate among staff who had worked in the White House complex before or after January 20, 2001, in reporting the observations. Also in comment 41, the White House said that one of the employees told us that a former Clinton administration employee told her that he also observed two missing brackets on the morning of January 20. However, we did not report this statement because we did not include observations people relayed to us from a third party. Nonetheless, we also interviewed that former Clinton administration employee, who said that he noticed that some office name signs were missing, but could not recall how many. He also said that he did not see any metal frames for the signs that were missing. In comment 22, the White House asked that we quote from a facility request form that asked GSA to “put doorknob on” an interoffice door. In addition, in comment 45, the White House said that we should state that the recollection of a General Services Administration (GSA) planner/estimator regarding this repair is inconsistent with the request form and the recollections of at least three current staff members. The statement contained in the letter portion of the report summarized information provided in more detail in appendix I, where the facility request form was quoted directly. However, we revised the statement contained in the letter portion of the report to quote from the form. Regarding the White House’s request that we state that a GSA employee’s recollection is inconsistent with the facility request form and the recollections of at least three current staff members, the report indicated that an EOP employee told us that he had observed two pairs of missing doorknobs in this office. Because no other EOP staff told us that they observed missing doorknobs in this office, including the employee who prepared the request (who did not ask to be interviewed by us), we did not include the statements contained in the White House’s comments. Further, in the White House’s table of missing doorknobs provided in comment 19, the White House provided the account of only one person who observed missing doorknobs in that office. The White House also said in comments 22 and 45 that, if we include a statement by a GSA planner/estimator that he received no written facility requests made to GSA for replacing office signs, medallions, or doorknobs during the transition, we should cite facility requests to “put…on” a doorknob and for “replacement of frames & medallions,” dated February 7 and April 19, 2001. The February 7 request was contained in the report. In response to the White House’s comments 22, 43, and 45, we added the April 19 request, even though it was prepared 3 months after the transition. The White House also said we should report statements made by two OA officials and a White House Office employee about missing building fixtures. However, we did not believe these additional comments were essential, and one of the statements was information that was relayed to us from a third party, so we did not include them.
The White House also said in comments 20 and 46 that the report should have included an additional television remote control that was observed missing. Our interview notes indicated that one employee initially told us that five or six remotes were missing in a certain office, but later in the interview said that five were missing, which we had used in our draft report in reporting the total number of remote controls observed missing by all EOP staff. However, in response to the White House’s comments, we changed the number that she observed to five or six. The White House also said in comment 46 that we should note that one of the observers had worked in that office during the Clinton administration, which we added because we believed it could be relevant to the observation. However, we did not discuss the two observations of missing television remotes separately, as the White House suggested, because we did not believe the additional detail would add any essential information. The White House said in comments 14 and 59 that we underreported the number of reports of damaged furniture and the number of observers. We did not underreport this information. Our lists of furniture-related problems that were observed were substantially the same as the list that the White House provided in its comments. However, we broke out observations of furniture-related problems into various subcategories, such as broken furniture, furniture with damaged locks, chairs with torn fabric, and desks with burns and scratches. In comments 16 and 66, the White House said that the report failed to include a January 29, 2001, facility request form that documented a request to obtain a key to a file cabinet that was locked in an office where an EOP employee said he had observed damaged furniture. The report had cited a January 25, 2001, facility request made by the same employee to gain access to a locked file cabinet in the same room that was cited in the January 29 request. However, in response to the White House’s request, we added the January 29 request to the report, even though it did not indicate any additional problems were reported. The White House said in comments 17 and 62 that we underreported the number of pieces of furniture that were observed overturned. We compared our interview records to the information provided by the White House and found that our records of the interviews differed from the White House’s account of the interviews in some cases. In one case, when we interviewed an official, he mentioned various pieces of furniture that he had observed overturned, but when he provided a tour of that office to show what he had seen, he did not mention all of the pieces of furniture. We added three additional pieces of furniture to reflect the statement he made during the interview. However, we did not add, as the White House did, observations of furniture in locations that staff could not recall because they could have duplicated ones reported observed in specific locations. In comment 64, the White House disputed a GSA official’s statement that furniture could be overturned for a variety of reasons, such as to reach electrical or computer connections. We obtained this comment directly from GSA on April 30, 2002, and GSA did not raise any objection to it in its comments on our draft report. It is important to note, however, that this statement was a generic possible explanation that did not relate to a specific observation. 
The White House said in comments 18 and 65 that our description of observations of a sticky substance that was found on desks was inaccurate and incomplete, and it also provided further details. We believe that the report generally provided a sufficient level of detail regarding these observations. However, to address the White House’s comments, we added more information about these observations in appendix I. In comment 29, the White House disagreed that the observations of damaged furniture differed from the June 2001 list in terms of total numbers and extent of damage. In our discussion of furniture-related observations in the letter portion of the report, we summarized the extent of damage that staff said they observed regarding broken furniture and stated that no information was provided that identified which offices some of the broken furniture came from or exactly when the damage occurred. Further, no one reported actually observing furniture being intentionally damaged, and no definitive evidence was provided regarding whether the damage was intentional. Consequently, we were unable to conclude whether the furniture in six offices was intentionally damaged severely enough to require complete refurbishment or destruction, as indicated in the June 2001 list. In comment 61, the White House said that we mistakenly reported that five staff said they observed writing inside drawers of five desks and that we were shown writing in four of those five desks. Instead, the White House said that five staff observed writing in or on six desks, that not all of the writing was inside drawers, and that we observed writing in five of the six desks. However, the White House included a sticker on a desk that we had counted in another category of observations (signs and written messages). The report indicated that we had observed that sticker. Finally, the White House said in comment 72 that we were wrong in saying that, during the first of two interviews we held with an EOP employee, she said that her observations, which included desks with burn marks and scratches, pertained to a particular office, rather than a suite of offices. The White House also pointed out that we were taken into the two offices that she was referring to. However, our record of this interview indicated that her observations pertained to a particular office and that she repeatedly referred to the previous occupant of that specific office. Further, when we toured the office suite in question, she did not also stop to discuss furniture in an adjacent reception area. In any event, we reported that in a follow-up interview with this employee, she said that her observations pertained to two rooms in an office suite. In comment 4, the White House said that the statement “[m]ultiple people said that … they observed (1) many offices that were messy, disheveled, or contained excessive trash or personal items” was an understatement, and it provided other observations that were made in the office space, such as “W” keys glued to the walls and overturned furniture. This statement was a part of a summary paragraph of certain observations regarding trash and personal items that were left behind; other types of observations that the White House mentioned are contained elsewhere in the report. In comment 12, the White House said that the report’s description of the seven photographs that were taken of offices in the EEOB on January 21, 2001, was incomplete.
The description of the photographs provided in the letter portion of the report summarized a more detailed description of the photographs that is provided in appendix I. In comments 13 and 92, the White House said that our list of facility request forms in appendix II that document the condition of the offices was incomplete. It cited two facility request forms dated January 30 and others dated January 25, February 17, and February 21. One of the January 30 request forms was already cited in the report, and we added the other one. We also added the January 25 request form, which requested cleaning services in the same room as the February 17 request that was already in the report. We did not include the February 21 facility request form because it was unclear whether the request for carpet cleaning necessarily corroborated reports of pencil shavings, paper, and files on the floor, which were made during the first days of the administration. The request was made a month after the observations, and we did not know whether cleaning was needed as a result of the observations made during the first days of the administration or for some other reason. In comment 13, the White House said that, in describing one of the January 30 facility request forms, our description of the condition of the office where work was requested was incomplete. The White House noted that staff also told us about significant damage to furniture in that office suite, including a desk with its drawer fronts removed, chairs without legs, and a chair with its entire back broken off. However, we did not mention those additional observations with respect to the facility request form because the form did not corroborate them. With respect to furniture, the January 30 request form that the White House cited in comment 13 only requested furniture cleaning. The additional observations that the White House referred to actually pertained to a different office for which another January 30 facility request was made. However, that January 30 request form also did not corroborate observations of broken furniture. With respect to furniture, that form only indicated that furniture cleaning was requested. In comment 90, with regard to the section heading “trash,” the White House said that we apparently equated having trash in offices with a statement in the June 2001 list that offices were left in a state of general trashing, which is not the same as saying that they had trash in them. The White House said that we should revise our “trash” section heading to “trashing of offices.” Although some portion of the observations reported in this section could have been “trashing,” i.e., vandalism, many of them were only observations of trash and personal items left behind. Further, although the White House included in the June 2001 list “glass top smashed and on the floor” under the category of “offices were left in a state of general trashing,” we reported observations of broken glass desk tops in the section of appendix I regarding furniture. Therefore, we changed the section heading not to “Trashing of Offices,” but to “Trash and Related Observations.” In comment 91, the White House said that we had made a gross understatement by indicating staff had observed offices that were messy, dirty, and disheveled. The White House asked that we accurately report what we were told, rather than recharacterize it, and provided a table of statements that staff had made regarding “trashed” offices.
We believe that we already reported a sufficient amount of information about these types of observations. First, we reported the total number of people who observed offices that were messy, disheveled, or dirty, or that contained trash or personal items left behind (a broader category than the White House indicated in its comments) in specific rooms or offices, on certain floors, or in locations they could not recall. Second, we provided several examples of how offices were described. Third, we reported observations in several related categories, such as food left in refrigerators; furniture, carpet, or drapes that were dirty; contents of desk drawers or filing cabinets dumped on floors; pencil sharpener shavings and paper hole punches on the floor, as well as several singular observations. Fourth, we reported detailed observations about trash made by the OA associate director for facilities management and a White House management office employee. Fifth, we described photographs of messy offices that the White House provided. As in several other comments, the counsel to the president asked that we expand our reporting of certain problems by providing selected additional details. However, our goal was to be objective and not only provide additional details that supported a single perspective. In comment 97, the White House said that we improperly redefined the observations as simply a discussion of excessive trash, when the observations were not limited to trash. The White House cited a statement contained in the report made by a White House management office employee who told us what he observed was probably a combination of some trash having been dumped intentionally and an accumulation built up over the years. However, the White House said that this employee’s statement was far more direct and covered more than just trash. According to the White House, when we asked this employee whether the condition of the offices, which included, among other things, filth and trash, was intentional or a result of neglect, he responded that it was a combination. Our interview record indicated that this employee said that he saw trash everywhere, but did not know whether the amount of trash left was intentional or was due to a lack of maintenance. He said the “filth” that he found was probably an accumulation over the years and that some looked like it had been dumped intentionally. He also mentioned that he had found trash in desks and food left behind. We believe that these observations were sufficiently reported and that no additional information needed to be added. In comment 100, the White House said that we failed to report a statement made by an employee who also served during the Clinton administration, who told us that what she observed was way beyond what you would expect to see in a large move, that she was surprised and embarrassed by the condition of the offices during the inaugural weekend, and that she knew that the same offices were in pretty good shape during the weeks and months before the transition. We did not add the statement that the White House suggested because the report already included in appendix II the views of several staff who said that more cleaning was required during the 2001 transition than during previous ones. The White House said in comments 24, 75, and 79 that we underreported the number of telephones observed with missing labels and the number of observers.
The report contained a different number of observed missing telephone labels than the White House indicated, for several reasons. First, our records of observations differed from the table that the White House provided in its comments in some cases. For example, the White House included the observations of 3 to 5 missing labels by two employees that we did not have in our interview records. One of those two employees did not request to be interviewed by us, and we have no record of obtaining comments from that individual. Our record of interview with the other employee (the telephone service director) did not indicate that he observed any labels missing from that room. The interview record also indicated that he said the telephones with missing labels that he observed were all on the first floor of the EEOB; however, the room that the White House cited was on another floor. Because we were informed that this individual had retired from the EOP since we interviewed him, we were not in a position to resolve this. Second, the White House double counted the number of telephones with missing labels in a certain office, which increased the high end of its total range of missing labels; we did not double count them. Third, when we interviewed the telephone service director, he provided somewhat different information during his interview than he did during a tour he gave to show us where he had observed telephones with missing labels. We used the information from the tour because he provided more specific numbers and locations during it than he had during the interview. By contrast, the White House appeared to have counted the information that he provided both during the interview and the tour. Fourth, in its tally, the White House counted at least two missing labels when an individual did not provide a specific number, but said “labels” or “some” were missing, which we did not do in our final count. The total number of missing telephone labels contained in our draft report had included our assignment of one missing label to reflect an instance where the specific number observed was not provided. However, for consistency in reporting all observations when people did not cite the specific number of incidents, we did not estimate the number of telephones with missing labels in this instance and revised our total count by reducing it by one. We also added a footnote explaining that the total range of missing telephone labels does not reflect labels that the telephone service director said he observed in one room without specifying how many. In comment 25, the White House said we did not report how many telephones were unplugged or piled up or how many offices were affected. According to the White House, telephones were piled up or unplugged in 25 or more offices in the EEOB. We do not know how the White House determined this number. According to our records, many of the observations were not precise regarding the locations. In appendix I, we reported that staff observed telephones unplugged or piled up on two floors of the EEOB and in four specific rooms on those floors, but that was the extent to which we could quantify the number of locations. Further, our records indicated that although one official said that he observed seven or eight telephones piled outside an office, the other six employees who said they observed telephones that were unplugged or piled up did not indicate how many they saw.
The White House said in comment 26 that the report failed to mention the telephones that were forwarded and reforwarded throughout the complex during the transition. The White House said that, according to its records, roughly 100 telephones were forwarded to ring at other numbers. These observations were not reported in the letter portion of the report, but they are discussed in appendix I. As indicated in the results section, the observations contained in the letter portion of the report were those made in specific locations in the main categories, and the employee who said that about 100 telephones had been forwarded to ring at different numbers, with one exception, did not cite the specific locations of those telephones. The White House said in comments 27 and 74 that the report did not adequately and correctly disclose information about telephone lines that were observed ripped from walls. In comment 27, the White House said that, if we had reported that the people who made the observations did so early in the morning on January 20, the comments made by a former Clinton administration employee who said the cords were probably torn by moving staff would be less credible because the moving staff did not begin work until later in the day. In response to the White House’s comments, we added additional information to appendix I about when EOP staff observed cords pulled out of walls. We also revised a statement made by a former Clinton administration employee who said that the cords were probably pulled from walls by moving staff to clarify that (1) the cords she had seen pulled out of walls were not observed around the time of the transition and (2) she intended to provide a possible explanation on the basis of a previous observation. In comment 74, the White House said that our data on the number of cut and pulled cords were not accurate. Our total numbers of observations and observers in this category were substantially the same as the White House’s, but we reported them differently. We reported observations separately of telephone lines ripped or pulled from walls; other types of cords pulled from walls; damaged plugs; and a telephone cord that appeared to have been cut with scissors. In addition, it appeared that the White House counted an observation of a ripped cord that was not made in a specific location, which we did not count. In comment 75, the White House questioned why a footnote contained in the draft report reported a range of telephones in a certain office. We could not determine the exact number of telephones in that office from the documentation that the White House provided. Accordingly, we changed the number to reflect an estimate provided by the White House. The White House also said that a total of five, not four, staff observed missing labels, which we revised in the report. Also in comment 75, the White House said that our report did not include an observation that telephone labels in one room were replaced “before noon” on January 20 and were missing again later that day. We added that to the report.
The White House also said in comment 75 that, in addition to the number of missing labels that were reported in specific rooms and offices, we should have reported the observations of missing labels by the telephone service director, who said that he personally saw more than 20 telephones with missing labels; the OA associate director for facilities management, who said that there were many instances of missing labels on telephones; and another employee who said she was the “middleman” between EOP staff and contractors regarding the telephones during the first month of the administration and said that the majority of telephones in the EEOB and the White House (roughly 85 percent) had removed labels or contained incorrect numbers. The telephone service director’s recollections regarding the number of telephones he observed with missing labels in specific rooms or offices were included in the total number observed by all staff, and we did not believe it was necessary to break out the number he personally observed missing. Although the OA associate director for facilities management did not indicate how many telephones he observed with missing labels, his observations were made in two offices where others observed specific numbers of missing labels, and the other people’s observations are reported in the total. Finally, the observation of the employee who was the “middleman” between EOP staff and contractors regarding the telephones during the first month of the administration was already contained in the report. According to the White House, this employee said that a majority of labels on telephones, or about 85 percent, had been removed “or contained incorrect numbers.” Our record of this interview indicated that she said that about 85 percent of the telephones were missing labels “or did not ring at the correct number,” so we did not revise the report. In comment 76, the White House said that we underreported the number of telephones that were forwarded and reforwarded to ring at different numbers throughout and between the EEOB and the West Wing, and indicated that seven White House staff reported that roughly 100 telephones were forwarded to ring at other numbers. Further, the White House said that it did not know why we treated the observations of the employee who coordinated telephones during the first month of the administration differently from those of the other observers. The White House also questioned why we did not report that this employee told us that the chief of staff’s telephone was forwarded to a closet. We did not underreport the number of reports of telephones that were forwarded and reforwarded. Our count of the number of forwarded telephones was substantially the same as what the White House indicated in its comments. However, we reported the observations made in specific locations separately from the observation made by the employee who coordinated telephones during the first month of the administration. As explained in our response to comment 26, that employee said that about 100 telephones had been forwarded to ring at different numbers, and with one exception, she did not cite the specific locations of those telephones. Further, according to its comments, the White House counted the observation of an employee who said that his telephone did not ring when the number shown on it was dialed. Our record of interview with that employee was different and indicated that his telephone had a number for an extension that was different from his actual telephone number.
We did not count that statement as an instance of a forwarded telephone. In addition, as indicated in the report, the forwarded telephone in a specific location that was observed by the employee who coordinated telephones during the first month of the administration was included among the 100 telephones that she said were forwarded to other numbers. With respect to the one specific telephone that she cited, our interview records indicated that she told us that the chief of staff’s telephone had been forwarded, but did not indicate that it was forwarded to a closet. The White House said in comment 78 that we had dramatically understated the number of telephones that were not working by failing to report that one EOP employee said that no telephones were working on the south side of the EEOB. Our record of the interview indicated that she told us that, because many telephones were not working in a section of a floor of the EEOB, the switchboard forwarded calls from that area to other offices where telephones were working, and that she walked from office to office delivering telephone messages; we added that to the report to address the White House’s comment. However, we did not estimate the number of telephones that were not working in that part of the building and did not know whether they were not working because of an intentional, malicious act. In comment 80, the White House said that we failed to provide important information regarding the extent of the problem with voice mail messages and the consequences of this problem—that no one had voice mail service for the first days and weeks of the administration. The White House said those facts concerned the reports of obscene voice mail messages that were heard by the telephone service director and the OA associate director for facilities management. The White House also said that we should have reported that when these two officials began touring offices and checking telephones in the EEOB at approximately 1:00 a.m. on January 20, the telephone service director listened to about 30 greetings, approximately 10 of which were inappropriate. Further, of those 10 inappropriate messages, the telephone service director said 5 or 6 were vulgar. In addition, the White House noted that the telephone service director said that White House telephone operators notified him that there were obscene messages on some of the voice mail greetings. The White House said that, after encountering the high ratio of inappropriate and vulgar messages, a decision was made around 1:00 a.m. to take the entire system down because of them. Further, the White House said that the telephone service director explained that he erased some messages around 1:00 a.m. on January 20, and they were rerecorded later that day. Our interview records indicated the OA associate director for facilities management heard an inappropriate voice mail message, but he did not tell us about hearing obscene voice mail messages. The report had indicated that two EOP employees who helped establish telephone service for new staff, including the telephone service director, said they heard a total of six to seven obscene voice mail messages that were left on telephones in vacated offices. In addition, we had reported that the telephone service director said that inappropriate and vulgar voice mail messages were initially erased on an individual basis, but it was eventually decided to erase all of them.
Further, we reported that the OA associate director for facilities management said that so many complaints were received about voice mail that voice mail service was discontinued for a while to clear out the system, and that no one had access to voice mail for at least 5 days and possibly up to 2 weeks. To provide additional detail about when the inappropriate and vulgar voice mail messages were heard, in response to the White House’s comments, we added that the telephone service director said that he heard inappropriate and vulgar voice mail messages during the early morning hours of January 20. We did not report what the telephone service director said he was told by telephone operators about hearing obscene voice mail messages because it was information that was relayed to us from a third party. Further, according to our record of interview with the chief telephone operator, she told us that operators received some calls from staff complaining about not getting their voice mail and that their telephones were not working correctly, but she did not mention complaints about obscene voice mail messages. Finally, regarding the messages that the telephone service director said he erased during the early morning hours of January 20 and that were rerecorded later that day, he said that those messages were not inappropriate in nature. Because they were not inappropriate in nature and could have been left for business reasons, we did not believe that this additional information needed to be reported. In comment 105, the White House said that the report’s description of two observations of pen and pencil marks on walls, but no words, did not adequately describe what we were told. The White House noted that these were not observations of a stray pen mark, as it said the report suggested. Rather, the White House said, one observation was that an entire wall in an office was covered in lines that at a distance appeared to be cracks. Further, the White House said this observation was confirmed by an OA employee who said that she too had heard that someone had etched a wall like marble. However, regarding that observation, the report already indicated that the employee who made it said the marks looked like cracks in the paint, but because they washed off, he thought it looked like someone had used a pencil on the wall. Further, because it was information relayed to us from a third party, we did not report what someone had told the OA employee about a wall etched like marble. Regarding the other observation, the White House noted that an employee said that a wall was covered in pen and pencil marks, which she described as slasher marks and beyond normal wear and tear. According to our interview record, this employee said she requested that the walls be repainted in one room because there were pen and pencil marks on them, but no words were written. We did not believe that these additional details were essential or needed to be added to the report. The White House said in comment 108 that we failed to include the telephone service director’s statement that he found classified documents in a safe during the night of January 19. We added that observation. The White House also noted that it was not surprising that the director of records management did not find sensitive documents in the counsel’s office because the occupants of those offices did not depart their offices until after he had checked for documents there. However, his statement related to classified, and not sensitive, documents.
The White House said that we had underreported or failed to report the costs of various items, including those associated with cleaning, telephones, missing items, keyboards, and furniture, as well as other costs. In comments 30 and 99, the White House said the report omitted the costs associated with a January 30, 2001, facility request form asking for cleaning services. GSA provided two copies of this form, both with the same document number. On one copy, cleaning services were requested. No costs were provided on that copy of the form, which indicated that the services were completed on January 31, 2001. The second copy said “making new drapes” and indicated that the work was completed on March 2, 2001, at a cost of $2,906. We attributed the $2,906 cost to the making of new drapes and not cleaning. During our interviews with staff working in this office, no one mentioned observing problems with the drapes. Also in comment 99, the White House said that we could have, but did not, determine how much time and money were spent paying the cleaning staff and how much should have reasonably been spent on the basis of the amounts spent during past transitions or estimates provided by administrative staff. Further, the White House said that we already knew that the costs exceeded what was expected because the OA associate director for facilities management told us there was “lots of money that was spent that shouldn’t have to be spent.” Our record of the interview with the OA associate director for facilities management did not indicate that he told us this. He did say that during the last couple of years, Clinton administration staff kept some rooms in a “much less desirable fashion,” and the space did not look much different during the transition. He also said more people were working in the EEOB during the Clinton administration than during previous administrations. The director of GSA’s White House service center similarly said that he did not see any difference in the condition of the rooms during the transition from when he saw them 2 to 3 years before. He said that he did not think the departing Clinton administration staff were being intentionally messy on January 20 and that they had been like that all of the time. He also said that he observed more personal belongings left behind during the 2001 transition than during the 1989 transition, but that the condition of the offices during the 2001 transition was the same as that during the 1989 transition. Accordingly, we did not estimate or include incremental cleaning costs, as the White House suggested. In comments 30 and 81, the White House said that our report was inaccurate and incomplete with regard to the cost of replacing removed labels and rerouting forwarded telephones. It is unclear why the White House said that our report was inaccurate regarding these costs. We did not report any aggregate costs for replacing labels or rerouting forwarded telephones, but cited hourly rates for telephone service work that are the same as those contained in the White House’s comments. We also cited the cost of removing a telephone from an office, which the White House did not dispute. With respect to the completeness of cost data, we did not report a total cost figure for replacing missing labels or correcting forwarded telephones because we did not believe the documentation provided by the White House was clear and descriptive enough for us to do so.
For correcting forwarded telephones, the White House provided one telephone service request that said a telephone line did not ring on a particular set. However, it did not state the cause of the problem, so we did not know whether the cause was forwarding or something else. Most of the White House’s points in comments 30, 79, and 81 addressed the costs associated with replacing missing labels. It said that (1) we should estimate how much it would cost to replace the labels reported to us as missing, (2) our statement that orders included other services was incorrect and that placing button labels on telephones means, beyond a doubt, replacing missing labels, (3) we never discussed the closed orders log with OA’s telephone services coordinator, and (4) the closed orders log does more than mention labels. The White House estimated that $6,020 was incurred to replace missing labels and correct forwarded telephones, and said that we had ignored the information it had provided on this issue. As its basis for the $6,020 estimate, the White House cited two blanket work orders and related bills for work that included relabeling telephones on January 20 and 21, 2001. The costs attributed by the White House to replacing labels and correcting forwarded telephones for both of these orders were $2,490. The White House arrived at its $2,490 estimate for relabeling telephones and correcting forwarded numbers, which it considered conservative given the number of missing labels and forwarded telephones, by assuming that technicians spent 10 percent of their time on these two days fixing these two problems. While we do not question that labels were missing, that telephones were forwarded, or that the government incurred costs for replacing missing labels and correcting forwarded telephones, we have no information on the extent to which technicians spent their time fixing these problems on January 20 or 21, 2001, nor any basis to develop an estimate for this. Furthermore, if technicians replaced the labels reported missing under the blanket work orders, as the White House suggested, then it is unclear why there would also be individual work orders to replace those same missing labels. The White House’s support for the remaining $3,530 (of the $6,020 estimate) consisted of items shown on the closed orders log for the period January 20, 2001, through February 20, 2001; the individual service requests provided, which cite placing labels on telephones; and AT&T invoices. We reviewed this information. In fact, we reviewed it carefully, and our record of interview indicated that we did discuss the closed orders log with the OA telephone services coordinator. We did not believe the closed orders log, the individual service requests, or invoices that the White House provided had enough information for us to definitively conclude that the costs shown were solely for replacing missing labels or provided a sufficient basis to compute an estimate of those costs. With one exception, neither the closed orders log nor the individual service requests the White House provided specifically cited replacing labels that had been removed, and in every case for which we have a telephone repair document, another service was cited along with placing labels on telephones, including the service requests for the one exception referred to above.
For example, one service request cited in the White House’s comment letter as needing a label placed on a telephone by a technician actually said: “need line 65240 to ring on my phone 66522. On 66522 add 65240 on button 7 and 8. Need label placed on set by a technician.” According to the White House, the charge for this service was $75.92. Another service request that the White House included in its $6,020 estimate was, it said, for placing labels on sets. The White House said the estimated cost of this work order was $151.84, based on billing for 2 hours of work. The corresponding entry for this service request on the closed orders log says, “INSTALL (2) 8520 SETS IN RM-200, NEED LABELS PLACED ON SETS.” The White House did not provide the individual service order for this repair. The one service request cited above as an exception, which was dated January 29, 2001, read: “Replace labels on all phones that removed” along with other services in a room for which the White House said the bill was $75.92. The corresponding entry in the closed orders log for this order was “INSTL NEW# 62926, 65961 / REPLACE LABEL.” We do not have any additional information to explain the difference between the individual service request and the log. A number of service requests that involved placement of labels also involved programming or reprogramming of telephones. For example, the White House cited a work order indicating that labels were needed, among other things, in several rooms at a cost of $341.64, which read: “Disconnect 6-9008 in Room 271 OEOB. Reprogram sets in Rooms 263, 265, 266, 267, 268, 269 and 271. Need labels placed on each set.” The requirements portion of the work order indicated “change” and “disconnect.” Thus, it is unclear from the information provided whether labels were needed because (1) they were missing, (2) there was a change in telephone service or functions as a result of the reprogramming that could have affected the labels, or (3) both conditions existed. It is also unclear to us from the information provided by the White House why telephones had to be programmed or reprogrammed if the only problem was a missing label and why 4 hours of work were required solely to place labels on telephones for each of four service requests. Where labels were missing, it appears that a new label could in some cases have been needed due to changes in telephone service or functions desired by new occupants, such as adding a new number to a telephone. Regarding the White House’s statement that placing button labels on a set means replacing missing labels, in addition to the above examples, we note our discussion with the OA telephone services coordinator, during which she said that service orders mentioning labels listed on the closed orders log do not necessarily mean that telephones were missing labels. We did not discuss with her each entry on the closed orders log that cited labels because it did not appear necessary at the time of our interviews with her, and it was clear that we were discussing the closed orders log. An associate counsel to the president attended our meetings and raised no objection or concern about this issue at the time of the meetings. Further, although the OA telephone services coordinator told us that she had records from which she could estimate the total number of telephones with missing labels and the associated costs to replace them, we did not receive this information.
While there could have been a misunderstanding between us and the telephone services coordinator on the meaning of the terms on the closed orders log, we believe she clearly understood that we were seeking information about the number of missing labels and the associated costs, and because she said she would provide this information to us, we saw no need to request additional documentation on this issue at that time. As a related issue, the White House said in comment 81 that it explained to us that there is no separate charge when a system analyst performs work, such as reprogramming a telephone, that does not require a technician to be dispatched to an office. According to the White House, if a technician must go to the office to replace a label, there is a minimum charge for each hour or portion of an hour, even if the work takes only a few minutes. The White House did not document this until after we had sent our draft report. While we do not question that situations may have existed in which the only service provided for which a cost was incurred was to replace a missing label, we cannot determine to our satisfaction from the documentation provided to us the extent to which these situations occurred. Given the examples we cited above in which other services besides placing labels on telephones were provided, the extent to which costs were incurred just for replacing missing labels is unclear. The extent to which new labels would have been needed anyway due to changes desired by new office occupants is also unclear. Further, given the OA telephone services coordinator’s statement about the little time needed to replace telephone labels, it is unclear why technicians would have spent 4 hours just placing labels on telephones in some cases where the service order shows programming telephones as the only other service besides placing labels on sets. It is also unclear why a generic or blanket service request to replace missing labels was not prepared if this was the only service needed. It would appear that such an order would have been less costly to the government than preparing individual service orders for individual telephones or offices, given that it only takes a short time to place a label on a telephone. Given all of the questions we have about the information the White House provided on costs associated with replacing labels, we are not making any estimates of such costs. To do so would require additional details on the work that was done in response to requests for telephone service involving placing labels on telephones. Obtaining this information could have required discussions with the technicians who performed the work, which could have involved additional costs to the government. Given this and the time and effort that would be required by us and White House staff, we did not believe further exploration by us of the costs involved with replacing labels would have been cost-beneficial to the taxpayers. Finally, we modified our report to reflect the White House’s comments 79 and 81 that the closed orders log does more than mention labels, as well as to address comment 30 regarding replacing labels, as we deemed appropriate. In comment 31, the White House objected to our deducting the value of one doorknob to reflect the statement of a GSA employee who said that a facility request form regarding work in an office where two pairs of doorknobs were observed missing was not done to replace a missing doorknob, but to perform maintenance on a worn-out part.
The White House pointed out that the GSA employee’s statement was inconsistent with the facility request form and the recollections of at least three current staff members. We discussed the observations regarding these doorknobs in our response to comment 22. Regarding the related cost issue, we recognized the GSA employee’s statement in this case because he said that he was responsible for repairing and replacing building fixtures in the EEOB, including doorknobs. The report still included the cost of replacing three of the four doorknobs that were observed missing in this office, totaling $700. The difference in deducting the cost of one doorknob in this case was $100. In comment 47, the White House said it was untrue when we reported that we did not obtain any information about the possible historic value of the seal that was stolen. The White House pointed out that we were told in writing that the $350 purchase price would not purchase an exact replica of the brass seal that was stolen; that the seal was purchased in the mid-1970s and is no longer available; and that the $350 would purchase a plastic-type casting. The statement that was included in the report about this historic value was intended to convey that we did not obtain a dollar value associated with the historic value of the seal; we clarified that statement accordingly. In addition, to address the White House’s comment, we added the additional details provided. In comment 58, the White House disagreed with our reporting of costs associated with replacing damaged keyboards for three reasons. First, it said that our estimate of 30 to 64 keyboards observed with missing or damaged keys was incorrect and, using a different counting methodology, should be 58 to 70. It also said that the numbers only represented observations made in specific rooms or offices and did not account for the observations of other EOP staff who told us about additional damaged keyboards, such as the branch chief for program management and strategic planning in the information systems and technology division, who said that 150 keyboards had to be replaced. We addressed this point in our response to the White House’s general comment about the number of observations reported and in our response to comments 10 and 51. We also revised the table in the report to clarify that the range of keyboards pertained to observations made in specific rooms or offices. The statement by the branch chief for program management and strategic planning in the information systems and technology division, who said that 150 keyboards had to be replaced, was already included in the table and apparently overlooked by the White House. Second, the White House noted that we included an estimate that the OA associate director for information systems and technology provided in February 2002, even though she said that her memory regarding that matter was not as good as when we interviewed her in June 2001. However, this official’s statement in June 2001 that 64 damaged keyboards had to be replaced was also included in the table. Because we did not know which figure was correct, we included both statements made during the two interviews. Third, the White House said that it was not accurate to represent that the OA associate director for information systems and technology said that one-third to one-half of the keyboards may have been replaced every 3 or 4 years because of their age. We addressed this point in our response to comment 56.
In comment 69, the White House said that we failed to mention costs attributable to damaged furniture and did not attempt to estimate the costs of replacing furniture that was discarded because it was beyond repair. However, as indicated in the letter portion of the report and appendix I, the OA director told us that no record existed indicating that furniture was deliberately damaged and that no inventory of the furniture in the EEOB exists. Further, although in April 2002 an associate counsel to the president provided us with photographs of four pieces of furniture that she indicated were moved to an EOP remote storage facility, no information was provided regarding the offices from which these pieces had been taken or when or how the damage occurred. In comment 69, the White House also said that we had failed to quantify very real costs incurred, such as having movers remove damaged furniture and return with replacement furniture, having movers set overturned furniture upright, and removing the glue-like substance from desks. We did not believe it would have been cost-effective for us to attempt to estimate these costs, and our report clearly indicated that we did not attempt to obtain cost information related to all observations reported to us. In comment 32, the White House said that we failed to quantify certain additional costs that were incurred as a result of damage, such as the time expended by computer staff and contractors to replace damaged keyboards; the time spent on removing “W” keys and prank signs affixed to the walls; and the time spent to clean up trash and dirt that exceeded reasonable amounts or amounts seen in prior transitions. The White House said that it would have been possible for us to have generated a range of estimates, but that we chose not to, resulting in a substantial underreporting of the very real costs associated with the damage, vandalism, and pranks that occurred during the transition. Although it is possible that we could have estimated some additional costs potentially attributable to intentional acts, we did not believe it would have been cost-effective for us to have done so. For example, we did not believe that our time and resources should have been expended on estimating any possible incremental costs to remove “W” keys and prank signs that were placed on walls, or that any such estimates would likely have been material. Further, we did not have a sufficient basis to conclude that all of the damage that the White House cited, such as broken furniture and copy machines, was caused by intentional acts. Accordingly, we did not provide such costs in our report. The White House said additional details should have been reported about certain observations, such as those relating to telephones, furniture, keyboards, a missing office sign, a copy machine, and writing on walls, details that it said would have allowed readers to determine whether incidents were done intentionally and, in some cases, that they were likely done by former Clinton administration staff. In comment 28, the White House said that, in many cases, the undisputed facts indicated when incidents occurred and who the likely perpetrators were and cited several examples. In particular, the White House took issue with a statement in the report that we were generally unable to determine who was responsible for the incidents that were observed, and said we simply failed to determine who was responsible.
For example, the White House said we did not try to contact the former occupants of offices where messages other than those of “goodwill” were left. Examples that the White House cited regarding telephone labels and furniture are discussed in our responses to comments 6 and 15 below. The White House also cited examples regarding the placing of glue on desks; the leaving of prank, inappropriate, and obscene voice mail messages; and the removal of keys from keyboards, which are discussed below. We agree that the likely perpetrators could be identified from the observations and available information with regard to a few of the observations that were made. For example, because the telephone service director said that a passcode was needed to record voice mail greetings, it was fair to conclude that the previous occupants left the voice mail greetings that were heard. Moreover, we had concluded in the report that the leaving of certain voice mail messages, the placing of glue on desks, and the removal of keys from keyboards were done intentionally. However, the White House is incorrect in asserting that we did not try to contact the former occupants of offices where messages other than those of goodwill were left. As explained in our scope and methodology section, we contacted 72 former Clinton administration staff, most of whom had worked in offices where observations were made, including numerous staff who worked in offices where signs and messages were observed and heard, and not only those that were of goodwill. When we contacted them, we described or showed lists of the observations that were made in their former offices and asked for any comments or explanations. However, former Clinton administration staff we contacted did not provide explanations regarding every observation, and we did not contact all former Clinton administration staff because we did not know where they were and because of the level of resources that would have been required. In addition, regarding the reports of obscene or vulgar voice mail messages that were left, specific information was not provided about which telephones those messages were left on, so we could not ask any particular former staff about them. Moreover, it is speculative to suggest that, had we contacted additional former Clinton administration staff, we would have obtained undisputed facts regarding when the incidents occurred and the likely perpetrators. The White House also said in comment 28 that our report suggested that contract movers and cleaners were responsible for vandalism, damage, and pranks, which it believed to be an insult to the contract personnel. Our report did not state that these contract personnel intentionally caused any damage. However, they were among other individuals in the complex during the transition besides former Clinton administration staff, which made it more difficult to narrow down the people who were possibly responsible, either intentionally or unintentionally, for the problems that were reported. We made a written request to the White House for a list of the number of visitors cleared into the EEOB during the weekend of January 20 and 21, 2001, and their respective organizational affiliations. However, the White House declined to provide that information, indicating that it was available from the individuals responsible for hiring and supervising contractors, who may have already provided us with estimates regarding the number of contractors.
We were provided with information regarding a certain number of GSA contractors who were in the complex that weekend, but not about other contractor staff, such as those working with computers, or any other visitors to the complex. In comment 28, the White House cited observations made in the vice president’s West Wing office, including an oily glue-like substance smeared on desks; prank signs on walls and interspersed in reams of paper in printer trays and copy machines; and vulgar words on a white board, all discovered between midnight on January 19 and noon on January 20. The White House said that it could be reasonably concluded from these observations that the damage occurred shortly before the inauguration and that former Clinton administration staff were the likely perpetrators because it can be presumed that the former office staff did not work under those conditions. However, in certain respects, our interview records differed from what the White House indicated in its comments regarding these observations. Although all three staff told us they observed the glue-like substance and prank signs, none of them said they saw vulgar words written on a white board. One of the employees said that her staff told her that they had seen vulgar words written on a white board there, but we did not interview anyone who personally saw that, and we did not include information people relayed to us from third parties. We would agree that, on the basis of the timing of these observations, the acts were likely carried out shortly before the inauguration, but in the absence of witnesses or other evidence we are not in a position to conclude who was responsible. In comment 35, the White House said that our list of incidents that were done intentionally was incomplete and provided several additional cases that it said appeared to have been done deliberately by former Clinton administration staff. Our conclusion that the leaving of signs and written messages was intentional was meant to encompass certain observations that the White House cited in comment 38, including a Gore bumper sticker stuck to the inside of a copy machine, writing on and in desks, and a sticker in a filing cabinet. Further, our conclusions were not meant to be comprehensive at the same level of detail that the White House indicated, but did include damage to “W” keys, in addition to “W” keys removed from keyboards; “W” keys glued to walls and placed in drawers; the removal of an office sign that was witnessed by an EOP employee; and desk drawers turned over. Finally, we could not conclude, as the White House did, that certain incidents, such as a lamp placed on a chair and pictures and other objects placed in front of doors, were done deliberately by former Clinton administration staff. It seemed equally likely that they could have been done as part of the moving-out process. Further, the White House’s statement that most, if not all, printers and fax machines were emptied of paper in vacated offices was not contained in our interview records, and it was not clear whether that would have been done intentionally. Other incidents that the White House listed relating to telephones and furniture are discussed below. In comments 38 and 68, the White House said that we should report the views of many staff who said that, on the basis of their first-hand observations, damage appeared to have been done intentionally.
In our report, we included examples of statements made by some individuals who told us they believed the incidents they observed were done intentionally and some individuals who told us they did not believe what they observed was done intentionally. However, we did not include all statements made by all individuals about views on whether incidents were done intentionally. In any event, without having observed the incidents being carried out, people’s views on whether incidents were intentional were speculative in many cases. In comment 6, the White House said that it did not understand why the report indicated that the documentation provided indicated that much telephone service work was done during the transition, but did not directly corroborate allegations of vandalism and pranks regarding the telephones when several staff members reported observing telephones with missing labels. However, the documentation provided did not show what caused the needed work or that the labels were intentionally removed from offices as acts of vandalism. Further, our conclusion is consistent with the OA director’s April 18, 2001, statement that “…repair records do not contain information that would allow someone to determine the cause of damage that is being repaired.” As noted in the report, some former Clinton administration staff said that telephones were missing labels during the Clinton administration, primarily because those telephones were only used for outgoing calls. Although the OA telephone services coordinator said she believed that telephone labels were removed intentionally, she said the documentation regarding telephone service requests that mentioned labels did not necessarily mean that the telephones had been missing labels and that new labels might have been needed for a variety of reasons. In comments 28 and 36, the White House noted that, according to the telephone service director, some of the missing telephone labels that were replaced before noon on January 20 were found missing again later that day, which indicated that the removal of at least some of the labels was an intentional act that occurred before January 20 and that outgoing staff were almost certainly responsible. We would agree that, on the basis of the telephone service director’s observation on January 20, some telephone labels were intentionally removed. Although these circumstances may suggest that some telephone labels were removed by departing Clinton administration staff, in the absence of any witnesses we were not in a position to conclude who was responsible. No documentation was provided relating specifically to these observations. The White House also said in comment 6 that staff noted that telephones were left on the floor and that the documentation showed a request for a technician to retrieve a telephone found on the floor of an office. Although this telephone service request corroborated a request to retrieve a telephone in an office where an EOP official observed telephones piled on a floor, we did not conclude that this corroborated an act of vandalism because the request did not indicate why the telephone was left on the floor. In comment 36, the White House said that we should report the views of many staff who said that, on the basis of their first-hand observations, damage appeared to have been done intentionally, including the OA telephone services coordinator, who said that missing telephone labels must have been intentional. The OA telephone services coordinator’s comment was included in the report.
In comment 82, the White House objected to a statement attributed to the director of GSA’s White House service center, who said that there were any number of reasons why problems could have been observed with telephone and computer wires besides people having cut them deliberately because, for example, the cleaning staff could have hit the wires with the vacuum cleaners or computer staff could have been working with the wires. According to the White House, this statement would be relevant only if the cut and pulled wires were observed after the cleaning and computer staff had entered the offices. The White House noted that the two employees who reported the cords pulled from the walls observed the damage in the early morning hours of January 20 before any cleaning staff had entered the rooms and before the computer staff entered the rooms to archive computer data. However, although the cleaning crew for the transition began on January 20 and the archiving of data from computers was taking place in the morning of January 20, other cleaning and computer work undoubtedly was done in offices at some point before January 20. Further, even though the staff made these observations on January 20, we did not know when and how the wires became separated from the walls. In addition, the employee who observed at least 25 cords pulled out of walls, whom the White House did not mention in this comment, said that she made her observation on January 22. In addition, the January 24, 2001, GSA facility request that this employee submitted did not state that cords were separated from the walls; the request was to “organize all loose wires and make them not so visible.” In comments 15 and 36, the White House objected to a statement attributed to former Clinton administration staff who said that some furniture was broken before the transition and could have been the result of wear and tear, and little money was spent on repairs and upkeep during the administration. According to the White House, the statement could not be squared with the circumstances surrounding the reported damage. It also noted in comment 36 that it would be odd behavior for office occupants to have broken chairs through normal wear and tear and leave them unrepaired for some time. Further, the White House provided examples of additional details regarding observations made by EOP staff regarding furniture problems, which it said suggested that the damage was intentionally done by former Clinton administration staff or was done shortly before the inauguration. As previously explained, we did not obtain comments from former Clinton administration staff regarding every observation, including all furniture-related problems. Therefore, we agree that the above statement made by former Clinton administration staff does not necessarily apply to all observations of furniture-related problems. With respect to the White House’s assertion that it is difficult to believe that office occupants would not remove certain broken furniture, as indicated in the report, the former director of one office where EOP staff told us they observed pieces of broken furniture said that the office furniture had been in poor shape for some time, but the staff tolerated it. The former director added that they did not want to send the furniture away to be repaired because it was uncertain how long it would take or whether the furniture would be returned.
We also note that, in August 2001, we observed a desk in the EEOB with detached drawer fronts that had not been repaired, and the staff in that office said the desk had been in that condition since they arrived in January 2001. Further, although the White House said in comment 15 that the details regarding certain observations suggested that furniture was intentionally damaged by former Clinton administration staff or occurred shortly before the inauguration, we could not make any definitive conclusions about how the damage occurred and who may have been responsible for it on the basis of those details or the statements of some EOP staff who said that it appeared that certain damage had been caused intentionally. In comments 28 and 36, the White House cited several cases in which it said the undisputed facts indicated when furniture was damaged and the likely perpetrators. Also, in comment 67, the White House said that the overwhelming circumstantial evidence indicates when the damage occurred, whether it was intentional, and who the likely perpetrators were. In comments 15, 28, 36, 60, and 67, the White House described a case involving a key that was observed broken off in a file cabinet, still hanging in the lock by a metal thread; when the locksmith opened the cabinet, a Gore bumper sticker with an anti-Bush statement was prominently displayed inside. According to the White House, the circumstances in this case suggested that the damage occurred not long before the inauguration, was intentional, and was done by a former Clinton administration employee. Our interview records regarding this incident differed in certain respects from what the White House indicated in its comments. Although the staff said they saw a broken key in the cabinet and one employee said that he found two Gore stickers inside, none of them said they observed an anti-Bush statement prominently displayed inside. One of the employees said that another person told him he saw a Gore sticker with a message that was derogatory about the president written on it. We did not report what the other person had told him because it was information relayed to us from a third party. Further, when we interviewed the person who reportedly observed the anti-Bush statement written on a sticker, he told us about seeing two Gore-Lieberman stickers inside the cabinet, but he did not mention any writing on them. Although we believe that it is likely that political stickers were left in a cabinet around the time of the election, it is speculative to conclude that the individual who left the sticker inside the cabinet was the same person who broke the key off in the lock, and that the key was intentionally broken off in the lock. Also in comments 28, 36, 60, and 67, the White House cited a similar case about locked desk drawers that, when pried open, contained two pieces of paper with anti-Bush statements. We had already concluded in the report that these written messages were done intentionally. The White House also cited cases in comments 28 and 67 that it said suggested the damage occurred shortly before the inauguration. In one case, the White House cited the statement of an employee who said that she saw damaged furniture in offices where things looked pretty good weeks or months earlier, which the White House said suggested that damage was done shortly before the inauguration weekend.
According to our interview record with this individual, the only observations that she made regarding furniture were of doors on a wall cabinet hanging on only one hinge and upholstered furniture that was filthy, which she attributed to dirt that had built up over time. Although the cabinet doors could have been damaged around the time of the transition, the upholstered furniture probably did not become dirty then. In the other case, the White House said the nature of damage suggests that it occurred shortly before the inauguration because the offices’ prior occupants and cleaning staff would not have let the damage remain in the office for long. For example, the White House said that it would be hard to believe that occupants would not fix or remove a bookcase with shards of broken glass inside. While we would agree that we would not expect shards of glass inside a bookcase to remain for long, we did not have any information indicating when the damage occurred, or whether it was done accidentally or intentionally. In comment 36, the White House said that, with respect to our statement that we did not know whether furniture was broken intentionally, and when and how it occurred, it was not plausible to think the cleaning staff completely broke off the backs and legs of multiple chairs within the same office and then left that furniture in the offices for the new occupants. We did not suggest that the cleaning staff broke furniture. However, we note, as discussed above, that some former Clinton administration staff said that certain pieces of furniture were already broken prior to the inauguration and had not been repaired. The White House also said in comments 38 and 67 that the nature of some of the damage and the surrounding conditions suggested that it was done intentionally and/or was done shortly before the transition weekend. For example, the White House cited the observation of an EOP employee who said that her desk drawers clearly had been kicked in and this damage was not just wear and tear. Our interview record with this individual indicated that she observed a desk where the locks on a drawer had been damaged and the drawers could not be opened, but did not indicate that she said the drawers had been kicked in. In another case, in comments 36 and 67, the White House cited an observation of two seat cushions slit in an identical manner on apparently new upholstery, indicating that this was not done accidentally. Although it is possible that this observation was of vandalism, it was unknown when and how it occurred and who may have been responsible. No information was available about the offices from which these chairs were taken (they were observed in a hallway on January 21), and we did not observe these chairs ourselves to inspect the damage. Also in comment 36, the White House said that it was not reasonable to conclude that furniture was overturned unintentionally because most of the witnesses observed overturned furniture before the cleaning staff or new occupants entered the rooms, and it was not plausible to think that cleaning staff would have upended extremely heavy furniture in the manner described. Further, the White House pointed out that two GSA officials said that cleaning staff would not move large pieces of furniture, and none of these things would happen in the normal course of moving out of an office.
According to our interview records with these individuals, one GSA official said that while cleaning staff do not normally move furniture to clean offices, furniture could be overturned for a variety of reasons, such as to reach electrical outlets or computer connections. The other GSA official said that he did not see any damage or pranks during the transition and did not mention overturned furniture, according to our interview record. Although we would agree that the furniture was overturned intentionally and that it was unlikely that cleaning staff would have upended extremely heavy furniture in the manner described, some former Clinton administration staff who occupied the former offices where overturned furniture was observed said that it would have been difficult or impossible for them to move certain pieces of furniture. Moreover, the cleaning staff did not enter these offices for the first time on January 20; according to GSA, cleaning is done continuously. Although we would agree with the White House that it is reasonable to conclude that furniture was overturned intentionally, we do not believe that a sufficient basis existed to conclude, as the White House did in comment 36, that most of the people who observed overturned furniture made their observations before the cleaning staff or new occupants entered the rooms. According to our interview records with the seven staff who observed overturned furniture, none of whom were new occupants of those rooms, two said that they made these observations in the early morning hours of January 20 before the transition cleaning crews arrived; three said that they made those observations during the afternoon of January 20; and the other two did not tell us the time they observed the overturned furniture. Although the descriptions provided by the observers suggested that the offices where overturned furniture was observed had not yet been cleaned, we do not know when particular offices were cleaned on January 20, the time that new occupants entered these offices, or who else may have been in these offices on January 19 and 20. The cleaning crew leader for the EEOB floor where overturned furniture was observed said that the cleaning began at 6:45 a.m. on January 20. In comment 60, the White House said that it did not recall anyone complaining about missing keys, which would not be considered damage, vandalism, or pranks. Rather, the White House said, the observations pertained to keys that may have been purposefully broken off in the locks or drawers locked intentionally and keys taken or discarded. However, an employee told us that, when he started working in the EEOB on January 20, his desk drawers were locked with no keys available to unlock them and that the movers helped him open the drawers. Other EOP staff told us about broken-off or damaged keys in cabinets. In comment 68, the White House took issue with how we had characterized two employees’ statements about whether they believed the damaged furniture they observed was intentionally damaged. In the first instance, the White House said that an employee said that while it was possible that legs on a chair were broken through wear and tear, she thought it was unlikely that a broken chair would be kept in an office in that condition. Our interview record regarding this employee indicated she said that the chair legs could have been broken because of wear and tear and that the damage was not necessarily done intentionally in January 2001.
In addition, the White House said that we had not included additional statements made by EOP staff who said that the damage, previously discussed in this section, appeared intentional. The White House said an employee told us that her desk drawers were clearly damaged intentionally, and not just by wear and tear, and another employee said that a broken key in the file cabinet looked deliberate. In the first example, according to our interview record, this employee did not say how the desk drawers were damaged. In the second example, the employee said the key looked like it had been broken intentionally, but he did not know if it was. We also note that other people, whom the White House did not cite, said they did not believe that broken furniture was intentionally damaged. For example, the management office director told us that during the first 2 weeks of the Bush administration, she saw a building (the EEOB) filled with furniture that had exceeded its useful life and that a lot of furniture had to be taken out of offices. She said the problems with furniture that she saw, such as broken pieces, were the result of wear and tear and neglect, and not the result of something that she thought was intentional. In comment 28, the White House said that it is unlikely that Clinton administration staff worked for long without having “W” keys on their keyboards, which suggested that the vandalism occurred shortly before the inauguration. We agree. In comments 42 and 48, the White House said that we failed to report sufficient detail about an EOP employee who observed a volunteer remove an office sign from a wall in the EEOB. According to the White House, when we reported that an employee said she saw a volunteer remove an office sign outside an office, that the person who removed the sign said that he planned to take a photograph with it, and that the volunteer tried to put the sign back on the wall, it implied that the person intended all along to put the sign back. The White House believes that only when the volunteer was confronted by the EOP employee did he claim that he planned to take a photograph with it, try to put the sign back, and ultimately not take it. Further, the White House said that the employee did not believe that the volunteer intended all along to return the sign, as our statement suggested. However, our record of interview did not indicate that this employee told us what she believed the volunteer intended to do with the sign. We also did not know whether this individual planned to take the office sign. We were not provided with the volunteer’s name and thus were unable to contact him. Further, we did not speculate, as the White House did, about whether it was only after having been confronted by an employee that he claimed that he wanted to take a photograph with the sign and tried to put it back on the wall. In comment 48, the White House also said that we failed to mention that an EOP employee said that a former Clinton administration employee told her that he saw that the office sign was missing at some point during the night of January 19. We did not report this statement because it was information relayed to us from a third party. Further, when we interviewed this former Clinton administration employee, he did not say that he observed a sign missing from outside this office.
In comments 68 and 87, the White House said that we had failed to report a statement made by an employee who said that the repairman who fixed the copy machine found a pornographic or inappropriate message when he pulled out the copier’s paper drawer, and that the repairman thought the paper drawers had been intentionally realigned so that the paper supply would jam. We did not include the repairman’s statement because it was information relayed to us from a third party. The White House said in comment 105 that graffiti observed in a men’s restroom was vulgar, in addition to being derogatory to the president, and was plainly intentional. Given its content, the White House said that we could conclude that it was written shortly before the transition. We agree. Similarly, the White House said that writing observed on an office wall that said something like “Republicans, don’t get comfortable, we’ll be back,” while not profane in nature, also would indicate that it was written shortly before the transition and by a former Clinton administration employee. We agree. As previously mentioned, the report already concluded that written messages were done intentionally. In comments 4 and 11, the White House also said that if the report included a statement by former Clinton administration staff that the amount of trash was “what could be expected,” it should also include the statements of longtime staff members who said the opposite. This statement was also part of a summary paragraph, and additional comments regarding the trash that was observed, including comments made by other staff with different views, were provided in appendix I. In comment 5, the White House said that, when we reported that some former Clinton administration staff said that some of the observations were false, it was disappointed that they would make such a reckless statement. According to the White House, the statement is neither based on nor supported by a single shred of evidence. Further, the White House said that self-serving accusations like this illustrate why it was important for us to provide the reader with many of the details that we had omitted. For example, the White House said, if the reader is told that a particular observation was made by a staff member who worked in the complex for many years, including during the Clinton administration, or that the damage was found in a location where others observed a lot of other damage, then the reader can determine for himself the credibility of the observation. The statement referenced above was included in part of a summary paragraph, and many additional details regarding the observations are provided throughout the report. Further, we did not make judgments about the credibility of the observations when current and former EOP staff had different explanations and recollections. Regarding the White House’s request that we indicate when observations were made by EOP staff who had worked in the White House complex for many years because it would help the reader determine the credibility of the observation, we did not do this because we generally did not have a basis to conclude that EOP staff we interviewed who had worked in the White House complex for many years were more credible than staff who arrived with the Bush administration. On the one hand, one would not necessarily expect Bush administration staff to have positive views of the Clinton administration. On the other hand, longtime EOP staff could have strong views on various administrations.
Many of them work at the pleasure of the president, and the associate counsel to the president participated in all of the interviews with EOP staff. We did not speculate about what influence these factors may have had on the people we interviewed. For example, one individual we interviewed who had worked for the EOP under several administrations expressed, during our interview, considerable disagreement with the Clinton administration’s handling of a matter related to his area of responsibility. Although we do not know the extent to which, if any, the individual’s views regarding the Clinton administration influenced his conveyance of observations to us, we reported his observations in the same manner as those of incoming Bush administration staff we interviewed. In comment 49, the White House questioned a comment made by the former director of an office where two pairs of doorknobs were observed missing, who said that the office had several doors to the hallway that at some time had been made inoperable and that he was not sure whether the interior sides of those doors had doorknobs. According to the White House, even if it were true that the doorknob on the interior side of the door was missing, that fact would not explain the observation that the door was missing both an interior and exterior doorknob. We only reported what the former director told us and were not suggesting that his comment fully explained the observation. In comment 70, the White House noted that, regarding the statement by the former manager of an office where at least six pieces of broken furniture were observed, he provided comments on only two broken chairs (that the arms had become detached a year or two before the transition, that carpenters tried to glue them back, but the glue did not hold). According to the White House, the additional reports of damaged furniture as well as other damage found in the office suite undermine the former manager’s innocent explanation for the two chairs. In addition, the White House said that because we were unwilling to specify the locations where damage was found and had not reported more details, readers are unable to assess for themselves the credibility of the former manager’s explanation. The former manager’s explanation regarding these two chairs appeared to be plausible because, as we reported, we found two GSA facility requests made by him in 1999 requesting that chairs in that office be repaired. We only reported the comments and explanations that former Clinton administration staff provided on observations made in their respective offices, and did not note, for example, that this former office manager did not comment on the other pieces of broken furniture. Similarly, throughout the report, when we cited an observation made by an EOP employee, we did not point out what that person did not see, even in cases where other people made additional observations in that same location. Further, our record of this interview indicates that the employee who observed the other pieces of broken furniture told us she saw four chairs that had been placed in the hall and that she believed the damage could have occurred due to normal wear and tear and that the chairs were not necessarily broken in January 2001. In comment 71, the White House questioned the comments of three former staff who had worked in an office where staff told us they found glue or a sticky substance on desks; the three former staff said they were not aware of glue being left on desks.
One of those former employees also said that her desk was missing handles when she started working at that desk in 1998, and it was still missing them at the end of the administration. The White House said that these statements are inconsistent with the statement of an employee who said that a handle was found inside the desk with more of the oily glue-like substance on top of it. The White House also said that the reader is unable to evaluate the credibility of the comments made by the former staff because the report does not say where these desks were located and that various other damage and pranks were found in the same location. We do not believe the additional details that the White House cited about these observations, which we did not report, would have allowed readers to more fully evaluate the credibility of the statements made by the former Clinton administration staff. One reason is that incidents could have taken place in this location after the former Clinton administration staff we interviewed had left, which they said was between midnight on January 19 and 4:30 a.m. on January 20. Our record of the interview with the employee whom the White House indicated observed a desk handle inside a desk with more of the glue-like substance on top of it did not contain the level of detail that the White House provided in its comments. Our interview record indicated that she observed a desk drawer that had a handle removed and glue that was placed on the bottom of a drawer. Further, as indicated in our discussion regarding comment 28, although all three staff told us they observed the glue-like substance and prank signs in this area, none of them said they saw vulgar words written on a white board. One of the employees said that her staff told her that they had seen vulgar words written on a white board there, but we did not interview anyone who personally saw that, and we did not report information relayed to us from a third party. In comment 73, the White House said that if we included detailed comments made by former Clinton administration staff about overturned furniture, we should explain that two of the individuals who observed the overturned furniture have worked in the White House complex for 30 and 32 years, respectively, and that they both observed overturned furniture between approximately 1:00 a.m. and 5:00 a.m. on January 20. Likewise, the White House noted, the director of GSA’s White House service center, who served during the Clinton administration, reported seeing overturned furniture. In addition, the White House said that we should report that two other staff said they observed overturned furniture at approximately 12:15 p.m. on January 20. To address the White House’s comments 73 and 36, we added a range of time during which these officials said they observed overturned furniture. However, we did not add, as the White House suggested, that two of the people who observed overturned furniture had worked in the White House for more than 30 years because, except in appendix II, when we discussed observations regarding past transitions, we did not report how long other people who made observations had worked in the White House complex. In comment 77, the White House said that we did not report the number of offices in which telephones were observed unplugged or piled up. In addition, the White House said we did not report that the telephone service director was one of the staff who observed telephones that were unplugged or piled up.
According to the White House, his observation is particularly noteworthy because he had more than 30 years of experience managing telephone services in the White House complex. Further, the White House said that because the telephone service director observed the unplugged telephones on January 19 and during the early morning of January 20, it is clear that the telephones were not unplugged by the telephone service personnel or by the cleaning staff, who had not yet entered these rooms. Moreover, the White House said that this information is particularly important because of comments provided by former Clinton administration staff who worked in offices where telephones were observed unplugged or piled up. (One of those former staff said that no one in that office unplugged them, and another employee said that there were extra telephones in that office that did not work and had never been discarded.) The White House said that because we had not mentioned that there were observations of unplugged and piled telephones in 25 or more offices, the reader does not know that the comments of the former Clinton administration staff, even if true, explain what happened in only 2 of 25 or more offices. Thus, according to the White House, the reader has no basis for placing the comments of the former staff in context, nor for understanding that the former staff apparently have no explanation for the remaining observations. We addressed the issue regarding the number of offices in which telephones were observed unplugged or piled up in our response to comment 25 in the section of this appendix pertaining to reporting the number of observations. Regarding the White House’s comment about the noteworthiness of the telephone service director’s observations, we added to the report that he was one of the staff who made these observations. However, we do not agree that because he made these observations on January 19 and the early morning of January 20, it is clear that the telephones were not unplugged by telephone services personnel or by cleaning staff who had not yet entered these rooms. Although the cleaning crew for the transition started on January 20, according to GSA, cleaning in these offices is continuous. Further, we did not have information regarding when telephone service or other personnel had been in these offices before the transition. Regarding the White House’s assertion that we had deprived readers of information that would place the comments of former Clinton administration staff in context, or help readers understand that the former staff apparently had no explanation for the remaining observations, as previously noted, we did not obtain comments from former Clinton administration staff regarding every observation. Moreover, the fact that certain former Clinton administration staff had no explanations for certain observations does not necessarily mean that they were responsible. In comment 83, the White House said that we should have reported additional statements made by EOP staff that would counter a statement made by the former senior advisor for presidential transition who said that it would have been technically possible to erase voice mail greetings for most departing staff without also deleting greetings for staff who did not leave at the end of the administration. The White House said that, to present a fair and balanced report, we should have explained that two OA staff, who served during the Clinton administration, disagree with the former senior advisor’s statement. 
According to the White House, they included the OA associate director for facilities management, who worked closely with the former senior advisor and told us that a proposal to delete all voice mail greetings at the end of the Clinton administration was discussed, but they decided not to do it because it would have erased the greetings of all staff, including the 1,700 staff who were not vacating the building. In addition, the White House noted that the OA associate director for facilities management said that it was his decision not to proceed with the proposal, although the former Office of Management and Administration staff, including the former senior advisor, were aware of the decision. Further, the White House said, the OA telephone services coordinator told us that, until November 2001, the EOP’s telephone system did not have the capability to erase voice mails all at once. According to the White House, she explained that it was not until November 2001 that the EOP had purchased the software and had performed upgrades to the switch that were necessary to allow voice mails to be deleted on other than a manual basis. We believe that we provided a sufficient amount of information to reflect the views on this issue that differed from the former senior advisor’s statement. Indeed, many of the details that the White House provided in its comments were already reported. In addition to reporting statements made by the telephone service director about erasing voice mail, we reported that the OA associate director for facilities management said that he made the decision not to erase all voice mail messages and greetings at the end of the administration because doing so would have deleted voice mail for all EOP staff, including staff who did not leave at the end of the administration, and not just for the departing staff. We also reported that the OA telephone services coordinator said that voice mail greetings and messages were not removed on a systemwide basis at the end of the Clinton administration because the EOP had not yet done an equipment upgrade, which was done later. Further, we footnoted the senior advisor’s statement to indicate that contrary views on this matter were provided earlier in the report. In comment 84, the White House questioned a comment made by the former senior advisor for presidential transition who said that, regarding reports of telephones that had been forwarded, some telephones were forwarded to other numbers for business purposes at the end of the Clinton administration. He said that some of the remaining staff forwarded their calls to locations where they could be reached when no one was available to handle their calls at their former offices. The White House said that this explanation may sound plausible until one learns how and where the telephones were forwarded and cited, for example, that the chief of staff’s telephone was forwarded to a closet. Further, the White House said that, because we have not provided details such as this, the reader does not have the facts to judge the credibility of the statements made by former Clinton administration staff. As noted in our discussion regarding comments 26 and 76, our interview record with the employee who told us that the chief of staff’s telephone had been forwarded did not indicate that we were told the telephone was forwarded to a closet.
Even if our interview record did indicate this, because we did not obtain a comment from former Clinton administration staff on every observation, the former senior advisor’s statement did not necessarily address all instances of forwarded calls. In comment 93, the White House said that, although we reported that the OA director said that the offices were in pretty good shape by the evening of January 22, we had failed to include other people’s observations on how long it took to get the offices in shape and provided five examples. However, two of the five additional statements related to telephone service, not trash, and the report had included a statement by the OA associate director for facilities management regarding how long it took to complete the cleaning. We believed that reporting his statement was sufficient. In comment 98, the White House said that we should have included more statements by EOP staff who said they believed that offices were intentionally or deliberately trashed because we had reported that none of the 67 former Clinton administration staff we interviewed who worked in the White House complex at the end of the administration said that trash was left intentionally as a prank or act of vandalism. The White House said, for example, that we should have reported the observations of a National Security Council (NSC) employee, who said the NSC office was deliberately made to look like someone was communicating a message; the OA director, who said that it looked like there were a large number of people who deliberately trashed the place; and the chief of staff to the president, who said the conditions he observed were more than wear and tear. The White House said that if we had included these statements, it is more likely that the conclusion that these people reached, that what they observed was intentional, is correct. We had already reported the views of the OA associate director for facilities management and a management office employee who said they observed some trash that appeared to have been left intentionally, as well as the observations of other EOP staff who used words such as “extremely filthy” or “trashed out” to describe the conditions they had observed, and that office space contained a “malodorous stench” or looked like there had been a party. We had also reported observations such as the contents of desk drawers or filing cabinets having been dumped on floors, which were likely to have been done intentionally, but we did not know by whom. However, to address the White House’s comments, we added the statements of two other staff cited in its comments. In comments 101 and 103, the White House said that we should have reported how many cleaning staff were on duty and the number of hours they worked. According to the White House, without that information, the reader has no basis for evaluating (1) comments made by a former Clinton administration employee who worked in an administrative office and said that she did not observe much cleaning of offices before January 20, and that she believed GSA did not have enough supervisors and decision makers to oversee the cleaning; and (2) a statement contained in a letter to us from the former senior advisor for presidential transition and the former deputy assistant to the president for management and administration who said they did not observe any cleaning crews during the evening of January 19 or the morning of January 20.
However, we did report the number of GSA and contract staff who cleaned the EEOB during the weekend of January 20 and 21, 2001; when the cleaning began on January 20; the observations of the crew leaders; and the number of hours that the cleaning crew leaders worked on January 20. We believe that this was a sufficient amount of information to report about the cleaning effort. We also reported that, according to the OA associate director for facilities management, maybe 20 offices were vacant before January 20, and that it took 3 or 4 days after January 20 to complete the cleaning. We attempted to evaluate how many former Clinton administration staff left on January 19 and 20, 2001, which would have helped to determine when the cleaning could have begun. We were provided data indicating when building passes were terminated for EOP staff at the end of the administration, but the White House also informed us that the data were unreliable. We asked the White House to arrange a meeting with an appropriate official to discuss the pass data, but this was not done. In comment 102, the White House questioned why we included a comment made by the former administrative head of an office who said that he asked 25 professional staff to help clean the office before he left. The White House said this comment was irrelevant because no one alleged that this particular office was left dirty, and that we had misled the reader by including it in the report because we did not explain that it does not rebut or relate to any observation. In contacting former Clinton administration staff, we not only sought any explanations they had regarding the observations, but also asked for their observations regarding the condition of the White House complex during the transition. In this case, although it did not rebut a specific observation about his former office, the former official explained the condition of his office at the end of the administration. (He also said that the EEOB and the West Wing were “filthy” at the end of the administration, but that he did not believe that trash was left as an act of vandalism.) However, for the purposes of clarification, we added to the report that no one told us that this office was dirty. In comment 104, the White House said that a statement by a former office manager did not rebut an observation in which an EOP employee said it appeared that a pencil sharpener had been thrown against the wall and that pencil shavings were on the floor. The former office manager said that a pencil sharpener in that office did not work and may have been placed on the floor with other items to be removed. The White House noted that an employee told us that two pencil sharpeners were found broken and on the floor with shavings. In addition, the White House noted, with respect to one of the two pencil sharpeners, there was a distinct mark on the wall where the pencil sharpener had struck. We recognize that the former manager’s comments did not address both pencil sharpeners and the mark on the wall, but they could explain why a pencil sharpener was found on the floor. We only reported what he told us in response to the observation. In comment 109, the White House noted that the content of the message written inside a desk that was dated January 1993 was neither profane nor disparaging of the incoming president or his administration.
The report did not indicate that it was, and we did not describe the specific content of similar messages that were found during the 2001 transition, so we did not revise the report. In comment 117, the White House said that the descriptions provided by former Clinton administration staff regarding the condition of the White House office space during the 1993 transition in the report contain more detail than the descriptions provided regarding the 2001 transition. We do not believe that the descriptions provided regarding the 1993 transition are more detailed than those provided regarding the 2001 transition. Further, in addressing comment 98, we added the statements of two additional staff who had provided detailed descriptions of the condition of the office space during the 2001 transition. In comments 33 and 110, the White House said we failed to report the statements of several staff members who said that the damage was worse in 2001 than during previous transitions. Comment 33 pertained to the letter portion of the report, where we summarized the information provided in appendix II. To address the White House’s comments, we added in appendix II the statement of another official who said that the condition of the White House complex was worse in 2001 than in previous transitions. We also note that our records of many of those interviews, as well as the quotes the White House provided in its comments, do not necessarily indicate that the staff were referring to damage observed; rather, they may have been referring to trash. The White House also said in comment 118 that, while pranks and damage may have been observed in prior administrations, the reported observations are not the same in number or kind as those observed during the 2001 transition, and we failed to mention this in the report, which prevents the reader from drawing his or her own conclusion. In addition, the White House said that we seem to overstate the extent of damage reported during previous transitions and did not quantify the number of incidents observed. However, we clearly indicated that only a limited number of people were available to comment on previous transitions. Further, we lacked definitive data that would allow us to compare the extent of damage, vandalism, and pranks during the 2001 transition to past ones, such as records of office inspections. Moreover, although fewer in number, many of the observations that were made regarding previous transitions were of the same kind as those observed during the 2001 transition, such as missing office signs and doorknobs, a message written inside a desk, prank signs and messages, piles of furniture and equipment, and excessive trash. In addition, observations regarding the 1993 transition included messages carved into desks, which were not observed during the 2001 transition. One significant difference between the 2001 and earlier transitions is that no one reported observing keyboards with missing or damaged keys during previous transitions.
In comment 33, the White House said that, when we reported that piles of equipment were observed (by only one person), we failed to explain that the telephone service director said that he never encountered any problems with the telephones during the 1993 transition, that perhaps some telephones were unplugged, but “that would be it.” According to our interview record, this official also said that every transition has some pranks and said that unplugging telephones is a “standard prank.” Further, in comment 115, the White House attributed observations of piles of telephones during the 1993 transition to a statement made by the telephone service director who said that he was instructed to get rid of the “Republican phone system,” which the White House said apparently resulted in the replacement of all telephones. However, our scope of work did not include reviewing the installation of a new telephone system in the White House complex around the time of the 1993 transition to determine if it could relate to the piles of telephones that were observed at that time. Also in comment 33, regarding a statement in the draft report that observations from previous transitions included missing building fixtures such as office signs and doorknobs, the White House said that no building fixtures other than office signs and doorknobs were observed missing. Accordingly, we revised the report to indicate that office signs and doorknobs were the only building fixtures reported missing during previous transitions. The White House also said, regarding a statement that messages were carved into desks, that it is aware of only one observation of a message written inside a desk, which, the White House noted, we repeated in the following sentence of the report. Further, the White House said, there were only three observations of carvings in desks used by staff who served only during the Clinton administration. The observations of three messages carved into desks were made by former Clinton administration staff, as reported in appendix II. The discussion regarding previous transitions contained in the letter portion of the report combined the observations by current EOP staff and former Clinton administration staff. We mentioned the writing that was seen inside a desk because we observed it, and it contained a date indicating when it was written. Further, we do not understand why the White House noted that there were only three observations of carvings in desks by people who served “only” during the Clinton administration. Many of the observations that were reported regarding the 2001 transition were by staff who served only during the Bush administration. In comment 111, the White House said that we failed to mention that the director of GSA’s White House service center had observed only two transitions (1989 and 2001), and that he only heard that doorknobs were missing during the 1989 transition, but did not observe them himself. Accordingly, we deleted his statement that doorknobs are favorite souvenirs of departing staff. Also in comment 111, the White House said that the telephone service director did not say that office signs were missing in previous transitions, but only during one prior transition. According to the White House, he said that when the Carter administration left office, door signs were missing and cords were unplugged.
According to our interview record, this official told us that, during previous transitions, telephone cords were unplugged and some door signs were missing. He told us that some problems were found when Carter administration staff left, although he could not recall any specific examples. In comment 112, the White House noted that the director of GSA’s White House service center said that he observed little in the way of damage, vandalism, or pranks during the 2001 transition, so when he said the condition of the office space during the 2001 transition was the same as what he observed during the 1989 transition, this means that he claims not to have observed much in either transition. For the purposes of clarification, we added that he said that he observed little during the 2001 transition in terms of damage, vandalism, or pranks. In comment 113, the White House said that what the GSA acting administrator said in his March 2, 2001, letter may be misleading because he referred only to real property and not to the telephones, computers, furniture, office signs, etc., that were the focus of the damage, vandalism, and pranks that occurred during the 2001 transition. Some of the observations made by EOP staff, such as holes in walls and missing paint on walls, did relate to real property. To address the White House’s comment, we added a definition of real property. In comment 116, the White House noted that we included a statement by a former Clinton administration employee who said that the damage that was observed in the 1993 transition was intentional, but did not include similar statements made by EOP staff about the 2001 transition. As noted in our discussion regarding comment 68, we included the statements of some individuals who told us they believed the incidents they observed were done intentionally and some individuals who told us they did not believe what they observed was done intentionally. However, we did not include all statements made by all individuals about views on whether things were done intentionally. In any event, without having observed the incidents being carried out, people’s views on whether incidents were intentional or not were speculative. In comment 118, the White House objected to a statement in the report that, according to the March 1981 issue of the Washingtonian magazine, incoming Reagan administration staff had some complaints about the condition of the EEOB that were similar to observations made by EOP staff in 2001. The White House said that the allegations are “hardly” similar to what was found in the 2001 transition and, by analogizing the circumstances, we trivialized what was observed in 2001. Although the Washingtonian certainly did not cite as many observations regarding the 1981 transition, the types of observations were indeed similar, such as memoranda taped to walls, pieces of damaged and dirty furniture, and a dirty refrigerator. Further, according to the Washingtonian, a visitor to the EEOB in 1981 described the building as being “trashed,” which is the same word used by some EOP staff to describe its condition during the 2001 transition. In comment 2, the White House said that we misidentified the units that comprise the EOP and incorrectly referred to EOP units as agencies. 
We addressed this comment in our discussion of the White House’s general comment regarding use of the term “EOP.” In comment 21, the White House said that the report should have identified the name of the office where the cellular telephones could not be located. We did not identify the names of offices in the report unless they were relevant to the observation or comment. We had no reason to identify the name of this office, nor did the White House explain why we should have. Also in comment 21, the White House said the report suggested that we had interviewed all former employees of the Office of the Vice President, and that all former staff from that office said they did not take the missing cellular telephones, which is not true. Accordingly, we clarified the report to indicate that the former occupants whom we interviewed of the offices where items were observed missing said that they did not take them. In comment 34, the White House said that it had repeatedly told us that some current EOP staff who also worked during the Clinton administration believe that check-out procedures were often not followed at the end of the administration, and that building passes in particular were not turned in. However, as indicated in appendix III, we did not review whether these check-out procedures were followed because it was not within the scope of our review. Further, this information was provided to us orally by an associate counsel to the president, not directly by any EOP staff with responsibilities in this area. Moreover, we referred to a check-out procedure in appendix III as a means of indicating that it did not include an office inspection. In comment 39, the White House disagreed with the statement that, in the overwhelming majority of cases, one person said that he or she observed an incident in a particular location. According to the White House, in many, if not most, cases, more than one person reported the same incident in the same location. We concluded from a careful review of all of the observations that, although generally more than one person observed the same types of incidents, in the overwhelming majority of cases, only one person said that he or she observed an incident in a particular location. In comment 40, the White House disagreed with a statement in the report that, in some cases, people said that they observed damage, vandalism, and pranks in the same areas where others said they observed none. The White House said that, without a specific description of the instances where one current staff member recalled seeing something and another expressly disavowed seeing the same thing, it was impossible to know whether the apparent conflict in testimony could be reconciled or whether our statement was factually accurate. The White House also said that the vague statement provided no indication of how many conflicts existed or what types of incidents were involved. Further, the White House cited two examples that it said we had indicated the sentence referred to, and said the observations and circumstances indicated in those examples were not instances of a direct conflict where one person said he or she observed damage in a location where others observed none.
In the examples the White House said we had referred to, the White House excluded the statements made by former Clinton administration staff and a National Archives and Records Administration (NARA) official who were working in the EEOB in the late morning of January 20. In those comments, people said they did not observe damage, vandalism, and pranks in the late morning of January 20 in the same rooms where others said they had observed them later that afternoon. For example, two former occupants of an office where furniture was observed overturned in the afternoon of January 20 said they left between 10:00 a.m. and 11:55 a.m. that day and did not observe any overturned furniture. In another situation, the former senior advisor for presidential transition said that when he was in a certain office after 11:00 a.m. on January 20, he did not see a broken glass top smashed on a floor or files dumped on a floor, which were observed there during the afternoon of January 20. Further, as noted in the report, a NARA official said that, although she did not remember the specific rooms she went to during the morning of January 20, she went to various offices in the EEOB with the former senior advisor for presidential transition around 11:00 a.m. that day and did not see any evidence of damage, vandalism, or pranks. In reporting the comments of former Clinton administration staff regarding these situations, we clarified when the EOP staff made the observations. In comment 94, the White House said that we did not accurately quote what the OA associate director for facilities management told us about cleaning. We had reported that he said that “about 20” offices were vacant before January 20 and that it took 3 or 4 days after January 20 to complete the cleaning. However, the White House said that this official actually said that there was “some list of offices that could have been cleaned before the 20th,” and that the list was given to the director of GSA’s White House service center, and that there were “not a lot of offices on the list”—“maybe 20.” Although we were not directly quoting this official when we reported that he said “about 20” offices were on the list, our interview record agreed with the White House’s comments that he said there were “not a lot” of offices on the list and that “maybe 20” were on it, and we revised the report accordingly. The White House also indicated that this official said that it took “3 to 5 days” to complete “just the cleaning.” However, our record indicated that he said that it took 3 or 4 days after January 20 to complete the cleaning, and we did not revise the report in that regard. In comment 96, the White House said that it believed we had misquoted the OA associate director for facilities management when we indicated he said that it would have taken an “astronomical” amount of resources to have cleaned all of the offices by Monday, January 22. Rather, the White House indicated that he said that they could not have had enough people to clean it by January 22 because the offices were dirtier than in past transitions. The White House also noted that the official said that, in response to a question about whether it was legitimate to think people could start working in the complex on Sunday, January 21, he replied that, yes, in his opinion, people should leave their offices in an orderly fashion.
We checked our record of interview with this official and believe that we accurately reported his comments, and we also believe that they are substantially the same as what the White House indicated in this comment. For example, we had reported that this official said that there was more to clean during the 2001 transition than during previous ones and provided the reasons why; he said that, in his opinion, departing staff should have left their offices in a condition so that only vacuuming and dusting would have been needed. Thus, we did not believe that any revisions were needed to the report regarding this comment. In comment 107, the White House said that it was not accurate for us to indicate that the statement that trucks were needed to recover new and usable supplies generally was not corroborated. According to the White House, the associate director for the general services division told us that because the excess supplies had been dumped in the basement hall and were piling up down there, leaving much of it unusable, he instructed his staff to take the supplies to the off-site warehouse where the staff could re-sort the supplies and salvage what was reusable. The White House also noted that eight truckloads were needed to recover these new and usable supplies from the basement, and had these trucks not been dispatched, all of the supplies, instead of just a portion, would have been rendered unusable; therefore, the statement was corroborated. However, when we interviewed this official, he said that the statement contained in the June 2001 list that six to eight 14-foot trucks were needed to recover new and usable supplies that had been thrown away “bothered” him. He said that nothing usable was thrown away intentionally. Further, although trucks were reportedly used to transport supplies from the EEOB to the warehouse so that they could be sorted and to salvage what could be used, as indicated in the report, the former senior advisor for presidential transition said that the supplies were brought to the basement of the EEOB so that staff could obtain them from there, rather than obtaining them from the supply center. Therefore, we could not corroborate the portion of the statement in the June 2001 list that supplies had been “thrown away.” In comment 120, the White House said that we failed to report two of the factors that OA officials, who have been through many transitions, identified as contributing to the problems found in the 2001 transition. First, the telephone service director said that he felt hampered in doing his job because he was not allowed to have any contact with the incoming administration. According to the White House, he indicated that, in the past, he was allowed to confer with incoming staff regarding their telephone needs and expectations, but this was not permitted during the 2001 transition. Likewise, the White House said, the OA director said that this transition was unusual because, for other transitions, there was a transition team from the new administration on-site in the complex but, during the 2001 transition, the incoming administration did not get access to the space until 3 days before the inauguration and did not get “legacy books” (books that explain how things work within the complex and within particular offices) until after the inauguration. We did not evaluate the transition coordination issues that the White House raised in this comment because they were outside the scope of our review.
However, former Clinton administration staff did provide some related information. The former senior advisor for presidential transition said that some Bush administration staff were given walk-throughs of offices in the weeks before January 20, that officials from the president-elect’s staff attended several meetings before January 20, and that each office was instructed to prepare briefing books for the incoming Bush staff. Further, the deputy assistant to the president for management and administration said the president-elect’s staff were involved in planning the transition and had an unprecedented level of access. Because we did not evaluate these issues, we are not in a position to comment on them. Also in comment 120, the White House said that a number of longtime employees, such as the OA associate director for facilities management, told us that problems could have been averted or remedied if former Clinton administration staff had vacated their offices earlier. The White House noted that this official said he observed a woman watching television in her office on January 20 and turning it off and leaving precisely at noon. Further, the White House said that 325 passes of White House Office employees were terminated on January 19 and 20, 2001. As indicated in our discussion regarding comments 101 and 103, we attempted to evaluate how many former Clinton administration staff left on January 19 and 20, 2001, which would have helped to determine when the cleaning could have begun. As previously noted, we were provided data indicating when building passes were terminated for EOP staff at the end of the administration, but the White House also informed us that the data were unreliable. We had asked the White House to arrange a meeting with an appropriate official to discuss the pass data, but this was not done. We revised the report, as appropriate, to address the White House’s comments 1, 3, 7, 37, 50, 52, 53, 63, 85, 86, 88, 89, 95, 114, and 119. | Damage, theft, vandalism, and pranks occurred in the White House complex during the 2001 presidential transition.
Several Executive Office of the President (EOP) staff claim that they observed (1) messy offices containing excessive trash or personal items, (2) numerous prank signs containing derogatory and offensive statements about the president, (3) government property that was damaged, and (4) missing items. Further, EOP staff believed that what they observed during the transition was done intentionally. Some former Clinton administration staff acknowledged that they observed some damaged items and prank signs. However, the former Clinton administration staff said that (1) the amount of trash found during the transition was what could be expected; (2) they did not take the missing items; (3) some furniture was unintentionally broken before the transition, and little money was spent on repairs and upkeep during the administration; and (4) many of the reported observations were not of vandalism. This report makes several recommendations regarding the prevention and documentation of vandalism during future presidential transitions. |
In 1989, the Army recognized that it needed to replace some of its aging air defense systems, including the Homing All-the-Way to Kill (HAWK) missile. The Army wanted the HAWK’s replacement to be rapidly deployable, capable against weapons of mass destruction, and able to defeat a wide range of targets. The Under Secretary of Defense for Acquisition and Technology approved concept exploration for a new surface-to-air missile but stated that the Army needed a draft agreement for allied participation before system development would be approved. The Army was successful in finding U.S. allies that were interested in jointly acquiring a new air and missile defense system. In February 1994, the United States officially invited Germany to participate in the system’s development and production. Because of Germany’s desire to make the program a U.S.-European cooperative initiative, the program was subsequently expanded to include France and then Italy. Representatives of the four countries signed a multilateral statement of intent in February 1995 to collaborate in the development of a system capable of meeting the requirements of all four countries. The effort became known as the MEADS program. Before DOD allows a military service to negotiate for the acquisition of a weapon system in cooperation with another country, DOD generally requires the program’s sponsor to assess the likely impact of the proposed program by developing a summary statement of intent. The statement should include information on the benefits of an international program to the United States, potential industrial base impacts, funding availability and requirements, information security issues, and the technologies that will likely be involved in the program. Various officials within the Office of the Secretary of Defense are responsible for reviewing the statement of intent and recommending whether an international agreement should be negotiated. Because of budget problems, France dropped out of the MEADS program before the memorandum of understanding was signed in May 1996. The other nations proceeded with the project definition and validation phase. The countries agreed that, during this phase, the U.S. cost share would be 60 percent; Germany, 25 percent; and Italy, 15 percent. According to the memorandum of understanding, new agreements would be negotiated before initiating other phases of the program, cost share percentages could change, and any of the countries could drop out of the program at the start of any new program phase. MEADS, as envisioned by the Army, is part of the lower tier of a two-tier umbrella of air and missile defense. The Theater High Altitude Area Defense (THAAD) and Navy Theater Wide systems are upper tier systems that provide protection primarily against theater ballistic missiles. Existing and planned lower tier systems, such as the Patriot Advanced Capability 3 (PAC-3) and Navy Area systems, will engage shorter-range theater ballistic missiles, fixed- and rotary-wing aircraft, unmanned aerial vehicles, and cruise missiles. The Ballistic Missile Defense Organization (BMDO) has responsibility for the MEADS program. DOD believes the MEADS program represents a new and innovative approach to the acquisition process. If the program is successful, DOD expects that MEADS will be a model for future collaborative efforts because it addresses problem areas associated with past transatlantic cooperative endeavors.
The program reflects the mission needs of all countries, involves technologies from all participants, and requires competition between two transatlantic contractor teams. MEADS is being designed to add capabilities to the battlefield that currently fielded and planned air and missile defense systems do not provide. It will be more mobile than current systems, counter a wider range of targets, and intercept incoming missiles from any direction. Because of its unique capabilities, warfighting commands with theater ballistic missile defense missions support MEADS. The Army plans to use MEADS to protect important access points on the battlefield, troop forward area assembly points, and maneuver force assets (such as refueling points and stores of ammunition) that must travel with troops as they move toward the enemy. To move with the maneuver force, MEADS must transition from defensive operations to a traveling configuration and return to defensive operations quickly. Similar to the maneuver force, MEADS must also be able to travel over unimproved roads and cross country. In addition, the Army wants to be able to move MEADS within theater aboard small transport aircraft, such as the C-130. Combatant commanders control the use of C-130s and can use them to move MEADS as necessary. MEADS must be able to defend against a wide range of targets. It must counter short-range, high-velocity theater ballistic missiles carrying conventional explosives or weapons of mass destruction. The system is also required to detect and destroy low- and high-altitude cruise missiles launched from land, sea, or air platforms and carrying various types of offensive weapons. MEADS is required to counter remotely piloted vehicles and unmanned aerial vehicles carrying observation equipment or weapons and defend against slow, low-flying rotary-wing aircraft and maneuvering fixed-wing aircraft employed in a variety of missions. MEADS is expected to be the only land-based theater missile defense system designed to defend against targets approaching from any direction. The system will counter slow and low-flying cruise missiles that take advantage of terrain features to mask their approach and attack from virtually any direction. No other existing or planned air and missile defense system meets all of the MEADS requirements. The Patriot system cannot keep pace with the maneuver force because it takes too long to assemble and disassemble for movement, and it cannot travel cross country. Also, Patriot was not designed to provide protection from all directions, and will require more aircraft to reach a theater of operation because of the system’s size. Even though the Army plans to use large transport aircraft, such as the C-141, C-17, or C-5, to transport both Patriot and MEADS to a conflict, MEADS requires fewer aircraft. For example, the Army will need 77 C-5 aircraft sorties to transport 1 Patriot battalion but only 36 sorties to transport 1 MEADS battalion. In addition, Patriot can only be transported within theaters of operation aboard the larger transport aircraft. The ability of other systems to meet MEADS requirements is also limited. The Navy Area system may not be capable of protecting the maneuver force because its defended area will be limited by the distance from which it must stand offshore and the range of its interceptor.
The THAAD and Navy Theater Wide systems are being designed to engage primarily medium-range ballistic missiles but cannot defend against theater ballistic missiles launched from very short ranges, aircraft, or low-altitude cruise missiles. Table 1 shows the capabilities of existing and planned air and missile defense systems in meeting MEADS requirements. Combatant commanders whose forces are most vulnerable to theater ballistic missile attacks identify MEADS as a priority system. Each year the Commander in Chief of each unified combatant command lists, in order of importance, key program shortfalls that adversely affect the capability of their forces to accomplish assigned duties. All commanders with a theater missile defense mission—the U.S. Central, European, and Pacific Commands—believe that a shortfall exists in their ability to perform this mission. Each of these commanders either lists MEADS as a system needed to correct the shortfall or, according to command officials, considers MEADS a high priority. A U.S. Central Command official said that, although the Commander in Chief considers MEADS a high priority, he does not want to acquire that system at the expense of other theater missile systems. The official said that PAC-3, THAAD, and Navy Area systems are expected to be fielded sooner than MEADS and that the Commander does not want those systems delayed. BMDO will be unable to acquire MEADS without impacting higher priority missile defense programs unless DOD or the Army provides additional funds. BMDO’s budget plan does not include funding for MEADS after fiscal year 1999 because the organization’s budget is dedicated to missile systems that will be available sooner. Over the next 6 years, for which BMDO is currently budgeting, the organization needs $1.4 billion to execute the planned MEADS program. Because it has had difficulty funding MEADS, BMDO is considering various program options to find a less costly acquisition program. In March 1998, BMDO developed, in cooperation with the Army, a cost estimate for a MEADS system that would meet Army requirements. According to this estimate, the United States expects MEADS’ total design and development cost to be about $3.6 billion. The United States expects to pay about one-half of this amount, or $1.8 billion. In addition, BMDO estimates that the United States needs approximately $10.1 billion more to procure eight battalions of system hardware. BMDO is interested in MEADS’ design and development cost because it is developing budget plans for the years when many related activities are scheduled. During design and development, engineers will work out the details of MEADS’ design, perform engineering tasks that are necessary to ensure the producibility of the developmental system components, fabricate prototype equipment and software, and test and evaluate the system and the principal items necessary for its support. In addition, the contractor will fabricate and install equipment needed to produce hardware prototypes and develop training services and equipment. BMDO expects the system radars to be the most costly system components to design and develop. Army engineers said that they believe two separate radars—a surveillance radar and a fire control radar—will be required and that three prototypes of each radar are needed for adequate test and evaluation.
The fire control radar will be expensive because it contains thousands of transmit and receive modules that send and receive messages with the missile and simultaneously determine the target’s location. Engineers believe the efficiency of existing transmit and receive modules must be improved to meet the MEADS hit-to-kill requirement. The surveillance radar is expensive because, to fulfill MEADS’ mission requirements, it must accurately detect targets at long ranges. Figure 1 shows the percentage of design and development cost attributable to each of the system’s components. A BMDO official said that the March 1998 cost estimate was reduced more than $400 million because Army engineers believed that MEADS could benefit from some technology developed and paid for by other missile programs. In a March 1997 cost estimate, BMDO recognized that existing technology could benefit MEADS, and this reduced MEADS cost by about $200 million. However, contractor personnel believe that actual program savings from technology leveraging could be more than $400 million. The MEADS program would realize the largest cost reductions if existing radars or missiles could meet MEADS requirements. The use of existing components would eliminate design, prototype manufacturing, and producibility engineering costs. Army engineers said that existing missiles, such as PAC-3, might be capable against the theater ballistic missile threat that MEADS is expected to counter. However, the Patriot Project Office has not simulated PAC-3’s performance against MEADS’ entire ballistic missile threat and cannot do so without additional funds. In addition, the Army stated that PAC-3 may have limitations against the long-term cruise missile threat. Existing radars do not meet MEADS requirements. For example, Army engineers said that the THAAD system ground-based radar cannot provide protection from all directions and is much too large and heavy for a mobile system. The engineers also said that the Marine Corps TPS-59 radar, being used with the Marine Corps HAWK, takes too long to move and is much too heavy to be mobile. BMDO’s cost estimate shows that, to acquire and field MEADS as planned, it needs approximately $11.9 billion over the next 18 years. The funds are expected to pay for the U.S. share of MEADS’ estimated research and development cost and the procurement of eight battalions of equipment. BMDO needs about $1.4 billion between fiscal years 2000 and 2005 to develop a system that meets all of the Army’s requirements. BMDO has spent the last year reviewing program options that could reduce MEADS cost. However, as of April 1998, the agency had not changed its acquisition strategy. BMDO considered reducing MEADS requirements so that an existing missile could be used in the system. In addition, BMDO considered extending MEADS’ development schedule, delaying initial fielding of hardware, or relying on other radars to detect targets for MEADS. The organization also considered developing and fielding the system in two stages or designing a system that relies on a currently undeveloped tracking network to detect and engage targets. Finally, BMDO considered tasking contractors to develop a system that meets critical requirements for a limited amount of funds. The Army’s Deputy Program Executive Officer for Air and Missile Defense said that, if contractor funds are limited, some MEADS requirements might be eliminated to decrease the cost of the new system.
However, the official did not know which requirements might be eligible for elimination. The official also said that, if BMDO cannot fund the program as it is currently planned, the Army favors either fielding MEADS in two stages or limiting development funds. MEADS partners are aware that the United States is considering other options. According to German and Italian government officials, they are willing to discuss program changes. However, until the Army and BMDO agree on a specific option, DOD cannot be sure its partners will find that option acceptable. BMDO cannot provide the $1.4 billion needed for fiscal years 2000 through 2005 unless DOD (1) increases BMDO’s total obligational authority; (2) stretches out development and production of programs, such as PAC-3, THAAD, and Navy Area systems; or (3) drastically reduces BMDO funding earmarked for targets, systems integration and test, and management. BMDO’s Deputy for Program Operations said that these program changes are undesirable because they increase program cost and delay fielding of important assets. Figure 2 shows that, if BMDO included MEADS research and development funding in its planned budget for fiscal years 1999 to 2003, the agency would exceed its budget authority. The United States, Germany, and Italy are collaborating in the development and production of MEADS because each needs an improved air and missile defense system but cannot afford to acquire a system by itself. DOD also believes that international cooperation in weapon systems acquisition can strengthen political ties, create a more effective coalition force, and increase the self-sufficiency of allied nations. However, BMDO did not fully address funding or technology transfer issues before initiating the international program and may not be able to achieve these benefits. In addition, security problems that might have been avoided if security specialists had been involved in negotiation of the international agreement continue to hinder the program’s execution. Officials in all three countries said that, given their current and expected defense budgets, MEADS is affordable only if it is acquired jointly. Total design and development and production cost reductions will depend on the acquisition strategy that BMDO and its partners choose. In addition to reducing the U.S. cost to develop MEADS, combining the production quantities of the three countries will lower unit production costs and reduce the total U.S. cost, according to BMDO documents. DOD generally requires the approval of a summary statement of intent before negotiating to acquire a weapon system in cooperation with another country. The DOD directive that established BMDO, however, gives the organization the authority to negotiate agreements with foreign governments and then obtain approval of those agreements. In implementing this authority, BMDO did not finalize its summary statement of intent until after negotiations to establish the international program had begun. In addition, the assessment was not sent to reviewers at the Office of the Secretary of Defense until all negotiations were complete and agreement had been reached on the $108 million, 27-month project definition and validation phase of the MEADS program. The summary statement of intent that BMDO eventually prepared did not fully address important issues that continue to plague the MEADS program.
For example, although the multilateral statement of intent shows that the partners intended to develop and produce MEADS together, little attention was given to MEADS funding needs subsequent to project definition and design. The summary statement of intent did not address long-term funding needs by fiscal year; instead, it indicated that funding beyond fiscal year 1999 would be derived from funds budgeted to develop an advanced theater missile defense capability. However, in February 1996—about the same time that BMDO completed international agreement negotiations—a DOD review of BMDO’s mission reduced the organization’s budget and resulted in the deletion of advanced capability funds earmarked for MEADS. Because BMDO did not fully assess the availability of funding for MEADS’ future program phases, the U.S. political ties with Germany and Italy could be affected. Some U.S. and European officials suggest that the United States may be viewed as an unreliable partner if it is unable to fund MEADS. The officials said that U.S. withdrawal from the development effort could affect its ability to participate in future international programs. BMDO’s summary statement of intent did not address technology transfer issues that continue to trouble the MEADS program. Although the statement recognized that classified information developed for other missile programs would be transferred to the MEADS program, it did not address whether the programs that owned that information had concerns about its release. Also, BMDO did not address the impact that a decision to withhold critical information could have on the execution of the program. The United States has established procedures for releasing sensitive national security-related information to foreign governments and companies. These policies aim to preserve U.S. military technological advantages. Control policies limit the transfer of advanced design and manufacturing knowledge and information on system characteristics that could contribute to the development of countermeasures. Technology release policies present special challenges for the MEADS program because it involves several sensitive technologies critical to preserving the U.S. military advantage. For example, MEADS could employ electronic counter-countermeasures that offset jamming and intentional interference, signal processing techniques to enhance accuracy, and advanced surveillance techniques. The United States has been reluctant to release information about these critical technologies into the program and slow in responding to many release requests. For example, release approvals have taken as long as 259 days. Some requests made at the start of the program are still awaiting a decision because program offices have been reluctant to release the information. This reluctance, as well as the approval time, reflects the rigorous release-consideration process. Program offices in each of the services that own particular technologies perform a page-by-page review of the requested data to identify releasable and nonreleasable data. In some cases, the program controlling the data will not directly benefit from its release and will risk giving up data that could expose system vulnerabilities. These policies may limit the ability of contractors to leverage the use of existing missile system technology and pursue the cheapest technical solution.
MEADS contractors said that, when data is not released on a timely basis, they are forced to explore alternative technical approaches or propose development of a component or subcomponent that may duplicate existing systems. In some cases, the United States has approved release of technology into the program but restricted the information to U.S. access only. This restriction has undermined the functioning of integrated teams and efforts to strengthen ties among the participating countries. German and Italian defense officials and the European contractors involved in the MEADS program said that, unless they can assess the U.S. technology that U.S. contractors are using, they cannot be sure that the technology is the best or the cheapest available. The European contractors also said that, if this technology must be improved or adapted for MEADS use, they are asked to accept the U.S. estimate of the cost to perform these tasks. The reluctance to share technology may also make it difficult to design and build a MEADS system that can exchange engagement data with other battlefield systems. For the international system to be truly interoperable, DOD may have to provide information that it has been reluctant to share. If DOD officials decide that this information is too sensitive to share with MEADS partners, the United States may have to drop out of the program and develop MEADS alone or modify its capability. The international MEADS program has been plagued by two issues that Army security officials believe could have been avoided if security specialists had been involved in negotiation of the international agreement. First, the program does not have a secure communications system. The absence of secure telephone and facsimile lines has hindered the program’s execution. Army and contractor officials said that it takes up to 6 weeks to get classified information to MEADS contractors in Europe. Also, unsecured lines increase the possibility that unauthorized parties can access classified information. Second, the failure of the participants to agree to MEADS-specific security instructions also increases the potential for unauthorized use of MEADS data. Pursuant to 22 U.S.C. 2753(a), no defense article or service may be sold or leased to another country unless the recipient agrees not to transfer title to, or possession of, the goods or services to a third party. However, Germany and the United States disagree on the definition of a third party. One of the German contractors participating in the MEADS program employs a British citizen, and Germany wishes to give this employee access to MEADS classified data. DOD security officials told us that they do not believe that the German government could penalize the British employee if MEADS data was not safeguarded. German and Italian contractor officials said that, with the formation of the European Union, European citizens cross country boundaries just as U.S. citizens cross state borders. The officials said that if a contractor’s ability to hire personnel is limited by the U.S. interpretation of a third party, the MEADS program may lose valuable expertise. If MEADS is designed to meet established requirements, it will give warfighters capabilities that are not present in any existing or planned air and missile defense systems.
MEADS should be able to engage a wide range of targets, be easily transported by small transport aircraft, be capable of moving cross country and over unimproved roads, and be sufficiently lethal to destroy both conventional warheads and weapons of mass destruction. Because of these unique capabilities, warfighting commands place a high priority on the acquisition of MEADS. DOD believes that jointly developing and producing MEADS with U.S. allies will reduce the U.S. investment in the weapon system and strengthen political ties, creating a more effective coalition force and increasing the allies’ ability to defend themselves. However, DOD does not know whether it is willing to share information to create a truly interoperable system, whether an international program can utilize existing U.S. missile system technology to its maximum advantage, how it will fund the U.S. share of the international program, or how it can alter the MEADS system or acquisition strategy to make the program affordable and acceptable to its partners. In addition, potential security risks exist because security specialists were not involved in negotiating the international agreement. An international program affects the political ties between the United States and its allies, and its outcome affects DOD’s ability to negotiate future collaborative efforts. Because DOD is considering other cooperative programs, the MEADS experience could provide valuable lessons. These lessons include careful consideration of all available program information before entering into an agreement to jointly develop a weapon system and assurance that funds will be available for program execution. In addition, areas that warrant attention include the (1) technology that is likely to be released into the program, (2) effect that the technology’s release could have on U.S. national security, and (3) impact of a determination to withhold information on both the execution of the program and U.S. allies. We recommend that the Secretary of Defense take steps to ensure that, for future international programs, the approval process includes careful consideration of the availability of long-term program funding and an in-depth assessment of technology transfer issues. In addition, we recommend that the Secretary of Defense include security experts in all phases of the negotiations of international programs. In commenting on a draft of this report, DOD generally concurred with our recommendations (see app. I). DOD said that it would take steps to ensure that (1) the approval process for future international programs includes a careful assessment of long-term funding needs and technology transfer issues and that (2) security personnel are included in negotiations of international agreements. Regarding the MEADS program, DOD stated that all parties to the memorandum of understanding understood that long-term funding would be subject to later determination and availability and that technology transfer issues were considered to the extent possible prior to entering into the agreement. In addition, DOD said that Army security personnel have been included in all MEADS negotiations. We agree that the memorandum of understanding limits the U.S. commitment for the MEADS program to funding the project definition and validation phase of system development. However, the memorandum of intent signed by the three countries clearly stated that the United States, Germany, and Italy intended to continue the program through production.
DOD regulation 5000.1, dated March 1996, states that, once a military component initiates an acquisition program, that component should make the program’s stability a top priority. The regulation further states that to maximize stability, the component should develop realistic long-range investment plans and affordability assessments. However, DOD approved the MEADS program without a full assessment of BMDO’s ability to fund the system’s development beyond project definition and validation. With future funding in doubt, BMDO has spent the last year reviewing program options that could reduce MEADS cost and enhance the organization’s ability to finance further development efforts. In a stable program, this time could have been used to further the program’s primary mission of developing an effective weapon system. DOD further commented that technology transfer issues could not be resolved because of the lack of detailed information on the transfers that would be requested. We believe a more detailed assessment, one that involved key program offices that would be asked to approve the release of information to the MEADS program, was feasible. In March 1995, the Army developed a strawman concept of MEADS’ predecessor, the Corps Surface-to-Air Missile (SAM) system. On the basis of this concept, the Army said it could reduce Corps SAM’s cost by utilizing technology from existing missile programs, such as PAC-3 and THAAD. The Army’s belief that Corps SAM/MEADS would make extensive use of other systems’ technology indicates that it could reasonably be expected to require information about those systems. At the very least, project offices that were expected to provide technology to the MEADS program should have been consulted to determine what type of information the offices would be willing to release to foreign governments. This knowledge would have allowed the United States, during negotiations with its potential partners, to communicate the type of information that could be transferred. On the basis of the memorandum of understanding, which states that successful cooperation depends on full and prompt exchange of information necessary for carrying out the project, European officials said that they believed the United States would freely share relevant technology. DOD stated that security experts should support all phases of the negotiation process, although they may not be able to participate in the formal negotiations. In addition, DOD said that Army security personnel were involved in the creation of the MEADS delegation of disclosure letter and program security instruction. We agree that it may not be possible to include security personnel in the primary negotiations and recognize that the MEADS participants have established a tri-national security working group to address specific security issues. However, Army security personnel said the tri-national group’s primary function, thus far, has been to resolve issues that prevent Germany from signing the MEADS program security instruction. Army, DOD, and BMDO security specialists said that, so far, they have not been asked to support the negotiations for the next phase of MEADS development. In addition, Army security personnel said that they were not involved in the creation of MEADS security documents, such as the program security instruction and the delegation of disclosure letter, until after the memorandum of agreement that initially established the MEADS program was signed. 
To assess MEADS’ contribution to the battlefield and warfighter support for the system, we compared MEADS requirements with those of other systems designed to counter theater ballistic and cruise missile threats. We also reviewed the integrated priorities lists of U.S. Central Command, MacDill Air Force Base, Florida; U.S. European Command, Stuttgart, Germany; and U.S. Forces Korea, Seoul, South Korea. When possible, we obtained the Commander in Chief’s written position on theater missile defense in general and MEADS specifically. We discussed MEADS’ required capabilities with officials at the U.S. Army Air Defense Artillery School, Fort Bliss, Texas; Patriot Project Office, Huntsville, Alabama; and Program Executive Office for Air and Missile Defense, Huntsville, Alabama. In addition, we discussed warfighter support for the acquisition of MEADS with officials of the U.S. Central Command; U.S. European Command; U.S. Forces Korea; and U.S. Pacific Command, Camp H.M. Smith, Hawaii. We reviewed BMDO’s fiscal years 1999-2003 budget plan and other budget documents to determine if the organization had identified funding for MEADS. We also examined BMDO’s acquisition cost estimate to determine the system’s cost, the effect on cost of using existing technology, and the cost of design and development tasks. In addition, we discussed the budget estimate and BMDO’s ability to fund another major acquisition program with officials in BMDO and the Office of the Under Secretary of Defense for Acquisition and Technology, Washington, D.C., and the U.S. MEADS National Product Office, Huntsville, Alabama. To determine the impact of an international program on MEADS development, we examined work-sharing, cost-sharing, system requirements, and technology transfer documents and held discussions with Ministry of Defense officials in Rome, Italy, and Bonn, Germany; Army officials in the U.S. MEADS National Product Office; and officials in the State Department and various DOD offices, Washington, D.C. We also examined documents and met with contractor officials in Bedford, Massachusetts; Orlando, Florida; Rome; and Bonn. In addition, we examined security documents and held discussions with officials of the Office of the Under Secretary of Defense for Policy, Washington, D.C.; Intelligence Office of the Assistant Chief of Staff of the Army, Washington, D.C.; and the Army Aviation and Missile Command Intelligence and Security Directorate, Redstone Arsenal, Alabama. We performed our review between April 1997 and April 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services, the Senate Committee on Appropriations, Subcommittee on Defense, the House Committee on National Security, and the House Committee on Appropriations, Subcommittee on National Security; the Secretaries of Defense and the Army; and the Director of the Ballistic Missile Defense Organization. Copies will also be made available to others on request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are Karen Zuckerstein, Barbara Haynes, and Dayna Foster.
| Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) Medium Extended Air Defense System (MEADS) program, focusing on the: (1) unique capabilities that MEADS will add to U.S. air and missile defense; (2) development cost of MEADS and its affordability within the expected ballistic missile defense budget; and (3) impact that international development will have on MEADS cost and capability. GAO noted that: (1) if the Army is successful in meeting established requirements, MEADS will have capabilities that no other planned theater missile defense system will possess; (2) the system should defeat a wide range of threats arriving from any direction, be transportable within theater by small transport aircraft, be mobile enough to travel cross country or over unimproved roads with the maneuver force, and be sufficiently lethal to negate weapons of mass destruction; (3) acquiring MEADS will affect higher priority missile programs or the infrastructure that supports those programs unless DOD increases the Ballistic Missile Defense Organization's (BMDO) budget allocation; (4) BMDO forecasted in March 1998 that it needed about $1.8 billion for fiscal year (FY) 1999 through FY 2007 to pay its portion of MEADS' estimated $3.6-billion design and development cost; (5) in addition, BMDO will need another $10.1 billion for FY 2005 through FY 2016 to acquire eight battalions of equipment; (6) the European partners are expected to contribute about one-half of the design and development funds; (7) thus, for FY 1999 through FY 2005--the years for which BMDO is now budgeting--the U.S. cost could be reduced to about $1.4 billion; (8) BMDO has no funds budgeted for MEADS after FY 1999 and has been reviewing various program options to find a less expensive acquisition strategy; (9) DOD officials believe that a joint cooperative effort with U.S. allies is the best means of acquiring MEADS because it reduces cost, improves political ties, and builds a more effective coalition force; (10) however, DOD did not fully assess funding and technology transfer issues before initiating the international program and may not be able to achieve these benefits; (11) U.S. and European program participants said that the United States may be viewed as an unreliable partner if it cannot fund its portion of the program, which could threaten its ability to participate in future collaborative efforts; (12) even if the United States remains in the program, it may have difficulty developing a truly interoperable weapon without sharing valuable technology; (13) the international structure may also prevent contractors from pursuing the most cost-effective system solution; (14) contractors are finding it difficult to use existing technology developed for other systems because the process for transferring U.S.
information to foreign countries is slow and the United States is reluctant to transfer some critical technology; and (15) difficulties might have been avoided if security experts had been included in negotiations of the international agreement. |
Pension advances and pension investments are products that, while based on or related to pension benefits, are generally distinct from the pensions themselves. A pension advance is an up-front lump sum provided to a consumer in exchange for a certain number and dollar amount of the consumer’s future pension payments plus various fees. Pension investments, the related product, provide investors a future income stream when they make an up-front lump-sum investment in one or more pensioners’ incomes. Multiple parties can be involved in pension advance transactions, including consumers (pensioners), investors, and pension advance companies. After the pensioner signs the pension advance contract, the pension advance company pays the lump sum to the pensioner, first deducting any applicable life-insurance premiums or other fees. Pension advance companies may also be involved in the related pension investment transaction. These companies can identify financing sources (investors) to provide the lump-sum monies to a specific pensioner or to multiple pensioners. The investor pays the lump-sum amount by depositing the funds into the bank or escrow account that was previously established. The investor receives periodic payments, such as on a monthly basis, over the agreed-upon period either from the pension advance company or through the escrow account. See figure 1 for an illustration of the parties involved in the multistep pension advance processes that we identified as part of our June 2014 report; a simplified sketch of these cash flows also appears below. Various state and federal laws could potentially apply to pension advances, depending on the structure of the product and transaction, among other things. For example, certain provisions that prohibit the assignment of benefits could apply to pension advances, depending on whether these advances involve directly transferring all or part of the pension benefit to a third party. In addition, potentially applicable state laws include each state’s consumer protection laws, such as those governing Unfair and Deceptive Acts and Practices (UDAP), and usury laws that specify the maximum legal interest rate that can be charged on a loan. Depending on the overall structure of the products involved, state securities laws could also apply. Various state and federal agencies have oversight roles and responsibilities related to consumer and investor issues. CFPB, FTC, and SEC may have consumer- and investor-related oversight roles related to pension advance transactions depending on a number of factors, including the structure of the pension advance product and transaction. Many other federal agencies may have oversight roles related to the pension itself, depending on whether the pensioner was a private-sector or federal employee or a military veteran: EBSA, Treasury, and PBGC have oversight over private-sector pensions; OPM has oversight over federal civilian pensions; DOD has oversight over military pensions; and VA has oversight over a needs-based benefit program called a “pension.” States may also oversee and investigate pension advance transactions. As we describe later in this testimony, the state of New York worked with CFPB to file a lawsuit in August 2015 against two of the firms that we referred to CFPB for review and investigative action. In June 2014, we reported on the number and characteristics of entities offering pension advances and the marketing practices that pension advance companies employ.
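To make these flows concrete, the short sketch below models the transaction just described. It is a hypothetical illustration only: the lump sum, fee, premium, payment amount, and term are assumed values for demonstration, not figures drawn from any company or offer that we reviewed.

```python
# Minimal sketch of the multiparty cash flows in a pension advance.
# All terms below are hypothetical assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class PensionAdvanceDeal:
    investor_lump_sum: float   # deposited up front into the escrow account
    company_fee: float         # deducted by the pension advance company
    insurance_premium: float   # life-insurance premium, if required
    monthly_payment: float     # pension payment redirected each month
    term_months: int           # agreed-upon payout period

    def pensioner_receives_up_front(self) -> float:
        """Net lump sum paid to the pensioner after the company's deductions."""
        return self.investor_lump_sum - self.company_fee - self.insurance_premium

    def investor_receives_total(self) -> float:
        """Total routed to the investor over the term, via the company or escrow."""
        return self.monthly_payment * self.term_months

deal = PensionAdvanceDeal(
    investor_lump_sum=25_000, company_fee=4_000,
    insurance_premium=1_000, monthly_payment=640, term_months=96,
)
print(deal.pensioner_receives_up_front())  # $20,000 in hand for the pensioner
print(deal.investor_receives_total())      # $61,440 paid out over 8 years
```

In this hypothetical deal, the pensioner signs away $61,440 in pension payments over 8 years in exchange for $20,000 up front; the effective interest rate implied by that trade is what the comparisons below quantify.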
During our review, we identified at least 38 companies that offered lump-sum advance products in exchange for pension payment streams. Eighteen of the 38 companies we identified were concentrated in one state, and 17 of the 38 also offered lump-sum cash advances against a wide range of other income streams in addition to pensions, including lottery winnings, insurance settlements, and inheritances. Another 17 companies exclusively focused on offering pension advances. We also found that at least 30 out of the 38 companies that we identified had a relationship or affiliation with each other, including working as a subsidiary or broker, or operating as the same entity under more than one name. However, only 9 out of those 30 companies clearly disclosed these relationships to consumers on their websites. While companies having affiliations is not uncommon, the lack of transparency about whom consumers are actually conducting business with can make it difficult for a dissatisfied pensioner to know whom to file a complaint against, or to research the reputability of a company before pursuing the business relationship. See figure 2 for an illustration of some of the relationships between companies that we identified during the June 2014 review. At least 34 out of the 38 pension advance companies that we identified marketed and offered their services to customers nationwide, operating primarily as web-based companies and marketing through websites and other social-media outlets. Twenty-eight of the 38 companies that we identified used marketing materials or sales pitches designed to target consumers in need of cash to address an urgent need, such as paying off credit-card debts, tuition costs, or medical bills, or appealed to consumers’ desire to have quick access to the cash value of the pension that they have earned. Eleven of the 38 companies that we identified used marketing materials or sales pitches designed to target consumers with poor or bad credit. These 11 companies encouraged those with poor credit to apply, stating that poor or bad credit was not a disqualifying factor. We also observed this type of marketing during our undercover investigative phone calls. For example, a representative from one company stated that the company uses a credit report to determine the maximum lump sum that it can provide to the pensioner, and stated that no application would likely be declined. Six pension advance companies provided our undercover investigator with quotes for pension advances with terms that did not compare favorably with other financial products, such as loans and lump-sum payment options provided directly through private-sector pension plans. We compared the 99 offers provided to our undercover investigator by the six pension advance companies in response to phone calls and online quote requests with those of other financial products. Specifically, we compared the terms with (1) relevant state usury rates for loans and (2) lump-sum options offered through defined-benefit pension plans. As discussed below, we found that most of the six pension advance companies’ lump-sum offers (1) had effective interest rates that were significantly higher than equivalent regulated interest rates, and (2) were significantly smaller than the lump-sum amounts that would have to be offered in a private-sector pension plan that provided an equivalent lump-sum option.
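Both comparisons rest on standard time-value-of-money arithmetic. The sketch below shows one way to compute an offer’s effective annual interest rate; it is a simplified illustration under assumed terms, not a reproduction of our actual methodology, and bisection is simply one convenient way to solve for the rate.

```python
# Minimal sketch of the effective-interest-rate calculation. A pension
# advance is economically a loan: the pensioner "borrows" the lump sum
# and "repays" it in monthly pension payments, so the effective rate is
# the internal rate of return that equates the present value of those
# payments with the lump sum. The offer terms below are hypothetical.

def effective_annual_rate(lump_sum: float, payment: float, n_months: int) -> float:
    """Solve for the monthly rate by bisection, then annualize it."""
    def present_value(rate: float) -> float:
        # Present value of an ordinary annuity of n_months level payments.
        return payment * (1 - (1 + rate) ** -n_months) / rate

    lo, hi = 1e-9, 1.0  # bracket the monthly rate between ~0% and 100%
    for _ in range(100):
        mid = (lo + hi) / 2
        if present_value(mid) > lump_sum:
            lo = mid  # PV still too high, so the implied rate must be higher
        else:
            hi = mid
    monthly_rate = (lo + hi) / 2
    return (1 + monthly_rate) ** 12 - 1

# Hypothetical offer: $20,000 up front for $640 per month over 8 years.
rate = effective_annual_rate(20_000, 640, 96)
print(f"Effective annual rate: {rate:.1%}")  # about 43% for these terms
```

A rate computed this way can be set directly against a state usury ceiling. The same present-value function, evaluated at a regulated discount rate, also indicates roughly how an offered lump sum compares with the present value of the payments given up; the actual ERISA minimum lump sum additionally reflects prescribed mortality and interest-rate assumptions that this sketch omits.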
We determined that the effective interest rate for 97 out of 99 offers provided to our undercover investigator by six companies ranged from approximately 27 percent to 46 percent. Most of these interest rates were significantly higher than the legal limits set by some states on interest rates assessed for consumer credit, known as usury rates or usury ceilings. For example, in comparison to the usury rate for California of 12 percent, we determined that the quotes for lump-sum payments that our undercover investigator received from three pension advance companies for a resident of California had effective interest rates ranging from approximately 27 percent to 83 percent. The effective interest rates on some of these offers could be even higher than the rates we calculated to the extent some pension advance companies require the pensioner to purchase life insurance, and “collaterally assign” the life-insurance policy to the company, to protect the company in the event of the pensioner’s death during the term of the contract. For many of the quotes our undercover investigator received, it was unclear whether the pensioner would be responsible for any life-insurance premium payments. See table 1 for additional examples of usury-rate comparisons for states where our fictitious pensioners resided for our case studies. We also compared the pension advance offers that our undercover investigator received to the lump-sum options that some pension plans offer, under which participants can elect a lump sum in lieu of monthly pension payments. The amount of such a lump-sum option in a private-sector plan must comply with the Employee Retirement Income Security Act of 1974 (ERISA) and Internal Revenue Code requirements that regulate the distribution of the present value of an annuity by defining a minimum benefit amount to be paid as a lump sum if the plan offers a lump-sum option and a private-sector pensioner chooses that option. We determined the minimum lump-sum amount under ERISA rules for private defined-benefit plan sponsors. On the basis of our analysis of 99 pension advances offered by six companies, we determined that the vast majority of the offers our undercover investigator received (97 out of 99) were for between approximately 46 and 55 percent of the minimum lump sum that would be required under ERISA regulations. This means that if these transactions were covered under ERISA regulations, the pensioners would receive about double the lump sum that they were offered by pension advance companies. Again, to the extent pension advance companies require the pensioner to pay for life insurance, the terms of the deal would be even more unfavorable than indicated by these lump-sum comparisons. Additional information on the basis for the ERISA calculations is included in our June 2014 report. In January 2015, we reported that pension plan participants potentially face a reduction in retirement income if they accept a lump-sum offer. Since the time of our review, Treasury has announced plans to amend regulations related to the use of lump-sum payments to replace lifetime income received by retirees under defined-benefit pension plans. Specifically, these amendments generally would prohibit plans from replacing a pension currently being paid with a lump-sum payment. As noted above, our June 2014 comparison observed that ERISA-regulated lump-sum payments from pension plan sponsors were considerably higher than the lump-sum amounts offered by pension advance companies.
In the future, pension advance offers may appear more appealing to some consumers who need money immediately and do not otherwise have the option of obtaining an ERISA-regulated lump-sum payment. Our June 2014 report identified questionable elements of pension advances, such as the lack of disclosure and unfavorable agreement terms. Whether certain disclosure laws apply to pension advance products depends partly on whether the product and its terms meet the definition of “credit” as set out in the Truth in Lending Act (TILA); whether pension advances are actually loans that should be subject to relevant TILA provisions is a long-standing, unsettled question. During our June 2014 review, we found that the costs of pension advances were not always clearly disclosed to the consumer, and some companies were inconsistent about whether the product was actually a loan. For example, 31 out of the 38 companies we identified did not disclose to pensioners an effective interest rate or comparable terms on their websites. For loans, under TILA, companies would be required to disclose an effective interest rate for the transaction. We also found that some of the offers provided to our undercover investigator by six pension advance companies were not clearly presented. Specifically, these companies provided a variety of offers based on differing numbers of years for the term as well as differing amounts of the monthly pension to be paid to the company. For example, one company provided a quote including 63 different offers with varying terms and monthly payment amounts to our fictitious federal pensioner. We considered this volume of information overwhelming, particularly because it did not include basic disclosures, such as the effective interest rate or an explanation of the additional costs of life insurance. In addition, the full amount of additional fees, such as life-insurance premiums, was not always transparently disclosed in the written quotes that six pension advance companies provided to our undercover investigator. We also found that some of the 38 companies we reviewed were not consistent in identifying whether pension advances are loans. For example, while nine companies referred to these products as a loan or “pension loan” on their websites, six of these companies stated elsewhere on their websites that these products are not loans. During our review, we found that there was limited federal oversight related to pension advances. Both CFPB and FTC are authorized to protect consumers and to regulate the types of financial and commercial practices that consumers should be protected against, some of which appear relevant to the practices that we describe in our June 2014 report. However, at the time of our 2014 review, neither agency had undertaken any direct oversight or public enforcement actions regarding pension advances. According to CFPB officials, they were concerned about the effect of pension advances on consumers, but stated that they had not taken an official position or issued any regulations regarding pension advance transactions or products, or taken any related enforcement actions. According to FTC officials, the agency had not taken any public law-enforcement action because it had not received many complaints regarding this issue.
As noted in our 2014 report, conducting a review to identify whether some questionable practices—such as the ones highlighted in our report—are unfair or deceptive or are actually loans that should be subject to disclosure rules under TILA, and taking any necessary oversight or enforcement action, could help CFPB and FTC ensure that vulnerable pensioners are not harmed by companies trying to exploit them. Hence, we recommended that CFPB and FTC review pension advance practices and companies, and exercise oversight and enforcement as appropriate. CFPB agreed with this recommendation and took action by investigating pension advance companies with questionable business practices. We also referred the 38 companies that we identified in our review to CFPB for further review and investigative action, if warranted. In August 2015, CFPB filed suit against two of the companies included in our review for a variety of violations, including unfair, deceptive, and abusive acts or practices in violation of the Consumer Financial Protection Act of 2010 and false and misleading advertising of loans. FTC also agreed with our recommendation and, according to FTC officials, has taken actions to review consumer complaints related to pension advances, pension advance advertising, and the pension advance industry overall. In our June 2014 report, we highlighted that consumer financial education can play a key role in helping consumers understand the advantages and disadvantages of financial products, such as pension advances. As we reported, it can be particularly important for older adults to be informed about potentially risky financial products, given that this population can be especially vulnerable to financial exploitation. The federal government plays a wide-ranging role in promoting financial literacy, with a number of agencies providing financial-education initiatives that seek to help consumers understand and choose among financial products and avoid fraudulent and abusive practices. CFPB plays a role in financial education, having been charged by statute to develop and implement initiatives to educate and empower consumers in general, and specific target groups, to make informed financial decisions. At the time of our 2014 review, we found that CFPB and four other agencies had taken some actions to provide consumer education on pension advances. However, several other federal agencies—including some that regularly communicate with pensioners as part of their mission—did not provide information about pension advance products and their associated risks and were not aware of CFPB publications at the time of our review. Also, these agencies reported that they had not identified many related complaints, and some were just learning about pension advance products. We recommended that CFPB coordinate with the federal agencies that regularly communicate with pensioners on the dissemination of existing consumer-education materials on pension advances. CFPB agreed with this recommendation and released a consumer advisory about pension advances in March 2015. In addition, CFPB provided the Financial Literacy and Education Commission with material related to pension advances in April 2015. Similarly, FTC—which educates consumers on consumer products and avoiding scams through multimedia resources—had not previously provided any specific consumer education about pension advances.
However, in response to our review, in 2014, FTC also posted additional consumer-education information about pension advances on its agency website. In conclusion, some older Americans are at greater risk both of financial distress and of financial exploitation, as they typically live on incomes below what they earned during their careers and on assets that took a lifetime to accumulate. Some pension advance companies market their products as a quick and easy financial option that retirees may turn to when in financial distress from unexpected costly emergencies or when in need of immediate cash for other purposes. However, pension advances can come at a price that retirees may not fully understand. As illustrated by examples in my statement and by related consumer complaints and lawsuits, the lack of transparency and disclosure about the terms and conditions of these transactions, and the questionable practices of some pension advance companies, could limit consumers’ ability to make informed decisions, put retirement security at risk, and make it more difficult for consumers to file complaints with federal agencies, if needed. CFPB and FTC have taken actions to implement the recommendations that we made to review pension advance practices and companies, and exercise oversight and enforcement as appropriate, as well as to disseminate consumer-education materials on pension advances. We believe their implementation of these recommendations will help to strengthen federal oversight or enforcement of pension advance products while ensuring that consumer-education materials on pension advances reach their target audiences, especially given that Treasury’s recent announcement restricting permitted benefit increases may make these products more desirable to pensioners. Chairman Collins, Ranking Member McCaskill, and Members of the Committee, this concludes my prepared remarks. I look forward to answering any questions that you may have at this time. For further information on this testimony, please contact Stephen Lord at (202) 512-6722 or lords@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Latesha Love, Assistant Director; Gabrielle Fagan; John Ahern; and Nada Raoof. Also contributing to the report were Julia DiPonio, Charles Ford, Joseph Silvestri, and Frank Todisco. | Recent questions have been raised about companies attempting to take advantage of retirees using pension advances. In June 2014, GAO issued a report on pension advances. The report (1) described the number and characteristics of pension advance companies and marketing practices; (2) evaluated how pension advance terms compare with those of other products; and (3) evaluated the extent to which there is federal oversight. This testimony summarizes GAO's June 2014 report (GAO-14-420) and actions taken by CFPB and FTC in response to GAO's recommendations. In June 2014, GAO identified 38 pension advance companies and related marketing practices.
GAO conducted a detailed nongeneralizable assessment of 19 of these companies. GAO used undercover investigative phone calls to identify additional marketing practices and obtain pension advance offers. This information was compared with the terms of other financial products, such as personal loans. GAO also examined the role of selected federal agencies with oversight of consumer protection and pension issues. In a June 2014 report, GAO identified at least 38 companies that offered individuals lump-sum payments or “advances” in exchange for receiving part or all of their pension payment streams. The 38 companies used multistep pension advance processes that included various other parties. At least 21 of the 38 companies were affiliated with each other in ways that were not apparent to consumers. Some companies targeted financially vulnerable consumers with poor or bad credit nationwide. GAO undercover investigators received offers from 6 out of 19 pension advance companies. These offers did not compare favorably with other financial products or offerings, such as loans and lump-sum options through pension plans. For example, the effective interest rates on pension advances offered to GAO's investigators typically ranged from approximately 27 percent to 46 percent, which were at times close to two to three times higher than the legal limits set by the related states on the interest rates assessed for various types of personal credit. GAO identified questionable elements of pension advance transactions, including lack of disclosure of some rates or fees, and certain unfavorable terms of agreements. GAO recommended that the Bureau of Consumer Financial Protection (CFPB) and Federal Trade Commission (FTC)—the two agencies with oversight responsibility over certain acts and practices that may harm consumers—provide consumer education about these products, and that CFPB take appropriate action regarding identified questionable practices. Since the time of GAO's review, CFPB has investigated pension advance companies that GAO referred to the agency and disseminated additional consumer-education materials on pension advances. Similarly, FTC posted consumer education on pension advances on its website, and FTC officials report that they have reviewed consumer complaints related to pension advances, pension advance advertising, and the pension advance industry overall. CFPB's and FTC's actions are a positive step toward strengthening federal oversight or enforcement of pension advance products. In its June 2014 report, GAO recommended that CFPB and FTC review the pension advance practices identified in that report and exercise oversight or enforcement as appropriate. GAO also recommended that CFPB coordinate with relevant agencies to increase consumer education about pension advances. CFPB and FTC agreed with and have taken actions to address GAO's recommendations. |
The ALJ position was created by the Administrative Procedure Act of 1946 (APA). The APA separated the rulemaking functions from administrative adjudication proceedings in federal agencies, and sought to ensure fairness and due process in both. The APA provides for formal hearings in certain cases where a party has been affected by an agency decision or determination. Typically, ALJs have two primary duties in the administrative adjudication process. The first duty is to preside over the taking of evidence at agency hearings and act as the finder of facts in the proceedings. An ALJ’s other main duty is to act as a decision maker by making or recommending an initial determination about the resolution of the dispute. In these respects, ALJs, who are executive branch employees, function much like trial judges in the judicial branch. In general, ALJs hear cases that fall into four different categories: (1) enforcement cases; (2) entitlement cases; (3) regulatory cases; and (4) contract cases. Depending on the rules relevant to the particular issue in dispute, the hearings can be either adversarial, where the parties or their representatives debate evidence and law before the ALJ, or non-adversarial, where the ALJ investigates the facts and develops the arguments both for and against each party. In fiscal year 2008, the federal government employed 1,436 civilian ALJs at 25 agencies. The ALJ agencies are extremely diverse, ranging from components of cabinet-level agencies, such as the U.S. Coast Guard at the Department of Homeland Security, to independent agencies such as SSA, the National Transportation Safety Board, and the Securities and Exchange Commission. SSA employed the largest number of federal ALJs, with 1,192 (83 percent of the federal ALJ workforce), distantly followed by HHS, which employed 72 ALJs, about 5 percent of the ALJ workforce. Seventeen ALJ agencies each employed 5 or fewer ALJs. The conditions of employment for ALJs are unique among federal employees. In order to ensure ALJs carry out their duties impartially, the APA stipulates that ALJs are to be independent of their employing agencies in matters of appointment, tenure, and compensation. To achieve this objective, the APA assigns responsibilities for the ALJs to three agencies: OPM, the ALJ agency, and the Merit Systems Protection Board (MSPB). The ALJ agencies are responsible for managing the ALJs they hire. MSPB has a role in disciplining ALJs. Under its authority to issue regulations implementing the APA, OPM has divided the responsibilities for hiring, pay, and performance management among OPM, the ALJ agency, and MSPB. Table 1 lists how the major hiring, pay, and individual performance management responsibilities are divided between OPM and the ALJ agency. OPM has a number of responsibilities for ALJs under the statutory framework of the APA. OPM is responsible for administering the exam and creating a register of qualified candidates for ALJ positions.
OPM also has the authority to prescribe regulations regarding (1) various sections of the APA governing ALJs; (2) implementing the section governing the appointment of ALJs; (3) implementing the requirements that ALJs be assigned cases in rotation so far as is practicable and not perform duties inconsistent with their duties and responsibilities as ALJs; (4) implementing the detail provisions of the APA, which allow details of ALJs to agencies with occasional or temporary needs for ALJs as selected by OPM; (5) excluding ALJs from the definition of employee for the purposes of performance appraisals; and (6) implementing the three levels of basic pay for ALJs and allowing OPM to provide for appointment of an ALJ in the lowest level at an advanced rate where OPM deems it appropriate. In the hiring of ALJs, OPM is responsible for examining applicants and certifying qualified candidates, while the ALJ agency is responsible for identifying the number of new ALJs it requires and appointing individual ALJs from OPM’s list of certified candidates. As required by the APA, OPM sets the three levels of pay for ALJs, determines the qualifications required for appointment to each level, assigns each of the agency’s ALJ positions to one of the pay levels, and determines the time-in-service required to advance to a higher pay level. OPM must provide prior approval before an ALJ agency can appoint retired annuitants, pay an ALJ applicant a higher rate of pay due to superior qualifications, promote ALJs to higher pay levels, or execute noncompetitive placements (e.g., transfers), intra-agency details, or temporary assignments. Once the ALJ is employed, OPM and the ALJ agency share responsibility for managing the ALJ’s performance. For example, OPM defines those management practices that ALJ agencies may not perform, such as issuing performance ratings and awards, and the ALJ agency is responsible for day-to-day management. According to its implementing regulation, OPM shares the responsibility with the ALJ agency for ensuring the ALJ’s decisional independence. The APA divides the responsibility for disciplining ALJs between the ALJ agency and the MSPB. The APA permits the agencies to take serious disciplinary action against an ALJ only for good cause as established and determined by the MSPB on the record, after an opportunity for hearing before the board. Policymakers, ALJ agencies, and other stakeholders have been discussing aspects of ALJ management for decades. Over the years, several options have been proposed to change the roles and responsibilities for the administration of the ALJ program. Three of these options are described in more detail later in this report. Over this same timeframe, to help support deliberations on ALJ issues, we have issued more than 10 reports focused either on ALJs at specific agencies or on the federal ALJ program (see list of related GAO products at the end of this report). Most recently, we issued two reports relating to ALJ performance at SSA and the Department of Homeland Security. In 2007, OPM revised its examination of ALJ applicants by, among other things, updating the minimum qualification requirements, developing a set of competencies, assessing applicants against the competencies, and changing the examination scoring method. According to OPM, in fiscal year 2008, SSA hired 185 ALJs and HHS hired 7 ALJs from the register established as a result of OPM’s new ALJ examination.
SSA officials told us that they are very pleased with the quality of ALJs they hired. HHS officials stated that they are satisfied that the process provided them with highly qualified candidates. OPM is responsible for scoring the results of the competitive examination and maintaining a register of qualified candidates in rank order of their final scores. According to OPM officials, after the job announcements in 2007 and 2008, it took about 6 months for OPM to complete the examination process and assign the final ratings to qualified applicants. The pool of potential ALJ applicants appears to be large because in 2007 and 2008 OPM was able to receive the requested number of applications in only a few days. According to OPM officials, when OPM reopened the ALJ register in 2007, they received the desired number of applications within 1 week of posting an ALJ vacancy announcement. In 2008, OPM received its desired number of applicants within 3 days. In November 2009, OPM opened a new vacancy announcement for ALJ vacancies. It received the requested number of applicants within 2 days. Upon request, OPM provides ALJ agencies with a certified list, referred to as a certificate, of the highest scoring candidates from the register who are available to serve at the vacancy locations. If agency officials choose to fill a vacant ALJ position with a new ALJ, then the agency must appoint one of the candidates listed on OPM’s certificate. The interview and selection processes vary across ALJ agencies, but all agencies must comply with federal law and regulations regarding competitive employment. For example, agencies must comply with the veterans’ preference requirement and the “rule of three”—agencies must select from the highest scoring three candidates available to serve in a given location. SSA and HHS officials told us that it took 8 to 15 weeks from the date the agency requested an OPM certificate of candidates until a selected candidate reported to work. Despite their satisfaction with the quality of the ALJ candidates, SSA and HHS officials stated that the ALJ hiring process should have more flexibility in order for them to appoint candidates that best meet their agency-specific needs. According to SSA officials, OPM uses a one-size-fits-all approach in establishing its register of candidates. SSA officials’ reported position was that OPM’s ALJ examination of applicants should also weigh the specialized knowledge and skills needed to adjudicate SSA cases, such as the ability to manage a large docket, because SSA ALJs adjudicate a high volume of cases, and the temperament to work on non-adversarial cases with unrepresented claimants. SSA was also concerned about the process for assessing whether an ALJ candidate on an OPM list of certified candidates was actually suitable for selection and appointment. SSA officials told us that they currently try to assess the specialized abilities and the potential suitability of ALJ candidates through SSA’s ALJ interviewing process and by investigating the candidates’ backgrounds. SSA officials told us that, in their opinion, the process was laborious, and requested that OPM assess the suitability of candidates listed on the certificates provided to agencies. Lastly, SSA raised concerns about the adequacy of the register to meet its hiring needs. Given SSA’s plans to hire more than 226 ALJs during fiscal year 2010, SSA officials reported to us their concern that the register would not provide an adequate number of suitable candidates to consider for selection.
SSA requested that OPM refresh the register with new candidates as soon as possible and that it plan to do so on a regular basis. The Chief ALJ of HHS’s Office of Medicare Hearings and Appeals (OMHA) also noted that OPM’s examination process does not provide HHS with candidates who have specialized knowledge important for adjudicating cases in HHS. He thought, for example, that having 3 years of Medicare experience would be an asset for an incoming OMHA ALJ. He suggested that there should be a more flexible process to enable the agency to select candidates who might be a better fit for the agency’s work. The Chair of HHS’s Departmental Appeals Board did not have specific comments regarding the current hiring process. This board had not had an ALJ vacancy to fill from 2003 through 2008 and thus had not hired an ALJ from the OPM register in 2007 or 2008. The OPM official responsible for the competitive examination process reported that OPM experts concluded that having certain specialization or expertise would not produce a better cadre of ALJs. In OPM’s view, the most important characteristic that ALJs need is the ability to master large volumes of facts rather than specialized knowledge. Consideration of any additional flexibility in ALJ hiring must await the conclusion of pending litigation. With regard to SSA’s interest in assessing the suitability of all ALJ candidates on the register, OPM reported in July 2009 that it was reviewing the documentation SSA provided regarding specific candidates. OPM noted that the suitability review process encompassed both a background investigation and an adjudication, either at the hiring agency or at OPM, depending upon the nature of any issues identified during the investigation. Agencies are required to reimburse OPM for each background investigation it conducts. Although ALJ agencies could request OPM undertake a suitability investigation at any point in the process, selecting officials usually commence the suitability assessment process only when the agency is ready to make a selection because of the expense associated with conducting a proper suitability investigation. OPM indicated that there was no appropriate mechanism whereby OPM could undertake suitability assessments in advance on all the candidates on the ALJ register, and that it has not received an appropriation to conduct investigations at its own expense. Regarding SSA’s request for a routine refreshment of the ALJ register, OPM indicated that it refreshes its register of ALJ candidates by offering its ALJ examination to new applicants and completing its examination of the applicants. As examining ALJ applicants requires significant assistance from retired and sitting ALJs, OPM does not want to overburden these ALJs by offering the examination too frequently. According to OPM, the ALJ register was most recently refreshed in March 2009. The timing for opening the examination is based on several considerations, such as future hiring needs. OPM regularly queries agencies about their projected ALJ hiring needs and uses the agencies’ responses to plan when to readminister the ALJ examination. As of July 2009, OPM officials anticipated that they could issue certificates that would provide an ample number of choices from which to select candidates to meet the agencies’ reported hiring needs. OPM and SSA officials are addressing the issues SSA raised and, where appropriate, are developing new approaches and solutions.
ALJ agencies could face skill and competency gaps unless they and OPM take concerted action to assure that, in the face of significant retirement eligibility, the agencies have developed ALJ hiring and succession plans. As of September 2008, the most current data available, 51 percent of employed ALJs were eligible to retire by the end of 2008. By 2013, 79 percent of ALJs will be eligible for retirement. To put these numbers in perspective, we recently reported that about one-third of the federal workforce on board at the end of fiscal year 2007 will be eligible to retire by 2012. The proportion of ALJs who were eligible to retire was not the same at each of the 25 ALJ agencies (see table 2). As of September 2008, at 9 of the 25 ALJ agencies, all of the ALJs were already eligible to retire, and at 21 of the agencies, half or more of the ALJs were eligible to retire. At 4 of the 25 agencies, less than half of the ALJ workforce was eligible to retire. Administrative law judges are typically older and have served the public longer than other federal employees. For example, as of fiscal year 2008, these ALJs were, on average, about 61 years old and had about 21 years of federal service. In contrast, as of 2005, the average age of the federal workforce governmentwide was about 46, with about 15 years of service. Despite the widespread retirement eligibility of the ALJ workforce, most ALJs do not retire immediately upon becoming eligible to retire. In 2007, about 72 percent of administrative law judges were still in the federal workforce more than 5 years after their eligibility date. Overall, the ALJ program has experienced a low annual retirement rate, ranging from 2 to 5 percent from 2002 through 2006, about the same as the rate for the total federal workforce, which, as noted, is younger and generally has fewer years of service. ALJ retirements could significantly affect agencies’ adjudication capacities in two ways. First, retirements could significantly affect those agencies employing a small ALJ workforce. For the 15 agencies employing fewer than 5 ALJs, one retirement represents a loss of 25 percent or more of their ALJ capacity, at least temporarily. Second, ALJ retirements could have a more pronounced effect at those agencies facing increasing case workloads because the agency would be losing experienced ALJs at a time when demand for their services is increasing. For example, in 2008, SSA hearing offices received nearly 590,000 claims, an increase of about 6 percent from 2006. In March 2009, the SSA Commissioner projected that, due to the economic downturn, SSA would receive approximately 50,000 more hearing requests in fiscal year 2009 than in fiscal year 2008. HHS’s Office of Medicare Hearings and Appeals has also experienced an increasing workload in recent years. In January 2009, the HHS Inspector General reported that, from July 2006 to May 2008, the office’s caseload increased 37 percent to over 28,000 cases, while the number of cases with the 90-day decision requirement more than tripled, from 6,079 to 20,720 cases. Although it appears there are abundant candidates to fill vacant positions, we have reported that retiring employees can leave gaps in institutional knowledge and technical skills. These gaps can arise because, among other reasons, it can take several months for new hires to become fully productive.
For example, at SSA, it takes 1 to 2 months to train a new ALJ, plus an additional 9 months of on-the-job experience, before SSA considers a new ALJ to be fully productive. While actual ALJ retirements lag eligibility by several years, the agencies cannot rely on either the low ALJ retirement rate or the lag between eligibility and retirement to remain constant. According to OPM, although demographic factors such as age and years of service can help predict time of retirement, other factors, for which data are not available, are likely to have a much larger impact on retirement decisions. Such factors include family situations, illness, caretaker status, children in college and the cost of their tuition, and others. The lack of data for some of these factors may limit the accuracy of retirement forecasts. OPM is the lead agency in guiding federal human capital management at executive branch agencies. To assess federal agencies’ human capital management, OPM established the Human Capital Assessment and Accountability Framework (HCAAF). One of the assessment standards relates to ensuring agencies have the talented staff that their mission requires. To meet this standard, OPM requires agencies to make meaningful progress toward closing skills, knowledge, and competency gaps in all occupations used in the agency. Furthermore, the standard particularly requires agencies to close skills, knowledge, and competency gaps in mission-critical occupations. For example, in its Fiscal Year 2009-2011 Strategic Human Capital Plan, SSA identified ALJs as a mission-critical occupation and developed a set of ALJ-specific competencies to guide its ALJ recruitment, retention, and workforce development initiatives. Despite the significant proportion of ALJs who were eligible to retire between 2008 and 2013, OPM officials told us that, as of October 2009, they had no record or knowledge of any federal agency designation of ALJ skill gaps or competency issues. Performance management systems can be powerful tools in helping an agency achieve its mission and ensuring employees are working toward common ends. Performance management systems should help employees understand their responsibilities and how their day-to-day work contributes to meeting their agency’s strategic goals, as well as provide a mechanism for giving employees candid, specific feedback on how well they are meeting their performance expectations. According to OPM’s performance management guidance, employee performance management in the federal sector generally includes planning work and setting expectations, continually monitoring performance, developing the capacity to perform, periodically rating performance in a summary fashion, and rewarding good performance. However, in order to ensure that an ALJ is not unduly influenced by his or her employing agency, renders impartial decisions, and appears impartial, the APA and OPM regulations do not permit the employing agency to rate an ALJ’s performance or to tie an ALJ’s compensation to that performance. Nevertheless, SSA and HHS managers reported that they employed a variety of practices other than ratings to directly and indirectly manage ALJ performance. An example of the variety in management practices is observed at HHS. There, the Chief ALJ of the Office of Medicare Hearings and Appeals (OMHA), a large hearing office, assigned more staff management responsibilities to his ALJs than the Chair of the Departmental Appeals Board (DAB), a smaller hearing office, assigned to her ALJs.
At HHS’s OMHA, which employed 65 ALJs at the end of fiscal year 2008, ALJs directly supervised their legal teams, each consisting of an attorney, a paralegal specialist, and a legal assistant. In contrast, at HHS’s DAB, which employed 6 ALJs at the end of fiscal year 2008, the ALJs did not supervise support staff. Agency managers and ALJs described the ALJs’ performance as significantly influenced by hearing office performance, although the degree of dependence varied by ALJ agency. Within this context, agency managers reported using a wide variety of practices either to directly influence ALJ performance or to indirectly influence it by addressing hearing office performance. The practices focused on such areas as hearing office management and staffing, case management, quantity and quality of adjudications, tools to expedite adjudication, workplace privileges, and progressive discipline. We did not assess the extent to which various practices were used at SSA and HHS, nor their effectiveness or appropriateness. The direct practices reported are common to managing the performance of all federal employees. For example, SSA and HHS ALJ managers reported providing informal feedback and coaching. The indirect practices reported addressed aspects of the hearing process that were not directly under the control of the ALJ. For example, one indirect approach was to improve the efficiency of case processing by using electronic document processing, standardized procedures, and tele- and videoconferencing. ALJ agency managers and officials from ALJ-related associations expressed differing views regarding current performance management practices. Managers at HHS’s OMHA and DAB thought that statutory and regulatory deadlines were helpful in managing ALJ productivity. The Chief ALJ for OMHA thought that the office’s most significant performance management problem was having enough resources to meet the demands of its work. He felt there were sufficient safeguards in place to effectively manage the performance of his supervisory ALJs, while avoiding interference in the ALJs’ decision making. The Chair of HHS’s Departmental Appeals Board found she could effectively manage the ALJs’ performance by engaging them in improving the hearing process. Yet while each thought that a performance rating or award, if available, could be a useful management tool in certain situations, both reported that they were able to manage effectively without such tools. AALJ and FALJC did not raise concerns about specific ALJ management practices at either HHS office. At SSA, however, ALJ performance management was of much greater concern among ALJ stakeholders, especially pertaining to ALJ productivity. In 2007, in order to help SSA reduce its disability hearing backlog, the Chief ALJ asked the ALJs to manage their dockets in such a way that they would be able to issue 500-700 legally sufficient decisions each year. As of July 2009, SSA reported that the request had been an effective tool, among several others, in helping to raise ALJ productivity. Officials from the AALJ and FALJC questioned the use of a productivity goal as a major tool to manage ALJ performance for several reasons, including their view that SSA had not conducted a systematic study to validate the appropriateness of the numerical range of cases in the goal. According to AALJ, FALJC, and ABA officials, SSA’s emphasis on productivity is detrimental to maintaining or improving other important dimensions of ALJ performance, such as the quality of ALJ decision making.
In addition, AALJ and the Social Security Advisory Board raised concerns that the agency’s emphasis on ALJ productivity may result in unintended consequences. For example, both noted an increase in the number of favorable decisions. The Advisory Board found that as the number of decisions increases, the percentage of favorable decisions tends to increase. They expressed concern because rendering a decision favorable to a party appealing an agency determination requires less ALJ time than rendering an unfavorable decision; SSA’s emphasis on ALJ productivity may therefore lead to more favorable decisions and result in increasing long-term costs to the federal government. The Social Security Advisory Board suggested SSA monitor the correlation between the number of decisions and the number of favorable decisions. In contrast, SSA reported in December 2009 that the rate of favorable decisions (allowance rate) had not changed significantly from fiscal year 2001 through the first quarter of fiscal year 2010. We have reported that high-performing organizations both in the United States and abroad have applied, among other strategies, a set of competencies in their employee performance management to provide a fuller picture of performance. Importantly, we found that systematically applying competencies to guide employee performance management had several advantages beyond using competencies to rate or reward individual performance. These advantages include helping managers to structure their performance discussions, enhancing consistency in performance, and ensuring an objective, balanced review of all the areas significant to the performance of the individual. Lastly, we have reported that high-performing organizations that actively involve employees and stakeholders in developing their performance management systems, and that provide ongoing training on those systems, help increase their employees’ understanding and ownership of organizational goals and objectives. OPM and SSA have developed competencies to support other aspects of ALJ employment. As noted earlier, OPM uses a set of competencies in its examination of ALJ applicants, while SSA uses a set of ALJ competencies to assist in its workforce planning. However, OPM has not established performance competencies to guide ALJ agencies in their day-to-day management of ALJs. As noted earlier, APA and OPM regulations prohibit ALJ agencies from issuing performance ratings and awards to ALJs. Yet, recently, the ALJ associations urged OPM to implement a particular set of performance standards. In particular, in 2006, the presidents of several ALJ-related associations, including AALJ and FALJC, urged OPM to support codifying into law or regulation ABA’s Model Code of Judicial Conduct as a standard for satisfactory ALJ conduct and performance to which ALJs must adhere. That same year, the ABA stated that it believed ALJs should be subject to, and accountable under, appropriate ethical standards adapted from its Model Code of Judicial Conduct. We did not assess the appropriateness or relative strengths of these different sets of competencies or standards. The use of competencies might also help OPM and the ALJ agencies to ensure the ALJs’ decisional independence, a responsibility unique to ALJ management that OPM and the ALJ agency share.
Even though the competencies may not be used to influence compensation, a set of validated competencies would help managers and ALJs to define the skills and supporting behaviors that ALJs need to effectively contribute to organizational results, and would thereby provide a shared framework for discussing employee performance and management practices. Moreover, a set of validated competencies would also help ensure objective and balanced discussions between managers and ALJs regarding performance, and enhance the consistency of ALJ performance. Furthermore, OPM has expertise in providing performance management consulting to federal agencies. Without the systematic application of standards or competencies and other safeguards to employee performance management, contention over managing performance, such as that at SSA, can arise and persist. For example, we have previously reported on the use of performance standards related to the quality and quantity of ALJ decisions to evaluate ALJ performance, first recommending their use in 1978. In 1990, we noted that the lack of a study to support SSA’s use of an ALJ performance goal (case dispositions per month) led to long-standing conflict between SSA and its ALJs. In setting its ALJ productivity expectation in October 2007, SSA officials indicated that they relied on recent historical ALJ productivity data, rather than conducting a systematic study. Officials from AALJ reported to us that SSA did not consult with them prior to issuing its ALJ productivity goal in October 2007. As noted earlier, the conflict between SSA and its ALJs over SSA’s use of an ALJ productivity goal continues into its third decade. Over the last 25 years, several statutory options have been proposed to change the employment and management of ALJs. The options have addressed, to varying degrees, several key issues, such as which federal agency manages the ALJ program, which agency employs ALJs, whether ALJs receive a performance appraisal, the purpose of the appraisal, and so forth. In this section, we summarize three statutory options that have been proposed, without assessing the strengths and weaknesses of each proposal. We selected these three proposals because, collectively, they contain the major design features of other, more narrowly focused options. The ALJ Corps option was proposed repeatedly in Congress between 1983 and 1995. The 1995 version of the legislation was intended to ensure the impartial resolution of cases by amending the APA to establish an independent corps of ALJs within the executive branch of government. The corps would organize ALJs into divisions of practice areas, each led by a supervisory division chief ALJ who would serve as a liaison between the division and the agency that required ALJ services. The head of the ALJ Corps, the Chief ALJ, would be a presidential appointee with Senate confirmation. The Chief ALJ and the division chief ALJs would serve on the Corps Council. This body and a Complaint Resolutions Board would review complaints against ALJs. The council would have the authority to take disciplinary action against ALJs if MSPB determined there was “good cause.” The legislation did not provide additional details regarding ALJ performance management. A major difference between the ALJ Corps option and the current system is that ALJs would no longer be employed by the agency whose cases they are hearing. Instead, they would be employed by the corps.
The Corps Council and the division chief ALJs would assign ALJs to the agencies, manage their workload, and establish a code of conduct and rules of judicial practice. OPM’s role would be limited to selecting candidates from among job applicants and maintaining the register of qualified candidates. This legislation passed the Senate in 1993, but was not considered for a vote by the House of Representatives. The ALJ Conference option was proposed in the House of Representatives in May 1998 and September 2000. The 2000 version of the legislative proposal would have amended the APA to create the ALJ Conference of the United States to, among other objectives, “promote efficiency, productivity, and the improvement of administrative functions, to enhance public service and public trust in the administrative resolution of disputes.” The conference would be led by a Chief ALJ who would be a presidential appointee with Senate confirmation and who could serve a maximum of two 5-year terms. Unlike the ALJ Corps option, this option proposed to eliminate OPM’s ALJ program responsibilities. This proposed legislation was not considered for a vote by the House or the Senate. The major difference between the ALJ Conference option and the current system is that all of OPM’s current program responsibilities, such as the applicant examination and maintaining a register of qualified candidates, would be transferred to the ALJ Conference. The legislation would also allow the Chief ALJ to adopt and issue rules of judicial conduct for ALJs as long as those rules were consistent with the ABA’s Model Code of Judicial Conduct for ALJs. The rules of conduct would provide for a voluntary alternative dispute resolution process conducted at the request of the ALJ. The legislation did not provide additional details regarding managing ALJ performance. The latest proposed option came from the Social Security Advisory Board in 2006. The board’s option suggested making statutory changes to allow for case processing guidelines and rating of ALJs. The intended purpose of the board’s suggestions was to increase accountability in the hearing process and, according to board officials, provide useful information to ALJs and management. To protect against any interference with their decisional independence, this option would have the agency establish a system to investigate allegations from ALJs of such interference and to take appropriate action. OPM would have oversight responsibility for this activity and could review the agency’s response to allegations and recommend further action. ALJs would also continue to have the other protections for decisional independence that are provided by statute: their pay would be set in accordance with OPM guidelines, and the agency must provide an ALJ an opportunity for a hearing before the Merit Systems Protection Board and establish good cause before taking any adverse action against the ALJ. The major difference between the Advisory Board’s option and the current system is that the board’s option would allow the ALJ agency, through the agency’s Chief ALJ, to conduct performance appraisals for ALJs. These reviews would consider ALJ performance relative to such criteria as case processing guidelines, judicial comportment and demeanor, and adherence to law, regulation, and binding agency policy. The guidelines would be set in collaboration with the ALJs’ union, agency members, and others.
The reviews would not include a numerical rating or ranking or determine pay, but would provide feedback on performance to assist ALJs in improving themselves and their general discipline. According to Advisory Board officials, the board's recommendation would not affect ALJs' pay. To date, these three proposed options have not progressed to consideration by both houses of Congress.

Officials from SSA, the largest ALJ employer, told us they were satisfied with the quality of their 2008 ALJ candidates, as did officials from HHS's Office of Medicare Hearings and Appeals, the next largest employer of ALJs. However, these officials told us that, in their opinion, there should be more flexibility in the ALJ hiring process in order to better meet their needs. OPM is responsible for the examination of ALJ applicants and the certification of qualified ALJ candidates, the first phase in the ALJ hiring process. ALJ agencies must select their new ALJs from an OPM certificate of qualified candidates. Beyond these two largest ALJ employers, which were the focus of our work, OPM could benefit from collecting the views of ALJ agencies that employ smaller numbers of ALJs about the new hiring process and the potential need for additional flexibilities. As the federal agency authorized to administer the governmentwide ALJ program, including prescribing hiring regulations, OPM could help ALJ agencies develop strategies to address any concerns, either within the existing hiring process or by revising the process.

To ensure federal agencies have talented staff, OPM requires agencies to make meaningful progress toward closing skills, knowledge, and competency gaps/deficiencies in all occupations used in the agency. A review of ALJs' retirement eligibility raises concerns about potential vulnerabilities in the future ALJ workforce. Given the high percentage of retirement-eligible ALJs across the federal government, the ALJ workforce is vulnerable to knowledge and skill gaps. Yet despite this vulnerability and OPM's human capital management standard, OPM officials reported they had no record or knowledge of any federal agency designation of ALJ skill gaps or competency issues. OPM is well-positioned, through its role as the ALJ program manager and its annual review of federal agencies' human capital accountability plans, to assure that ALJ agencies appropriately identify and plan for future ALJ-related skill and competency gaps. The identification of such gaps will enable OPM to provide ALJ agencies with necessary guidance, tools, and technical assistance to address agency ALJ workforce gaps. In addition, OPM can take a comprehensive view of the risks that retirements pose to the capacity of the ALJ workforce, and lead programwide initiatives, if necessary, to identify, minimize, and mitigate potential skill gaps.

Given the many practices reportedly used to manage ALJ performance, the concerns raised by the ALJ-related associations regarding SSA's emphasis on ALJ productivity, and ALJ agencies' need to balance meeting organizational goals with ensuring ALJs' decisional independence, OPM should review the state of ALJ performance management across all ALJ agencies. OPM is well-positioned to lead in reviewing the agencies' ALJ-related management practices because it is the only federal agency with the statutory authority to investigate the entire ALJ program and, by regulation, defines those management practices that ALJ agencies may not perform.
Moreover, OPM and the ALJ agencies share responsibility for managing ALJs' performance. Such a review could (1) identify the practices currently used to manage ALJ performance; (2) collect the views of ALJ managers and ALJs regarding effective ALJ performance management; (3) determine if the ALJ performance concerns raised at SSA are shared by ALJ managers across all ALJ agencies, or if such concerns are limited to a few ALJ agencies; and (4) ensure current practices do not infringe on ALJ decisional independence. If OPM and/or the ALJ agencies determine that current ALJ performance management needs programwide or agency-level improvement, these agencies could develop agreed-upon competencies, using existing agency and professional competencies as starting points. While the agreed-upon competencies could not be used to influence ALJ compensation, they could help improve ALJ performance management by defining the skills and supporting behaviors that ALJs need to effectively contribute to organizational results, ensuring objective and balanced discussions between managers and ALJs regarding performance, and enhancing consistency of ALJ performance.

Given OPM's statutory authority for administering the ALJ program, we recommend the Director of OPM take the following five actions related to hiring and managing the performance of ALJs in order to (1) identify opportunities for continuous improvement of the ALJ hiring process, (2) identify and address potential competency gaps, and (3) identify opportunities for improved performance management practices while maintaining ALJs' decisional independence:

After current hiring-related litigation is resolved, solicit ALJ agencies' feedback on the new examination process and determine whether additional agency flexibilities are needed in the ALJ hiring process.

Assure ALJ agencies have identified the extent to which their ALJ workforce is vulnerable to knowledge and skill gaps and have addressed these gaps in their annual human capital plans, if appropriate. OPM should assist agencies by providing guidance, tools, and technical assistance to enable agencies to identify and address any skill or competency gaps in their ALJ workforces.

Moreover, consistent with the need for ALJ decisional independence, lead a programwide review with ALJ stakeholders of ALJ performance management options. This review should:

Determine the degree to which current practices are meeting the goals of the ALJ agencies and ensuring ALJs' decisional independence.

Consider the use of competencies in ALJ performance management while not influencing ALJ compensation.

Consider the development and distribution of programwide guidance for ALJ performance management, and the involvement of ALJs and stakeholders in the development of such guidance, in order to gain employee and management ownership of performance management systems.

We provided a draft of this report to the Secretary of HHS, the Commissioner of SSA, and the Director of OPM for review and comment. The Acting Assistant Secretary for Legislation of HHS and the Commissioner of SSA provided technical comments, which we incorporated as appropriate. The Director of OPM responded with written comments, which we have reprinted in appendix II. Consistent with our protocols, we provided a summary of the performance management section of the draft report to officials from AALJ (the ALJ union), ABA, and FALJC for their comments. They also offered technical comments, which we incorporated as appropriate.
Collectively, they thought the report's discussion of performance management was helpful and appreciated the effort made to ensure their views were presented accurately. Additionally, SSAB provided technical comments on our presentation of its results and ALJ option from its 2006 report, which we incorporated as appropriate.

OPM said it agreed with our recommendation that OPM consult with agencies prior to designing the next examination and was already planning to do so. Additionally, OPM expressed concern about the report's focus on "performance management," a term OPM does not normally apply to ALJs. In OPM's view, the term performance management, as defined in its regulations, is the effective use of performance appraisals, which are not used with ALJs. In OPM's opinion, "tying the discussion in the report to a concept applied to employees who may be evaluated and provided with awards is somewhat confusing and could lead to unintended consequences in terms of agencies' interactions with their ALJs." OPM also commented that the report appeared to assume that OPM's role in ALJ management was "well established and not subject to dispute." Although OPM indicated that it was open to considering our "suggestions for the greater involvement of OPM in the management of ALJs," OPM thought we should "tie that discussion to the statutory framework that actually applies to ALJs and indicate how it believes OPM could become more involved, within that framework."

Our report notes that, as described by OPM guidance, performance management in the federal sector includes planning work and setting expectations, continually monitoring performance, developing the capacity to perform, periodically rating performance in a summary fashion, and rewarding good performance. Our report recognizes that, in accordance with the APA and OPM regulations, ALJs are excluded from performance appraisals and awards. Nevertheless, other performance management practices are available to agencies to manage ALJ performance, and agency managers reported to us that they are using such practices. As stated in our report, OPM could help employing agencies use these other practices to improve ALJ performance management, while helping both OPM and the ALJ agencies ensure ALJs' decisional independence. Additionally, statutory provisions authorize OPM to prescribe regulations governing nearly all aspects of ALJ employment (the exception being that the Merit Systems Protection Board is responsible for discipline or removal of ALJs). Further, OPM is the only agency in the federal government with authority to issue regulations on ALJ employment. OPM's authority to prescribe regulations includes the authority to "implement, interpret or prescribe law or policy…" For these reasons, we believe OPM has the authority to take a more active role in the management of the ALJ program, and that it should do so. OPM also provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the congressional committees with jurisdiction over HHS and its activities; the Secretary of HHS; and the Director of OMB. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2757. Key contributors to this report are listed in appendix VI. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
Based on a mandate accompanying the Consolidated Appropriations Act of 2008, this report examines: (1) the process for hiring administrative law judges (ALJ) and selected agencies' observations on the process; (2) the level of retirement and retirement eligibility for ALJs; (3) the reported ALJ management practices at the Social Security Administration (SSA) and the Department of Health and Human Services (HHS), and stakeholders' views of these practices; and (4) the options that have been proposed to improve the management of the ALJ workforce, either within existing authorities or requiring new authorities. We focused our data collection on the ALJ hiring process since 2007 and on reported performance management practices as described by agency and association officials. As noted earlier, due to ongoing litigation, we did not collect detailed information regarding the Office of Personnel Management's (OPM) use of its ALJ register in the 2007 and 2008 hiring.

To address these four objectives, we reviewed related legal documentation and program documentation gathered from OPM, the two federal agencies employing about 88 percent of ALJs—SSA and HHS—and three major professional associations for ALJs: the Association of ALJs, the Federal ALJ Conference, and the American Bar Association's National Conference of the Administrative Law Judiciary. We also conducted interviews with key officials from each of these organizations to gather information regarding each objective. The following provides a brief description of each of the ALJ-related organizations providing hiring and performance information for this report.

OPM has managed the ALJ program since the agency was created in 1979. The ALJ program was managed through an Office of Administrative Law Judges until OPM disbanded the office in 2003. OPM divided the program responsibilities among OPM units, as follows:

The General Counsel serves as the initial contact for ALJ issues.

The Strategic Human Resources Policy (SHRP) Division has the lead for ALJ policy and regulations.

The Human Resources Products & Services (HRPS) Division generates the ALJ examination, ranking, and register.

The Human Capital Leadership & Merit System Accountability (HCLMSA) Division handles the ongoing interaction with agencies and identifies their needs; it has the day-to-day agency liaison responsibility.

SSA administers two disability programs—Disability Insurance and Supplemental Security Income—that provide cash benefits to claimants who believe that they can no longer work because of severe physical or mental impairments. SSA's Office of the Deputy Commissioner for Disability Adjudication and Review (ODAR) oversees the adjudication of those cases in which disability claimants appeal the agency's determinations of their benefits. ODAR consists of the Office of the Chief Administrative Law Judge, the principal consultant and advisor to the Deputy Commissioner on all matters concerning the ALJ hearing function; the Office of Appellate Operations (Appeals Council), the final level of administrative review under the Administrative Procedure Act for disability claims; and the Office of Management, which provides administrative support for all related management and office automation activities. According to OPM data, as of September 2008, SSA employed 1,192 ALJs. These ALJs were supervised by the Chief Administrative Law Judge, the Deputy Chief ALJ, Regional Chief ALJs, and Hearing Office Chief ALJs.
HHS’s Office of Medicare Hearings and Appeals (OMHA) was created in July 2005 when the responsibility for conducting appeals of Medicare benefit determinations transferred from SSA to HHS, Office of the Secretary. The Office of Medicare Hearings and Appeals (OMHA) is under the direction of the Chief Administrative Law Judge, who reports directly to Secretary of HHS. The Office’s ALJs issue decisions to appeals of agency determinations regarding Medicare claims brought under Parts A, B, C, and D of Title XVIII of the Social Security Act. Claimants who are dissatisfied with an ALJ decision can seek a further review and decision from the Medicare Appeals Council. In January 2009, the office employed 65 ALJs, including the Chief ALJ, 4 managing ALJs, and 60 supervisory ALJs. HHS’s Departmental Appeals Board (DAB), a component within the Office of the Secretary, is responsible for (1) reviewing certain disputes between grantees and constituent agencies of the department; (2) adjudicating certain civil remedies cases pursuant to delegations from the Secretary; and (3) performing other review, adjudication, and mediation services as assigned. The board’s ALJs hear and decide civil remedies cases and other cases as assigned. These cases include (1) sanctions against persons and entities associated with participation as a provider in federally funded health care programs or as an employee, contractor, or other fiscal relationship with the department; (2) contract abuses; and (3) termination of federal funding for alleged civil rights violations. In January 2009, the Board Chair reported she supervised five ALJs and one retired ALJ annuitant. The Association of Administrative Law Judges (AALJ) is a professional union representing the ALJs employed at SSA and HHS’s DAB. The AALJ has a collective bargaining agreement (CBA) with SSA which is in effect until 2010, and has had an interim CBA with HHS’ DAB since 2003. As of March 2009, according to the union president, the AALJ represented about 1,100 of the approximately 1,400 federal administrative law judges, or over 78 percent of ALJs in the federal workforce. The Federal Administrative Law Judges Conference (FALJC) is a voluntary professional association of federal administrative law judges who perform judicial functions within the executive branch of the government. FALJC was organized over 60 years ago. In 2008, FALJC reported that its membership includes judges from virtually every federal agency that employs administrative law judges. As of March 2009, FALJC officials reported there were 174 members (136 are active ALJs and 38 are retired ALJs) that included management-level ALJs and line ALJs. The American Bar Association’s (ABA) Judicial Division represents judges who are members of ABA. As of March 2009, according to association officials, the Judicial Division had over 3,200 members. The division is comprised of six conferences: five judicial conferences and one lawyer conference. Federal ALJs formed what is now the National Conference of the Administrative Law Judiciary (NCALJ), as one of the Judicial Division’s six conferences, in 1971. According to association officials, both federal and state ALJs can be members of the NCALJ, and, as of March 2009, the NCALJ had 233 members. According to an ABA official, there may be federal ALJs who are ABA members who are not also members of the Judicial Division or NCALJ since membership in these ABA suborganizations is voluntary. 
To describe demographic data relating to the retirement eligibility of ALJs, we analyzed employment data from OPM's human resource reporting system, the Central Personnel Data File (CPDF), for the federal agencies employing ALJs. We used the pay plan code to identify and analyze ALJ data in OPM's CPDF. We analyzed data on age, years in federal service, retirement eligibility, projected retirement rates, new hires, and similar characteristics of the ALJs. For most of the groupings, we examined the data from 1991 through 2008 and projected retirement eligibility through 2013.

To determine the percentage of ALJs eligible to retire, we examined the fiscal year in which an employed ALJ is first eligible for voluntary (optional) retirement with an unreduced annuity. For example, employees under the Federal Employees Retirement System (FERS) are eligible to retire with reduced annuities at any age from 55 to 62 with at least 10 years of service. The penalty for FERS employees retiring from age 55 to 61 with less than 20 years of service is that their annuity is reduced 5 percent for every year they are under age 62. We considered this penalty a disincentive and therefore did not include retirement on a reduced annuity in the definition of "eligible to retire." Including FERS employees who were eligible to retire only on a reduced annuity in that definition would inflate the percentage of ALJs eligible to retire. Thus, eligible to retire is defined as "eligible to retire with an unreduced annuity." Moreover, we did not include temporary and term employees when calculating retirement eligibility because, again, doing so would inflate the percentage of employees that are eligible to retire in any given year. We defined age as of the time the retirement action data were recorded and measured years of federal service from the service computation date, as of September 30 of each CPDF file year. New hire data sets were created by comparing the employee identification numbers of the ALJs in the current year to those of the previous year. Any ALJ new to the data set in an analysis year was categorized as a new hire.

For the purposes of our report, we did not independently verify these data for the years we reviewed; however, in a 1998 report, we found that governmentwide data from CPDF for key variables in this study (agency, age, retirement plan, the pay plan used to identify ALJs, and the type of personnel action that identified new hires) were 97 percent accurate or better. Since our 1998 report, we have monitored OPM's reporting requirements and the data checks used to assure that CPDF data are reliable. We also reviewed OPM reports that note exceptions to OPM's reporting requirements. In addition, to assess the reliability of data specifically used in the ALJ analyses, we performed a variety of checks on the CPDF data to ensure they were complete, valid, and consistent with the OPM Guide to Personnel Data Standards. Although there were minor differences between agency-reported numbers of ALJs and CPDF data, these differences would not change the findings of this report. Because OPM's CPDF data quality processes have not substantially changed since the cited 1998 GAO report, and given our monitoring of CPDF data and the specific checks we performed on the ALJ data prior to our analyses, we conclude that CPDF data for the years covered in this report are sufficiently reliable for our purposes.
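The "eligible to retire with an unreduced annuity" test described above can be expressed directly in code. The short Python sketch below is illustrative only: the FERS unreduced-annuity thresholds it uses (age 62 with 5 years of service, age 60 with 20 years, or the minimum retirement age with 30 years) are the standard statutory rules rather than figures stated in this report, and the record fields, the minimum retirement age of 56, and the function names are our own assumptions.

```python
# Illustrative sketch of the retirement-eligibility test described above.
# The FERS unreduced-annuity thresholds are the standard statutory rules;
# field names, the minimum retirement age, and sample values are assumptions.

def eligible_unreduced_fers(age: int, years_of_service: float,
                            minimum_retirement_age: int = 56) -> bool:
    """True if a FERS employee could retire with an unreduced annuity."""
    return ((age >= 62 and years_of_service >= 5) or
            (age >= 60 and years_of_service >= 20) or
            (age >= minimum_retirement_age and years_of_service >= 30))

def share_eligible(records) -> float:
    """Share of permanent ALJ records eligible to retire with an unreduced
    annuity. Temporary and term employees are excluded, as the methodology
    above requires, so the percentage is not inflated."""
    permanent = [r for r in records if r["tenure"] not in ("temporary", "term")]
    eligible = [r for r in permanent
                if eligible_unreduced_fers(r["age"], r["years_of_service"])]
    return len(eligible) / len(permanent) if permanent else 0.0

# A 61-year-old ALJ with 18 years of service is excluded: retiring before
# age 62 with fewer than 20 years carries the 5-percent-per-year reduction
# discussed above, which the methodology treats as a disincentive.
print(eligible_unreduced_fers(61, 18))  # False
print(eligible_unreduced_fers(62, 18))  # True
```

Because reduced-annuity cases and temporary or term appointments are screened out, the computed share avoids the two sources of inflation the methodology cautions against.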
To identify ALJ performance management practices and stakeholder views of these practices, we interviewed agency and association officials and reviewed prior reports and testimonies from OPM, SSA, SSAB, and HHS. We reviewed previous audits on ALJs conducted by GAO and by HHS's and SSA's Inspectors General. We also reviewed position papers and testimonies from a number of ALJ professional associations: AALJ, FALJC, and ABA. We reviewed these documents, in part, to identify the factors affecting hearing office and ALJ performance. Given the scope of our data collection, it is not clear the extent to which the views offered by the officials from these agencies or ALJ-related associations are shared across all ALJ agencies or ALJs. We did not assess the extent to which various practices were used at SSA and HHS, nor their effectiveness or appropriateness.

Contemporaneously with this study, another GAO team was conducting an analysis of SSA's plan for reducing the hearings-level backlog and preventing its recurrence, titled Summary of Initiatives to Eliminate the SSA Hearings Backlog. This team conducted site visits to the National Hearing Center in Falls Church, Virginia, and to three SSA regional offices—Atlanta, Georgia; Chicago, Illinois; and Seattle, Washington—to identify the factors contributing to, among other things, the agency's hearings backlog. During these site visits, they interviewed a variety of staff, including hearing office directors, ALJs, attorneys, and support staff. They also interviewed officials from three regional ODAR offices, two state Disability Determination Services (DDS) offices, one program service center, one SSA field office, and related professional associations. We collated from these interviews those responses germane to ALJ hiring and performance and added them to the comments obtained directly through this engagement.

To identify the proposed options to improve ALJ performance management, we reviewed options that had been proposed to Congress over the last 30 years. We drew on information collected through interviews and on our review of related reports, legislation, and proposals from OPM, SSA, HHS, the three associations, and the Social Security Advisory Board (SSAB). Given the scope of our data collection, it is not clear if the concerns that prompted the proposals are shared across all ALJ agencies or ALJs. We selected the ALJ Corps, ALJ Conference, and SSAB options because, collectively, they contained the major design features of other more narrowly focused options. We did not assess the relative strengths or weaknesses of these proposed options.

In addition to the contact named above, William Doherty, Assistant Director; Patricia Farrell Donahue, Analyst-in-Charge; Sara Daleski; Sharon Hogan; Sabrina Streagle; Gregory Wilmoth; Melanie Papasian; and William Trancucci made key contributions to this report.

Coast Guard: Administrative Law Judge Program Contains Elements Designed to Foster Judges' Independence and Mariner Protections Assessed Are Being Followed. GAO-09-489. Washington, D.C.: June 12, 2009.

Human Capital: Trends in Executive and Judicial Pay. GAO-06-708. Washington, D.C.: June 21, 2006.

Social Security Administration: Agency Is Positioning Itself to Implement Its New Disability Determination Process, but Key Facets Are Still in Development. GAO-06-779T. Washington, D.C.: June 15, 2006.

Medicare: Incomplete Plan to Transfer Appeals Workload from SSA to HHS Threatens Service to Appellants. GAO-05-45. Washington, D.C.: October 4, 2004.
SSA Disability Decision Making: Additional Steps Needed to Ensure Accuracy and Fairness of Decisions at the Hearings Level. GAO-04-14. Washington, D.C.: November 12, 2003.

Social Security: Many Administrative Law Judges Oppose Productivity Initiatives. GAO/T-HRD-90-39. Washington, D.C.: June 13, 1990.

Social Security: Many Administrative Law Judges Oppose Productivity Initiatives. GAO/HRD-90-15. Washington, D.C.: December 7, 1989.

Additional Management Improvements Are Needed to Speed Case Processing at the Federal Energy Regulatory Commission. EMD-80-54. Washington, D.C.: July 15, 1980.

Management Improvements In the Administrative Law Process: Much Remains to Be Done. FPCD-79-44. Washington, D.C.: May 23, 1979.

Administrative Law Judge Activities and the Hearing Process at the Federal Energy Regulatory Commission. EMD-79-28. Washington, D.C.: February 13, 1979.

Administrative Law Process: Better Management Is Needed. FPCD-78-25. Washington, D.C.: May 15, 1978.

The Administrative Procedure Act established unique conditions for administrative law judges' (ALJ) hiring and employment to protect their decisional independence. However, the potential for a wave of retirements and other events have focused attention on how ALJs are hired and managed. In response to the Consolidated Appropriations Act of 2008, this report examines, among other things, (1) the process for hiring ALJs and selected agencies' observations of the process; (2) ALJs' retirement eligibility and retirement issues; and (3) agency managers' reported ALJ performance management practices and stakeholders' views of these practices. To address these objectives GAO reviewed relevant statutes, regulations, Office of Personnel Management (OPM) retirement-related data, and other program-related documents, and interviewed officials from OPM, ALJ professional associations, and the two largest federal agencies employing ALJs: the Social Security Administration (SSA) and the Department of Health and Human Services (HHS).

SSA and HHS officials responsible for hiring new ALJs reported they were satisfied with the quality of the judges hired from OPM's ALJ register of qualified candidates in 2008. Despite their satisfaction with these ALJ candidates, agency officials raised several issues regarding ALJ hiring and offered suggestions to improve the process, including (1) opening the OPM register to accept new candidates more frequently, (2) giving greater consideration to agency-specific knowledge and experience, and (3) providing additional agency flexibility in meeting the procedural requirements associated with selecting from the three best-qualified candidates and awarding veterans' preference. OPM officials reported they are working to address these issues and develop new approaches, where appropriate.

ALJ agencies could experience skill and competency gaps in the ALJ workforce in the near future. As of September 2008, the most current data available, 51 percent of all ALJs were already eligible to retire. Moreover, by 2013, 78 percent of all ALJs employed as of September 2008 will be eligible to retire, and at 9 of the 25 ALJ agencies, all of the ALJs will be eligible to retire. Retiring employees can leave gaps in institutional knowledge and technical skills due, in part, to the time required for new hires to become fully productive.
To ensure agencies have talented staff to accomplish their missions, OPM requires agencies to make meaningful progress toward closing skills, knowledge, and competency gaps/deficiencies in all occupations in the agency. Despite the significant proportion of ALJs who were eligible to retire from 2008 to 2013, OPM officials reported that, as of October 2009, they had no record of any federal agency designation of ALJ skill gaps or competency issues. OPM, as ALJ program manager and the lead agency in federal human capital management, could use its annual review of federal agencies' human capital accountability plans to assure that ALJ agencies appropriately identify and plan for future ALJ-related skill and competency gaps.

To safeguard the independence of ALJ decisionmaking, ALJ agencies are prohibited from rating ALJs' performance or tying their compensation to it. Nevertheless, SSA and HHS officials reported using numerous other practices to manage ALJ performance. ALJ association officials were concerned that some SSA performance management practices could affect ALJs' decisional independence. The use of competencies in ALJ performance management might help OPM and ALJ agencies define needed ALJ skills and behaviors, ensure objective and balanced performance discussions between managers and ALJs, and enhance consistency in ALJ performance, while not influencing ALJ compensation. Given its role as ALJ program manager and its expertise in performance management, OPM is well-positioned to lead a review of all agencies' ALJ-related management practices.
Within the United States, there are about 295,000 miles of gas transmission pipelines, which are part of larger gas pipeline systems that transport natural gas from producing wells to users. (See fig. 1.) Gas gathering lines collect natural gas from production facilities and transport it to transmission pipelines. In turn, gas transmission pipelines transport gas products to processing plants, and then on to communities and large-volume users, such as power plants. Gas distribution pipelines continue to transport natural gas from transmission pipelines to residential, commercial, and industrial customers.

The Pipeline and Hazardous Materials Safety Administration (PHMSA), within the Department of Transportation (DOT), administers the national regulatory program to ensure the safe transportation of natural gas and hazardous liquids by pipeline. PHMSA carries out its mission through regulation, national consensus standards, research, education, inspections, and enforcement when safety problems are found. The agency employs about 165 staff in its pipeline safety program, about half of whom are pipeline inspectors who inspect gas and hazardous liquid pipelines under integrity management and other, more traditional compliance programs. In general, PHMSA retains full responsibility for inspecting and enforcing regulations on interstate pipelines that cross state boundaries, but it has arrangements with 48 states, the District of Columbia, and Puerto Rico to assist with overseeing intrastate pipelines. PHMSA allows state agencies the flexibility to design their programs to best meet their needs, although it conducts an annual audit of each state's inspection program. States are currently authorized to receive reimbursement of up to 50 percent of the costs of their pipeline safety programs from PHMSA.

Traditionally, PHMSA has performed its oversight role using uniform, minimum safety standards that all pipeline operators must meet. For gas transmission pipeline operators, these standards are based on the "class location" of the pipeline. A pipeline's class location—based on factors such as population within 660 feet of the pipeline—determines the thickness of the pipe required and the pressure at which it can operate. Recognizing that pipeline operators face different risks, depending on such factors as location and the products they carry, PHMSA began exploring the concept of a risk-based approach to pipeline safety in the mid-1990s. The Accountable Pipeline Safety and Partnership Act of 1996 included provisions for DOT to establish a demonstration program to test such a risk-based approach. As a result, PHMSA established the Risk Management Demonstration Program, which went beyond the agency's traditional regulatory approach by allowing individual operators to identify and focus on the risks unique to their pipelines. According to a PHMSA official, the demonstration project identified the need for operators to better understand the condition of their pipelines, including the risks and threats to them. The agency subsequently moved forward with a new regulatory approach—termed integrity management—to supplement the existing uniform, minimum regulations. Integrity management created a systematic process for managing pipeline safety and is designed to provide for continual improvement. PHMSA established integrity management requirements for hazardous liquid pipeline operators with 500 or more miles of pipelines in December 2000 and for operators with fewer than 500 miles in January 2002.
In 2000, PHMSA was also exploring issues related to integrity management for gas transmission pipelines, including collaborating with the pipeline industry to develop consensus standards for gas integrity management, which were subsequently incorporated into the regulations. These consensus standards cover issues such as establishing and conducting integrity management programs and the actions operators must take to assess the extent of corrosion in their pipelines. In 2003, PHMSA issued integrity management regulations for all operators of gas transmission pipelines. As shown in figure 2, under these regulations, operators must identify and assess segments of their pipelines that are located in "high consequence areas," which are highly populated or frequently used areas, such as parks, where pipeline leaks or ruptures could have the greatest impact on public safety. Operators are required to collect and integrate data from their entire pipeline system—such as maps and information on corrosion protection, exposed pipeline, and threats from excavation or other third-party damage—to identify the threats to their high consequence areas. Pipeline threats include corrosion; welding defects and failures; third-party damage (e.g., from excavation equipment); land movement; and incorrect operation. Once operators have identified the threats, they must perform a risk assessment to determine which pipeline segments are most susceptible to those threats. Starting with the pipelines that are most susceptible, operators must then assess the condition of their pipelines—referred to as baseline assessments—on half of their pipeline mileage in high consequence areas by December 2007 and on the remainder by December 2012. Using the results of the assessments, operators must repair or replace any defective sections of pipeline. Operators are also required to perform preventive and mitigative measures, such as installing computerized monitoring and leak detection systems. In addition, operators are required to reassess their pipelines in high consequence areas for corrosion problems at least every 7 years and for all safety threats at least every 10, 15, or 20 years, depending on the condition of the pipelines and the stress under which the pipeline segments are operated. Operators must also document processes to ensure that actions for managing pipeline integrity are applied consistently and that the results are repeatable across the company. For example, operators are required to have written processes for management of change, quality assurance, and communication.

The gas integrity management program is designed to improve pipeline safety by supplementing existing standard safety requirements with risk-based management principles, including performance measures to monitor progress. For the first time, all operators are required to systematically assess the condition of their pipelines in high consequence areas and make identified repairs. As of December 31, 2005, operators reported having assessed about 33 percent of their pipelines in high consequence areas and having completed over 2,000 repairs. In addition, we estimate that up to 68 percent of people living along natural gas transmission pipelines are located in highly populated areas and are expected to receive additional protection as operators continue to assess and repair their pipelines in these areas.
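The assessment deadlines and reassessment intervals described above amount to a simple scheduling rule, illustrated in the Python sketch below. The segment identifiers, mileages, and risk scores are hypothetical, and ordering segments by a single risk score is just one plausible way to operationalize the requirement to assess the most susceptible segments first; the dates and the 7-year corrosion interval come from the requirements as described in this report.

```python
from datetime import date

# Hypothetical high consequence area (HCA) segments: (id, HCA miles, risk score).
segments = [("seg-101", 14.0, 0.88), ("seg-205", 22.5, 0.35), ("seg-310", 6.0, 0.64)]

BASELINE_HALF = date(2007, 12, 31)  # half of HCA mileage assessed by this date
BASELINE_ALL = date(2012, 12, 31)   # remaining HCA mileage assessed by this date
CORROSION_INTERVAL_YEARS = 7        # corrosion reassessment at least this often

def baseline_order(segs):
    """Order segments for baseline assessment, most susceptible first."""
    return sorted(segs, key=lambda s: s[2], reverse=True)

def mileage_targets(segs):
    """HCA mileage that must be baseline-assessed by each deadline."""
    total = sum(miles for _, miles, _ in segs)
    return {BASELINE_HALF: total / 2, BASELINE_ALL: total}

def next_corrosion_reassessment(last_assessed: date) -> date:
    """Latest allowable date of the next corrosion reassessment."""
    return last_assessed.replace(year=last_assessed.year + CORROSION_INTERVAL_YEARS)

print([s[0] for s in baseline_order(segments)])        # ['seg-101', 'seg-310', 'seg-205']
print(mileage_targets(segments))                       # 21.25 miles, then all 42.5 miles
print(next_corrosion_reassessment(date(2005, 6, 30)))  # 2012-06-30
```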
Furthermore, the gas pipeline industry, state pipeline agencies, safety advocate representatives, and operators with whom we spoke generally agree that the program benefits public safety. While early indicators show that integrity management benefits public safety, some operators noted that the program is not without its costs. Operators also expressed uncertainty about the program's documentation requirements. Despite these concerns, operators are making good progress in implementing integrity management, as demonstrated by the performance measures that operators report semiannually to PHMSA. However, these performance measures could be improved to better enable PHMSA to identify the program's impact on public safety.

Prior to the integrity management program, there were, and still are, minimum safety standards for the design, construction, operation, and maintenance of all gas transmission pipelines that provide the public with a basic level of protection from pipeline failures. For example, all operators are required to have a system to protect their pipelines from corrosion. Federal or state inspectors use a "checklist" approach to determine whether operators have such a system and whether it is operating appropriately. However, the minimum safety standards do not account for the differences in the kinds of threats and degrees of risk that pipelines face. In addition, inspections of the operators verify that the standards are being followed but do not evaluate the effectiveness of the protective measures put into place, such as the corrosion protection system, because the standards do not require operators to assess the integrity of their pipelines. Consequently, some pipelines have operated for 40 or more years without being assessed. However, 33 of 51 operators (about 65 percent) told us they had assessed the integrity of some of their pipelines prior to the integrity management regulations.

The gas integrity management program goes beyond existing minimum safety standards by using risk-based management principles to provide an additional level of safety to the public where the impact of pipeline leaks, failures, or incidents could be the greatest. Risk-based management has several key characteristics that help to ensure safety—it (1) uses information to identify and assess risks, (2) prioritizes risks so that resources may be allocated to address higher risks first, (3) promotes the use of regulations, policies, and procedures to provide consistency in decision making, and (4) monitors performance. The gas integrity management program embodies each of these characteristics. It requires operators to integrate information from different sources (both internal and external) to identify the risks specific to their pipelines and then use data from the assessment of their pipelines to make necessary repairs and take preventive measures. To prioritize risks for resource allocation, integrity management focuses on high consequence areas and requires operators to assess the riskiest segments of their pipelines first. Five operators told us that the requirements of integrity management have helped focus resources, and one said it has even helped to justify the need for resources that would otherwise have been difficult to obtain. To provide a level of consistency in how tasks are performed and decisions are made, the integrity management program requires operators to document their policies and procedures.
In addition, PHMSA developed inspection protocols and "frequently asked questions" to help define the agency's expectations for operators and to help ensure consistency in inspections. According to PHMSA, having procedures, roles, and responsibilities clearly defined is crucial for operators to ensure continual and consistent management for safety. Finally, integrity management requires operators to monitor their progress by reassessing their pipelines at specified intervals. Operators must also report to PHMSA semiannually on specific performance measures related to integrity management. These measures include the total mileage of pipelines and the mileage of pipelines assessed in high consequence areas, as well as the number of repairs made and the number of incidents, leaks, and failures identified in these areas.

We estimate that this risk-based approach should offer additional safety benefits for up to 68 percent of the population living near gas transmission pipelines; this estimate corresponds with PHMSA's estimate of two-thirds of the population. Even though the integrity management program applies only to pipelines in high consequence areas, which account for about 7 percent of all transmission pipeline miles, the population living along pipelines tends to be clustered in these areas. Using Census data, we estimated that up to 68 percent of the people who live near (within 660 feet of) natural gas transmission pipelines are located in highly populated areas and thus should be afforded additional protection as a result of integrity management. (See fig. 3.) While operators do not report the location of their high consequence areas, population is a key component in identifying these areas. Using Census data to identify the population living along pipelines, we estimated that about 22,000 miles of transmission pipelines could be considered as being in highly populated areas, which is similar to the 20,294 miles of pipelines reported by operators as being in high consequence areas. Therefore, our estimate of the highly populated areas is a reasonable approximation of the high consequence areas.

Although the integrity management program is still being implemented, a number of representatives from pipeline industry organizations, state pipeline agencies, safety advocate groups, and operators we contacted agree that integrity management benefits public safety because it requires all operators to systematically assess their pipelines to gain a comprehensive knowledge of the risks to their pipeline systems. In addition, operators must repair problems or anomalies identified in their pipelines. As of December 31, 2005, 33 percent of the identified pipelines in high consequence areas had been assessed, and over 2,000 repairs had been completed. Six of the 51 operators we interviewed also pointed to the benefit of improved communications within their companies. Investigations of pipeline incidents have shown that, in some cases, an operator possessed information that could have prevented an incident but did not share the information with the employees who needed it most. The integrity management program requires operators to integrate pipeline data from various sources within the company to identify threats to the pipelines, leading to more interaction among different departments within pipeline companies. While all operators we contacted generally believe integrity management is beneficial, the program is not without its costs.
For example, over half of the operators with whom we spoke said that they have hired additional staff or contractors as a result of integrity management requirements. Furthermore, one operator told us that, although it assessed its pipeline before the gas integrity management program was enacted, it now spends about 5,000 to 10,000 more hours per year on assessments because it must integrate data from multiple sources—some of which are formatted differently—requiring that the operator make all data consistent before using it. Another operator told us that implementation of the program was costly because its gas transmission pipelines are located under pavement. These pipelines could not be assessed using tools that run through pipelines, so the operator had to excavate, visually assess, and repave over the pipelines, which is costly. A third operator estimated that it had spent between $8.5 million and $10 million on developing its integrity management program and related systems. This operator also estimated that its annual operating costs had increased by $16.5 million to $21.5 million to comply with the integrity management regulations, even though it had an aggressive inspection and testing program prior to those regulations.

Operators also cited other concerns about implementing their integrity management programs. One of the concerns operators identified most frequently, cited by 19 of the 51 operators we contacted (37 percent), related to the level of documentation needed to support their gas integrity management programs. PHMSA requires operators to develop an integrity management program and provides a broad framework for the elements that should be included in the program. The regulations provide operators the flexibility to develop their programs to best suit their companies' needs, but each operator must develop and document specific policies and procedures to demonstrate its commitment to compliance with and implementation of the integrity management program. Operators may use existing policies and procedures if they meet the integrity management requirements. In addition, operators must document any integrity management-related decisions to demonstrate that they understand the risks to their pipelines and are systematically managing their pipelines for these risks. For example, an operator must document how it identified the threats to its pipeline and assessed the risks, how these risks will be managed, who was involved in these decisions and their qualifications, and the data they used. While the operators we contacted generally agreed with the need to document their policies and procedures, some said that the detailed documentation required for every decision is very time consuming and does not contribute to the safety of pipeline operations. In addition, a few operators expressed concern that they will not know if they have sufficient documentation until their program has been inspected. Initial inspections of operators by PHMSA and state pipeline agencies have confirmed that some operators are experiencing difficulty with documentation but generally are doing well with assessments and repairs. According to PHMSA and state officials, as operators continue to develop and implement their integrity management programs and as they are provided feedback during inspections, the documentation issues identified during these initial inspections should be resolved.
Another concern, raised by a majority of the operators, is the requirement to reassess their pipelines for corrosion problems at least every 7 years. We recently reported that while reassessments are useful, the 7-year requirement appears to be conservative.

Operators report to PHMSA semiannually on several performance measures that show the progress operators have made in implementing integrity management and, over time, should demonstrate the impact of integrity management on safety. Table 1 lists the performance measures and shows the progress operators reported as of December 31, 2005.

Total mileage reported and assessed: As a result of the technology that many operators are using to assess their pipelines, operators are assessing a much greater portion of total pipeline mileage than that which is located in high consequence areas. In addition, they are making repairs to these pipelines. Of the 51 operators we contacted, 36 (71 percent) are using in-line assessment tools that run inside the pipelines to assess the integrity of some or all pipelines within high consequence areas. These tools must be inserted into and removed from the pipelines at designated locations, and the runs between those locations often pass through areas other than high consequence areas. Consequently, operators reported having assessed about 44,000 miles of pipelines located outside high consequence areas, which represents about 15 percent of all gas transmission pipelines. Operators that use the in-line assessment tools told us that they assess the entire distance of pipeline between the insertion and retrieval points because, in doing so, they gather additional insights into the condition of their pipeline. While operators are not required to report to PHMSA the results of assessments in areas outside of high consequence areas, a number of operators with whom we spoke said that they plan to make or have made repairs identified through the assessments, regardless of where they are identified, thereby expanding the benefits of integrity management beyond the high consequence areas.

High consequence mileage reported and assessed: As of December 2005, operators had assessed about 6,700 miles of their 20,000 miles of pipeline—or about 33 percent—located in high consequence areas. This progress indicates that operators are well on their way to meeting the requirement to conduct baseline assessments on 50 percent of their pipelines in these areas by December 2007. Operators must then complete the rest of their baseline assessments by December 2012. Most of the operators with whom we spoke (48 of 51) said they had no major concerns about their ability to complete baseline assessments, as required.

Incidents, leaks, and failures: While pipelines are considered a relatively safe mode of transporting gas, integrity management is designed to improve pipeline safety and should lead to a reduction in the number of incidents, leaks, and failures over time. PHMSA and the pipeline industry have generally used the number of incidents, related fatalities, and injuries as a measure for determining the safety of pipelines. Since the inception of integrity management, 19 of the 305 incidents reported for all pipelines in fiscal years 2004 and 2005 occurred in high consequence areas. The majority of the incidents reported in high consequence areas—10 of the 19—were caused by third-party damage.
Leaks have traditionally been reported by operators in their annual reports, but this information is not generally aggregated nationwide, so it is not possible to determine how leaks in high consequence areas compare with those in other areas. Failures were not typically reported to PHMSA prior to integrity management; therefore, it is not possible to compare the number of failures in high consequence areas with those in other areas. As PHMSA collects information on incidents, leaks, and failures over time, the agency will be able to identify trends and make these comparisons.

Immediate and scheduled repairs completed: In addition to assessing pipelines, operators are also making progress in fulfilling the requirement to repair problems found on pipelines in high consequence areas. In the 2 years that operators have reported the results of integrity management, they have completed 340 repairs that were immediately required and another 1,981 scheduled repairs in high consequence areas. While it is not possible to determine the number of needed repairs that would have been identified without integrity management, it is clear that the requirement to routinely assess pipelines enables operators to identify problems that might otherwise go undetected. For example, one operator told us that it had complied with all the minimum safety standards on its pipeline, and the pipeline appeared to be in good condition. The operator then assessed the condition of a segment of the pipeline under its integrity management program and found a serious problem, causing it to shut down the pipeline for immediate repair.

While the integrity management performance measures should allow PHMSA to measure the impact of the program, the measures related to incidents, leaks, and failures could be improved to allow for better comparison of performance over time and to make them more consistent with other pipeline safety measures. For example, incident reporting requirements do not include an adjustment for changes in the price of natural gas, even though the value of gas released is a key factor in determining whether an incident must be reported to PHMSA. A reportable incident is defined, in part, as one in which the estimated property damage, including the cost of gas lost, meets a threshold of $50,000. Because this reporting threshold has not been adjusted as the price of gas has increased, it is difficult to use the number of incidents over time as an indicator of pipeline safety. For many years the price of gas was relatively stable. However, since 1999, natural gas prices have increased by about 179 percent, while the threshold for reporting an incident has not changed. As a result, smaller releases of gas from a pipeline meet the definition of an incident and artificially inflate the number of pipeline incidents. For example, in 1999, a release of about 16,100 thousand cubic feet of gas would have triggered the incident reporting requirement, compared with only about 5,800 thousand cubic feet of gas in 2005. In 2002, PHMSA began collecting information on the value of gas released during an incident. Adjusting the 183 gas transmission pipeline incidents that occurred in 2005 to reflect the price of gas in 1999 would have resulted in about 27 fewer incidents. PHMSA officials recognize the advantages of changing the reporting requirements to adjust for the changing price of gas, or to base them on the volume of gas rather than its value, but PHMSA has not yet initiated a rulemaking to change the reporting requirement.
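The effect of holding the $50,000 threshold fixed while gas prices rise can be verified with the report's own figures. In the Python sketch below, the per-unit prices are back-solved from the release volumes cited above rather than quoted from PHMSA data, so they are approximations for illustration.

```python
# Worked check of the reporting-threshold arithmetic above. Implied prices
# are back-solved from the report's volume figures, not quoted PHMSA data.

THRESHOLD = 50_000  # dollars of property damage, including the cost of gas lost

# Implied price per thousand cubic feet (Mcf):
price_1999 = THRESHOLD / 16_100  # ~$3.11 per Mcf
price_2005 = THRESHOLD / 5_800   # ~$8.62 per Mcf

# Consistent with the roughly 179 percent price increase cited above:
increase = (price_2005 - price_1999) / price_1999
print(f"{increase:.0%}")  # ~178%

def reportable_volume_mcf(price_per_mcf: float) -> float:
    """Smallest release (in Mcf) whose lost-gas value alone meets the fixed
    $50,000 threshold; it shrinks as the gas price rises."""
    return THRESHOLD / price_per_mcf

print(round(reportable_volume_mcf(price_2005)))  # ~5,800 Mcf
```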
In addition, the usefulness of the performance measure data is limited, in part, by inconsistencies in the reporting of the causes of incidents and leaks in high consequence areas compared with the rest of the pipeline system. For example, to report a leak within a high consequence area, operators may choose from three separate corrosion causes: internal corrosion, external corrosion, or stress-corrosion cracking. In contrast, to report a leak outside of a high consequence area, operators use one overall category for corrosion. Without consistent reporting of causes, it is difficult to compare the reasons for incidents and leaks in high consequence areas with those along the rest of the pipeline system. We are making recommendations to improve the consistency of the integrity management performance measures.

PHMSA has developed various tools to help prepare and assist federal and state inspectors in conducting inspections. These inspection tools include guidance documents for evaluating operators' integrity management programs, training courses to provide inspectors with knowledge of technical issues, and communication mechanisms. Overall, most state pipeline agency officials told us that these tools are useful, although about half of the state officials with whom we spoke have found it difficult to schedule the required training courses, and the majority have some concerns about the adequacy of their staffing. To address these concerns, PHMSA has taken steps to make it easier for state inspectors to attend training and supports a proposal from states to provide additional funding that could be used for staffing needs. PHMSA and states have begun inspections and expect to complete the first round of inspections no later than 2009. PHMSA had completed 20 of about 100 inspections as of June 2006, and states had begun or completed 117 of about 670 inspections as of January 2006. PHMSA and state officials reported that the initial results from these inspections show that operators are doing well in implementing the assessment and repair requirements of the integrity management program but need to improve documentation of their programs' processes.

In collaboration with state pipeline agencies, PHMSA developed guidance documents—inspection protocols, supplemental guidance, and "frequently asked questions"—to assist federal and state inspectors in evaluating operators' integrity management programs. The inspection protocols provide a roadmap for conducting inspections; they walk inspectors through the integrity management requirements in the regulations to help inspectors verify that an operator's program complies with them. These inspection protocols are available to the public, and many operators with whom we spoke said they had reviewed the protocols when developing their programs. To supplement the inspection protocols, PHMSA has provided inspectors with additional guidance on the types of questions to ask operators, documents to review, and key elements to consider in evaluating operators' programs. However, this supplemental guidance has not been provided to operators: it is intended as suggestions for inspectors rather than requirements for operators because PHMSA expects programs to differ, given that each operator is unique. In addition, PHMSA posts "frequently asked questions" and corresponding answers to its Web site. This tool further clarifies the regulations and PHMSA's expectations for what should be included in operators' plans.
PHMSA also developed a series of required training courses to inform federal and state inspectors of technical topics relevant to the integrity management regulations. The 10 training courses—4 classroom and 6 computer-based courses—take about 20 days to complete and address the integrity management inspection protocols as well as specific threats to the pipelines (such as stress-corrosion cracking and internal and external corrosion) and different assessment techniques (such as in-line assessment and direct assessment). While most (13 of 21) state officials with whom we spoke consider the required training to be important, about half noted that it is difficult for inspectors to schedule the classroom training on inspection protocols. PHMSA has taken steps to help state inspectors attend this training, such as offering the course in each of the five PHMSA regional offices in 2005 and providing travel funds for two inspectors from each state to attend. In addition, PHMSA maintains flexibility in scheduling the course and schedules classes once it receives enough requests. As a result, according to PHMSA records, at least one inspector from 46 of 47 states has attended the required training. The remaining state agency reported that it had confirmed that the gas transmission pipeline operators in its state do not have any pipelines in high consequence areas.

Another tool that PHMSA and state pipeline agencies may use is on-the-job training. PHMSA invites state inspectors to participate in PHMSA-led inspections of interstate operators, which allows state inspectors to learn how PHMSA conducts inspections, to ask questions, and to gain experience in using the protocols. The majority (12 of 21) of state officials with whom we spoke indicated that their inspectors have, or will have, participated in PHMSA-led inspections before conducting their own inspections. As time permits, PHMSA inspectors also will attend state-led inspections to provide guidance and answer questions.

Finally, PHMSA has implemented several mechanisms—such as Web sites, conference calls, and meetings—to communicate with federal and state inspectors. For example, PHMSA created a restricted Web site where federal and state inspectors may obtain guidance documents, access information pertaining to inspections, pose questions on the integrity management program, and communicate with other inspectors. Through this tool, inspectors may learn from other inspectors' experiences by reviewing documentation of completed inspections that is posted. All completed federal inspections will be posted, and 28 states reported that they intend to post the results of their inspections as well. PHMSA also holds conference calls and periodic meetings with federal and state inspectors to discuss their experiences and identify opportunities to improve the inspection program. In addition, PHMSA keeps state pipeline agencies informed about gas integrity management through regular updates via the National Association of Pipeline Safety Representatives. These updates include Web site links and status reports on issues such as training classes, upcoming inspections, and work groups. Although communication between PHMSA and states has been problematic in the past, the majority of states (41 of 47) reported that PHMSA's efforts to improve communication and guidance pertaining to gas integrity management have been useful.
PHMSA and state pipeline agencies plan to conduct more than 700 gas integrity management inspections, with the majority expected to be completed no later than 2009. PHMSA anticipates conducting a total of about 100 inspections of interstate gas transmission pipeline operators, of which about 80 are expected to have pipelines in high consequence areas. The 47 state pipeline agencies anticipate conducting a total of about 670 inspections of intrastate gas transmission operators, including those with and without pipelines in high consequence areas. The majority of states (41 of 47) reported that they will each conduct fewer than 20 inspections, although one state reported that it will conduct as many as 256 inspections. Just as operators continually assess their pipelines, PHMSA and states plan to inspect operators' programs on a regular basis. PHMSA plans to conduct inspections of operators' programs at least once every 3 or 4 years, and more than half of the state agencies plan to conduct these inspections at least once every year or 2.

To conduct these inspections, PHMSA currently has 22 trained inspectors, 9 of whom are assigned exclusively to conducting integrity management inspections. In 2002, we reported that PHMSA's efforts to identify the resources and expertise needed to implement its integrity management approach were hampered by the lack of an up-to-date assessment of current and future staffing and training needs. In response to our recommendation to develop a workforce plan, PHMSA drafted a workforce plan in March 2005 that considers the essential elements of such a plan. For example, the plan identifies trends likely to affect the number and types of field staff needed and identifies competencies needed to meet PHMSA's strategic goals. In addition, the plan includes an examination of how its workforce should be deployed across the organization and suggests assigning staff to regions based on regional workload and need.

State officials with whom we spoke reported additional staffing concerns as a result of integrity management inspections. State pipeline agencies generally employ between one and five inspectors to perform these inspections, although they may not be dedicated to integrity management. The Pipeline Safety Improvement Act of 2002 increased the workload of state pipeline agencies by establishing three new inspection requirements: integrity management, operator qualifications, and public awareness programs. However, state staffing and funding levels were generally not increased to fulfill these additional responsibilities. States are handling the increased workload in various ways, such as combining inspections, modifying the frequency of inspections, or focusing efforts on completing one new inspection at a time. For example, a few states focused on completing operator qualifications inspections before starting integrity management inspections. In addition, 11 state officials said that it is difficult to hire qualified staff, such as engineers, who are needed for the technical nature of the integrity management inspections. According to two state officials, state agencies are losing trained inspectors because state salaries are typically lower than those paid by operators.
To help states deal with increased workload and hiring issues, the National Association of Pipeline Safety Representatives has recommended that PHMSA be allowed to reimburse state pipeline agencies up to 80 percent of their inspection program costs—up from the current allowance of up to 50 percent of program costs. PHMSA supports this increase, and such an increase is included as part of the proposed Pipeline Safety Improvement Act of 2006 (H.R. 5678 and H.R. 5782).

PHMSA and about half of the state pipeline agencies have begun conducting inspections of operators' implementation of the integrity management requirements. PHMSA and states generally started initial integrity management inspections in 2005. As of June 2006, PHMSA reported having completed 20 of about 100 inspections, encompassing about 7,063 of the 10,039 miles in high consequence areas that PHMSA is responsible for inspecting. About half of the state pipeline agencies reported that they had started or completed 117 of about 670 inspections as of January 31, 2006. In response to our survey, most of the remaining states reported that they anticipate beginning inspections in 2006. PHMSA selected the operators for initial inspections based on their history of working well with PHMSA and their expected level of program development, so that PHMSA inspectors could gain experience with the inspection protocols and process. After the first nine inspections, PHMSA met with inspectors to discuss the process and has made some revisions to the protocols based on inspectors' recommendations. PHMSA's current and future inspection schedule is determined by using a risk-ranking system that considers factors such as an operator's compliance history and pipeline mileage. Using this system should result in inspections of operators with a higher potential of having an incident or problem before operators with a lower potential. According to PHMSA's "Guidelines for States Participating in the Pipeline Safety Program," states should use the date of the last inspection and operating history to prioritize operators for inspections. Seven state officials told us they initially inspected all operators' programs to ensure they had a program and had identified their high consequence areas, and that a more detailed inspection would be done in the future.

According to a PHMSA official and state officials, initial integrity management inspections show that operators are generally experiencing few problems with assessing and repairing pipelines, although some operators are having trouble documenting their processes and procedures and thus are failing to get adequate credit for their efforts. PHMSA considers documentation important for ensuring that an operator is appropriately implementing the program, that the operator is committed to continued implementation, and that the program is being consistently implemented throughout an operator's organization. It is also important to document the processes and procedures so that knowledge of the process is not lost as staff changes occur. According to PHMSA, the documentation should identify the person involved in the decision or task, the information needed and steps taken to make the decision or complete a task, and the results. Two state officials said that the operators in their states with few transmission pipeline miles were making efforts to comply but were struggling with implementing integrity management requirements.
For example, the operator of a paper mill that also owns and operates about 8 miles of gas transmission pipeline to transport gas to its production facility stated that it is struggling to understand and comply with integrity management requirements. According to PHMSA and state officials, as operators continue developing and implementing their integrity management programs, and as they are provided feedback during inspections, the issues identified during these initial inspections should be resolved. PHMSA is continuing to determine the appropriate enforcement actions, if any, as a result of its initial inspections and will consider all available enforcement tools, including civil penalties. As of June 30, 2006, six enforcement actions have been processed but no fines have been assessed. Four operators have been issued a Notice of Amendment, which indicates a need to improve their written processes and procedures. In addition, two of these operators have also received a Notice of Probable Violation and Proposed Compliance Order for potentially failing to fully comply with the risk analysis requirement in the rule. According to a PHMSA official, the enforcement actions processed to date are proposed actions and will become final after the operators have had an opportunity for a hearing. PHMSA has developed a process that provides consistent standards for the inspectors and regional directors to use in determining when an enforcement action is warranted. The process lays out criteria to determine the severity of each issue identified during the inspection, whether enforcement action is appropriate and, if so, what type of action to take. As part of their agreements with PHMSA, most states are responsible for taking appropriate enforcement actions as a result of their inspections. Most state officials said that issues identified during their initial integrity management inspections have not warranted enforcement actions. However, one state official with whom we spoke issued a notice of violation to an operator that had not developed an integrity management plan. The operator, with about 11 miles of gas transmission pipelines, told the state that it was unaware of the requirement to develop an integrity management program. The state official told us that, after the inspection, the operator immediately began developing a program, and the state inspector is to revisit this operator within 6 months. The gas integrity management program has made a promising start. The program’s risk-based approach is supported by industry, state pipeline agencies, safety advocates, and operators. Although the national transmission pipeline system is extensive, much of the population that is potentially affected by a pipeline event is concentrated in highly populated areas, which will be provided additional protection through the program. Thus far, operators are successfully implementing the critical assessment and repair requirements, and their documentation concerns should be resolved as operators gain experience with the program and receive feedback during inspections. While the progress in implementing the program to date is encouraging, PHMSA and state oversight will be critical to ensure that operators continue to effectively implement integrity management. As the program matures, PHMSA’s performance measures should allow the agency to quantitatively demonstrate the program’s impact on the safety of pipelines. 
However, relatively minor changes in how some of the measures are reported could help improve their usefulness and PHMSA's ability to analyze and demonstrate the program's impact over time. To improve the consistency and usefulness of the integrity management performance measures, we are recommending that the Secretary of Transportation direct the Administrator for the Pipeline and Hazardous Materials Safety Administration to take the following two actions: (1) revise the definition of a reportable incident to consider changes in the price of natural gas and (2) establish consistent categories of causes for incidents and leaks on all gas pipeline reports.

We provided a draft of this report to DOT for review and comment. We received oral comments from DOT officials, including the Assistant Administrator and Chief Safety Officer of PHMSA. The officials generally agreed with the report's findings and recommendations. They agreed with the need to revise the definition of a reportable gas transmission pipeline incident, noting that doing so provides a more realistic and consistent basis for reporting. PHMSA has already begun informal discussions with various parties on this issue and expects to initiate the rule making necessary to change the definition of a reportable gas incident soon. The officials also agreed with the recommendation to have consistent categories of causes for incidents and leaks for all gas pipeline reports. PHMSA is evaluating several alternatives to reconcile the differences in the categories and expects to initiate action to implement this recommendation.

We are sending copies of this report to congressional committees and subcommittees with responsibility for transportation safety issues; the Secretary of Transportation; the Administrator, PHMSA; the Assistant Administrator and Chief Safety Officer, PHMSA; and the Director, Office of Management and Budget. We will also make copies available to others upon request. This report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at siggerudk@gao.gov or (202) 512-2834. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made key contributions to this report are listed in appendix III.

The Pipeline Safety Improvement Act of 2002 directed GAO to assess the effects on public safety stemming from the gas transmission pipeline integrity management program. Accordingly, the objectives of our report were to examine (1) the effect on public safety of the gas transmission pipeline integrity management program and (2) the plans of the Pipeline and Hazardous Materials Safety Administration (PHMSA) and state pipeline safety agencies to oversee gas transmission pipeline operators' implementation of integrity management requirements. To address these objectives, we reviewed laws, regulations, performance measure data, and PHMSA guidance and inspection reports related to the gas integrity management program. We also interviewed PHMSA officials and representatives from gas pipeline trade associations, pipeline safety advocacy groups, state pipeline agencies, and gas transmission pipeline operators. In addition, we reviewed prior GAO reports related to pipeline safety.
To determine the effect that the gas integrity management program requirements have had on public safety, we analyzed how those requirements compare with minimum safety requirements to understand what additional requirements operators were subject to as a result of integrity management. We discussed with PHMSA officials how the regulations were designed and developed to improve public safety. Since the integrity management requirements apply to a relatively small percentage of all transmission pipeline miles—about 7 percent—we estimated the percentage of the population living along pipelines that should receive additional protection as a result of integrity management because they are located in highly populated areas. We used Census data to estimate the percentage of the population living within 660 feet of a transmission pipeline that is located in urban areas, which we considered highly populated areas (a simplified sketch of this overlay analysis appears at the end of this section). We used Census data to identify highly populated areas because the specific locations that operators have identified as high consequence areas were not readily available. Operators have identified a total of 20,294 miles of gas transmission pipelines in high consequence areas, and we have likewise identified a total of about 22,000 miles of pipelines in highly populated areas. Therefore, our estimate of pipelines in highly populated areas is a reasonable approximation of the pipelines in high consequence areas.

To identify and understand the benefits and challenges the operators face in developing and implementing their integrity management programs, we contacted 51 gas transmission pipeline operators to discuss their experiences and views on the program. We selected a range of operators with either large or small numbers of transmission pipeline miles, since this could indicate the level of resources a particular operator would have to draw from to develop its integrity management program. We also selected operators based on a mixture of interstate and intrastate operators and considered the proportion of pipeline miles that each operator had in high consequence areas in our selection process. The information that we obtained from these operators is not generalizable to all gas transmission pipeline operators. We also discussed the integrity management program and its requirements with gas pipeline trade associations, pipeline safety advocacy groups, and state pipeline agencies to obtain their opinions on the benefits, challenges, and performance measures of the program. In addition, we analyzed the integrity management performance measure data reported by operators to PHMSA. We assessed the internal controls and the reliability of the data elements needed for this engagement and determined that they were sufficiently reliable for our purposes. We compared the reporting requirements for integrity management performance measures with other reported pipeline data. Given the early stages of implementation of the integrity management program, we determined that there was not enough comparable historical data to conduct a trend analysis to quantify the impact of the program to date.

To determine PHMSA's plans to oversee operators' implementation of the integrity management program, we spoke with PHMSA officials about the inspection tools it developed to understand the purpose of the tools, their development, the information that both federal and state inspectors receive about them, and plans for continual evaluation and improvement of the inspection program.
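The population-proximity estimate described above is, in essence, a buffer-and-overlay analysis. The sketch below shows one way such an analysis could be set up; it is a hypothetical illustration, not GAO's actual methodology, and the file names and column names are assumptions.

```python
# Hypothetical illustration of a buffer-and-overlay estimate (not GAO's actual
# methodology). File names and column names are assumptions.
import geopandas as gpd

# Both layers are assumed to use a projected coordinate system measured in feet.
pipelines = gpd.read_file("transmission_pipelines.shp")
blocks = gpd.read_file("census_blocks.shp")  # assumed columns: population, urban

# Build a 660-foot corridor around all pipeline centerlines.
corridor = pipelines.buffer(660).unary_union

# Keep only census blocks that touch the corridor.
near = blocks[blocks.intersects(corridor)]

# Share of the corridor population that lives in urban (highly populated) areas.
share_urban = near.loc[near["urban"] == 1, "population"].sum() / near["population"].sum()
print(f"About {share_urban:.0%} of the population near pipelines is in urban areas")
```

A real analysis would also need to handle blocks only partially inside the corridor (for example, by apportioning population by area), which this sketch omits for brevity.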
We also reviewed the integrity management regulations, inspection protocols, supplemental guidance, frequently asked questions, and other guidance documents that inspectors may use to conduct integrity management inspections. While we compared the inspection protocols with the gas integrity management regulations to ensure that the protocols are aligned with the regulations, we did not evaluate the adequacy of these documents. We reviewed PHMSA requirements for both integrity management and core training, the schedule of training classes, and attendance records of state inspectors who have attended training on the inspection protocols. We also reviewed PHMSA's schedule of inspections and documentation on how the agency prioritizes operators for inspections. In addition, we reviewed PHMSA's workforce plan dated March 2005 to understand the agency's efforts to identify the resources and expertise needed for integrity management.

To understand the plans of state pipeline agencies to oversee operators' implementation of integrity management requirements, we surveyed the 46 state pipeline agencies and the District of Columbia pipeline agency that have responsibility for conducting gas integrity management inspections. We pretested the survey with three states prior to deployment. The survey covered state plans for inspections, resources and challenges, and communication with PHMSA. All 46 state agencies and the District of Columbia responded to our survey. (See app. II for a copy of the survey and aggregated results.) We then selected 15 states to contact to gain additional information on challenges the states face as a result of integrity management, benefits of the program to the pipeline industry, results of inspections started or completed, performance measures, and communication with PHMSA. We considered the following factors when selecting states to contact: geographic dispersion, whether inspections had been started or completed as of January 31, 2006, and whether states reported facing staffing and/or training challenges to a great or very great extent. In addition, we contacted three states prior to developing the survey. In total, we spoke with officials from 21 state pipeline agencies. These state agencies accounted for 103 of the 117 inspections that had been started or completed as of January 31, 2006. However, the information obtained from these conversations is not generalizable to all state pipeline agencies. We also reviewed documents from the National Association of Pipeline Safety Representatives to better understand the role of state pipeline agencies in overseeing operators. We also reviewed PHMSA's guidance for state pipeline programs but did not evaluate PHMSA's oversight of state pipeline programs.

To understand the extent to which operators were complying with the integrity management requirements, we reviewed reports from 10 PHMSA inspections and 10 inspections from two states. Our review of the inspection reports was for illustrative purposes, and the results of our review cannot be generalized to all operators. We also spoke with PHMSA officials about their enforcement program and enforcement actions to date, and we reviewed regulations and PHMSA guidance on what enforcement actions may be taken and how PHMSA determines the appropriate action to take as a result of gas integrity management inspections.
Since states were not required to develop a separate enforcement plan for gas integrity management and most state officials with whom we spoke had not taken any enforcement actions, we did not review state enforcement programs.

The U.S. Government Accountability Office (GAO), an independent congressional agency, was required by the Pipeline Safety Improvement Act of 2002 (PL 107-355) to assess and evaluate the effects on public safety of the requirements for the implementation of gas transmission pipeline integrity management programs (IMP). As part of our work, GAO is reviewing how the Office of Pipeline Safety (OPS) within the Pipeline and Hazardous Materials Safety Administration plans to ensure that pipeline operators are complying with the IMP regulations. Given state pipeline agencies' role in inspecting intrastate pipeline operators, we would like to understand the extent to which states will be inspecting operators' implementation of IMP. The following survey is intended to help us understand state plans for conducting IMP inspections, including the development of an inspection program and the resources required to conduct inspections. GAO is not auditing state inspection programs in any way.

Instructions for Completing This Questionnaire: This questionnaire can be filled out using MS-Word and returned via Email, or, if you prefer, you may print the questionnaire and complete it by hand. If you complete it by hand, you can return your survey via fax or mail. If you are completing the survey in MS-Word, follow these instructions: Please use your mouse to navigate by clicking on the field or check box you wish to answer. To select a check box or button, simply click on the center of the box. To change or deselect a check box response, simply click on the check box and the 'X' will disappear. To answer a question that requires that you write a comment, click on the answer box and begin typing. These boxes are highlighted in yellow. The box will expand to accommodate your answer. To assist us, we ask that you complete and return this survey by Friday, March 3, 2006. To return by Email: Once the survey is completed, save this file to your computer desktop or hard drive and attach the file as part of your Email message to FrevertH@gao.gov or EdelsteinM@gao.gov. To return by fax: Print the survey, complete it by hand, and fax it to 202-512-4852. Please fax to the attention of Heather Frevert or Maria Edelstein.

Please provide the following information for the individual coordinating the completion of this survey so that we may contact them to clarify any responses, or obtain additional information, if necessary. [Name, title, and telephone fields omitted.]

Before completing the survey, please note the following: Unless otherwise indicated, all responses should be made about your program at the state level. There is space for your comments at the end of the survey. We recognize that it is early in the IMP implementation process, and that your program may change, as well as your opinions about the process. We ask that you answer these survey questions as they pertain to your current program status and your opinions as of today.

1. How many gas transmission pipeline operators do you currently have oversight responsibility for? [Frequency table of responses omitted.]

2. How many gas integrity management program (IMP) plans do you expect to have oversight responsibility for, given that multiple operators may follow the same IMP plan? [Response categories ranged up to "over 50"; tabulated results omitted.]
3. Does your state have its own gas IMP regulations that are separate from the federal IMP regulations? No (46); Yes (1)

3a. If yes, briefly explain how your regulations are different from the federal IMP regulations.

4. To what extent do you expect that gas IMP requirements will protect public safety? Very great extent (3); Great extent (10); Moderate extent (16); Some extent (8); Little or no extent (0); Don't know (9)

5. In measuring the effectiveness of the gas transmission integrity management regulations, do you currently collect any performance measures that are above and beyond what the federal gas IMP rules require? No (45); Yes (2)

6. In your opinion, are additional federal performance measures needed to measure the effectiveness of the gas transmission integrity management regulations? No (17); Yes (4); Undecided (25)

Gas Integrity Management Program Inspections

7. Will you follow the Office of Pipeline Safety's (OPS) inspection protocols when conducting gas IMP inspections? Yes, with no changes to the protocol (43) SKIP TO QUESTION #9; Yes, but with some changes to the protocol (3) SKIP TO QUESTION #9; No, we will not follow the OPS protocols (0)

8. If you will not follow the OPS protocols when conducting inspections, will you use inspection protocols that your state developed? Yes: n.a.; No: n.a. (0 responses to "No, we will not follow . . ." above)

9. Has your state started inspections of gas IMP plans? Yes (23); No (SKIP TO QUESTION #10)

a. On approximately what date did you start the inspections? (MM/DD/YY) (Responses ranged from 3/05 to 2/06, with one respondent starting inspections in 5/01.)

b. As of January 31, 2006, how many gas IMP inspections have been completed? (7 respondents reported 0 completed inspections, 9 reported between 1 and 3, 3 reported between 4 and 7, 1 reported 50, and 3 gave no response.)

c. As of January 31, 2006, how many gas IMP inspections have been started but not completed? (8 respondents reported 0, 9 reported between 1 and 4, 1 reported 9, 1 reported 12, and 4 gave no response.) SKIP TO QUESTION #11

10. If you have not begun inspections, have you set a date for gas IMP inspections to begin? Yes (17). If yes, on approximately what date will inspections begin? (Responses ranged from April 2006 through the end of 2006, with 1 respondent saying 2007.)

11. How long do you anticipate it will take your state to inspect all of the gas IMP plans you are responsible for? Up to one year; Between one and two years; Between two and three years; More than three years; Other time frame (please specify)

12. How often do you anticipate that you will inspect each of the gas IMP plans you are responsible for? Once a year; Once every two years; Once every three years; Other time frame(s) (please specify)

13. Do you plan to report the results of completed gas IMP inspections to OPS?

14. How would you describe the number of staff that your agency currently has to implement the gas IMP inspection program?
We do not have enough staff at this time (27); We have enough staff at this time (18); We have more than enough staff at this time (1)

15. How many inspectors do you currently have that can perform gas IMP inspections?

16. To date, how many inspectors have received OPS training on inspection protocols and are currently available to conduct inspections?

17. To what extent has the state's frequency of conducting other pipeline inspections been impacted by the addition of gas IMP inspections? Very great extent (2); Great extent (7); Moderate extent (17); Some extent (9); Little or no extent (7); Don't know (5)

18. To what extent does your agency experience the following challenges as a result of implementing the gas IMP inspection program? a. Staffing challenges? b. Funding challenges? c. Training challenges? d. Another challenge? (please specify) e. Another challenge? (please specify) [Tabulated responses omitted.]

19. How useful has the overall guidance that OPS has provided on your IMP inspection roles and responsibilities been? Extremely useful (4); Very useful (23); Moderately useful (9); Somewhat useful (5); Not at all useful (1); Don't know (5)

20. Do you receive information or guidance on conducting gas IMP inspections from the following sources, and is each a main source of information or guidance on conducting gas IMP inspections? a. OPS State Liaison? b. Other OPS Regional Staff? c. OPS Training Staff? d. National Association of Pipeline Safety Representatives (NAPSR)? e. Other source? (please specify) [Tabulated responses omitted.]

21. Please provide any additional comments that you have in this space. If your comments are in response to a particular question, please indicate the question number to which you are referring.

Thank you for completing the survey!

In addition to the individual named above, Jennifer Clayborne, Tamera Dorland, Maria Edelstein, Heather Frevert, Cindy Gilbert, Brandon Haller, John Mingus, and Sara Vermillion made key contributions to this report.

The Pipeline Safety Improvement Act of 2002 established a risk-based program for gas transmission pipelines: the integrity management program. The program requires operators of natural and other gas transmission pipelines to identify "high consequence areas" where pipeline incidents would most severely affect public safety, such as those occurring in highly populated or frequented areas. Operators must assess pipelines in these areas for safety risks and repair or replace any defective segments. Operators must also submit data on performance measures to the Pipeline and Hazardous Materials Safety Administration (PHMSA). The 2002 act also directed GAO to assess this program's effects on public safety. Accordingly, we examined (1) the effect on public safety of the integrity management program and (2) PHMSA and state pipeline agencies' plans to oversee operators' implementation of program requirements. To fulfill these objectives, GAO interviewed 51 gas pipeline operators and surveyed all state pipeline agencies.
The gas integrity management program is designed to benefit public safety by supplementing existing safety requirements with risk-based management principles that focus on safety risks in high consequence areas, such as highly populated or frequented areas. Early indications show that the condition of transmission pipelines is improving as operators complete assessments and related repairs of their pipelines. For example, as of December 31, 2005, operators had assessed 33 percent of pipelines in high consequence areas and completed over 2,000 repairs. Furthermore, up to 68 percent of the population living near gas transmission pipelines is expected to benefit from improved pipeline safety because they live in highly populated areas. Representatives from the pipeline industry, safety advocacy groups, and state pipeline safety agencies generally agree that integrity management improves public safety, but operators noted that the program can be costly to implement and cited concerns with implementing the program, such as meeting the documentation requirements. PHMSA's performance measures should demonstrate the impact of the program over time. However, we are recommending revisions to improve the measures. For example, adjusting the incident reporting requirement to account for changes in the price of natural gas would allow PHMSA to more accurately track trends in pipeline incidents.

PHMSA and states plan to use a variety of inspection tools to oversee operators' implementation of integrity management requirements and expect to complete the first round of inspections no later than 2009. To assist in conducting these inspections, PHMSA has developed a range of tools, including guidance documents and training courses for inspectors. Overall, state agencies have found these tools to be useful, although some states have found it difficult to schedule the required training courses and have some concerns about the adequacy of their staffing. To address these concerns, PHMSA is taking steps to make it easier for state inspectors to attend the training and supports providing additional funding to states. Initial results from 20 federal inspections and 117 state inspections show that operators are making good progress in assessing pipelines and making repairs, but they generally need to better document their decisions and processes.
MDA’s BMDS is being designed to counter ballistic missiles of all ranges—short, medium, intermediate, and intercontinental. Because ballistic missiles have different ranges, speeds, sizes, and performance characteristics, MDA is developing multiple systems that, when integrated, provide multiple opportunities to destroy ballistic missiles before they can reach their targets (a simple illustration of this layered-defense arithmetic follows this section). The BMDS architecture includes space-based sensors, ground- and sea-based radars, ground- and sea-based interceptor missiles, and a command and control, battle management, and communications system to provide the warfighter with the necessary communication links to the sensors and interceptor missiles. Table 1 provides a brief description of individual BMDS systems, which MDA refers to as elements of the BMDS. As noted in the table, two programs were proposed for cancellation in April 2013 as part of DOD’s Fiscal Year 2014 President’s Budget Submission.

When MDA was established in 2002, the Secretary of Defense granted it exceptional flexibility to set requirements and manage the acquisition of the BMDS in order to quickly deliver protection against ballistic missiles. This decision enabled MDA to rapidly deliver assets, but we have reported that it has come at the expense of transparency and accountability. Moreover, to meet tight deadlines, MDA has employed high-risk acquisition strategies that have resulted in significant cost growth, schedule delays, and, in some cases, performance shortfalls. Examples of key problems we have cited in reports in recent years are highlighted below.

In recent years, MDA has experienced several test failures. These, as well as a test anomaly and delays, disrupted MDA’s flight test plan and the acquisition strategies of several components. Overall, these issues forced MDA to suspend or slow production of three out of four interceptors being manufactured. The GMD program in particular has been disrupted by two test failures in its attempts to demonstrate the CE-II interceptors. As a result of a failed flight test in January 2010 due to an assembly process quality issue, MDA added a retest designated as Flight Test GMD-06a (FTG-06a). However, this retest also failed in December 2010 due to the effects of vibration on the kill vehicle’s guidance system. As a result of these failures, MDA decided to halt GMD flight testing and restructure its multiyear flight test program, halt production of the GMD interceptors, and redirect resources to return-to-flight testing activities. Additionally, as we reported in April 2013, the costs to demonstrate and fix CE-II capability have grown from $236 million to over $1.2 billion and are continuing to grow.
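As referenced above, the value of integrating multiple systems can be shown with simple probability arithmetic. The sketch below is a generic illustration, not an MDA model: the number of layers and the single-shot probabilities are hypothetical, and real engagements are not independent events.

```python
# Generic illustration, not an MDA model. With several independent intercept
# opportunities, the chance that at least one succeeds is 1 minus the product
# of each layer's miss probability. Layer count and probabilities are hypothetical.
def layered_intercept_probability(single_shot_probs: list[float]) -> float:
    miss_all = 1.0
    for p in single_shot_probs:
        miss_all *= (1.0 - p)  # probability that every layer so far has missed
    return 1.0 - miss_all

# Three hypothetical layers, each with a 70 percent single-shot probability:
print(f"{layered_intercept_probability([0.7, 0.7, 0.7]):.3f}")  # 0.973
```

Under these assumptions, three modest layers yield a much higher overall probability than any single layer alone, which is the intuition behind engaging a missile at multiple points along its trajectory.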
We have reported on the limited usefulness of MDA’s acquisition baselines for oversight due to (1) a lack of clarity, consistency, and completeness; (2) a lack of high-quality supporting cost estimates and schedules; and (3) instability in the content of the baselines. MDA has made limited progress in developing the individual system models it uses to assess performance of the BMDS elements and linking those models. Models and simulations are critical to understanding BMDS capabilities. The complex nature of the BMDS, with its wide range of connected elements, requires integrated system-level models and simulations to assess its performance in a range of system configurations and engagement conditions.

Quality issues have also impeded missile defense development in recent years. These were due to workmanship issues, the use of undocumented and untested manufacturing processes, and poor control of manufacturing materials, among other factors. Congress and DOD have taken steps in recent years to address concerns over MDA’s acquisition management strategy, accountability, and oversight. These include efforts to provide more information on cost, schedule, and other baselines; efforts to prevent quality problems; and efforts to begin obtaining independent cost estimates.

In April 2013, we reported that in the past year MDA gained important knowledge through its test program, including successfully conducting its most complex integrated air and missile defense flight test to date, and it took some positive steps to reduce acquisition risks for two of its programs. It has also improved the clarity of baseline information it reports to Congress. Specifically, in April 2013 we reported that in October 2012, MDA conducted the largest integrated air and missile defense flight test to date, achieving near-simultaneous intercepts of multiple targets by various BMDS interceptors. This test was a combined developmental and operational flight test that for the first time used warfighters from multiple combatant commands and employed multiple missile defense systems. All five targets—three ballistic and two cruise missiles—were launched and performed as expected. In this test, THAAD also intercepted a medium-range target for the first time, and an Aegis ship successfully conducted a Standard Missile-2 Block IIIA engagement against a cruise missile. This test also provided valuable data to evaluate interoperability between several systems during a live engagement.

In April 2013, we reported that in fiscal year 2012, the Aegis BMD SM-3 Block IB and THAAD programs also attained important knowledge in their flight test programs. In May 2012, the Aegis BMD SM-3 Block IB system intercepted a short-range target for the first time. In June 2012, the system completed another successful intercept, which provided more insight into the missile’s enhanced ability to discriminate the target from other objects during an engagement. In October 2011, THAAD successfully conducted its first operational flight test prior to entering full-rate production. During the test, THAAD fired two missiles that intercepted two short-range targets, demonstrating that the system can perform under operationally realistic conditions from mission planning through the end of the engagement. Additionally, this test supported the resumption of interceptor manufacturing and was used by the Army as support for accepting the first two THAAD batteries.
This also marked the first time Army and DOD test and evaluation organizations confirmed that the test and its results resembled the fielded system.

We also reported in April 2013 that MDA took steps to reduce acquisition risk by decreasing the overlap between technology and product development for two of its programs—the Aegis BMD SM-3 Block IIA and Block IIB programs. By taking steps to reconcile gaps between requirements and available resources before product development begins, MDA makes it more likely that programs can meet cost, schedule, and performance targets. The Aegis BMD SM-3 Block IIA program added time and money to extend development following significant problems with four components. MDA reduced its acquisition risk by delaying the program’s system preliminary design review by more than one year, which allowed additional development of the components; as a result, the program successfully completed the review in March 2012. We also reported in April 2013 that the Aegis BMD SM-3 Block IIB program had taken important steps to reduce concurrency and increase the technical knowledge it planned to achieve before development by delaying product development until after its preliminary design review was completed.

MDA annually reports acquisition baselines for its programs to Congress in its BMDS Accountability Report (BAR) (Pub. L. No. 110-181, § 223(g), repealed by Pub. L. No. 112-81, § 231(b) (2011)). The resource baselines report life cycle costs, including operations and support, and disposal costs. The schedule baselines identify key milestones and tasks, such as important decision points, significant increases in performance knowledge, modeling and simulation events, and development efforts. Some also show time frames for flight and ground tests, fielding, and events to support fielding. In its 2012 BAR, MDA made several useful changes to its reported resource and schedule baselines in response to our concerns and congressional direction. For example, MDA reported the full range of life cycle costs borne by MDA; defined and explained more clearly what costs are in the resource baselines or were excluded from the estimates; included costs already incurred in the unit cost for Targets and Countermeasures so they were more complete; added a separate delivery table that provided more detailed information on deliveries and inventories; and added a list of significant decisions made or events that occurred in the past year—either internal or external to the program—that affected program progress or baseline reporting.

Although MDA has made some progress, the new MDA Director faces considerable challenges in executing acquisition programs; strengthening accountability; assessing alternatives before making new investment commitments; developing and deploying U.S. missile defense in Europe; and using modeling and simulations to understand the capabilities and limitations of the BMDS. In April 2013 we reported that though MDA has gained important insights through testing and taken some steps to reduce acquisition risk and increase transparency, it still faces challenges stemming from high-risk acquisition strategies. As noted earlier, MDA has undertaken and continues to undertake highly concurrent acquisitions. While some concurrency is understandable, committing to product development before requirements are understood and technologies are mature or committing to production and fielding before development is complete is a high-risk strategy that often results in performance shortfalls, unexpected cost increases, schedule delays, and test problems. It can also create pressure to keep producing to avoid work stoppages.
Our April 2012 report detailed how the Aegis BMD SM-3 Block IB, GMD, and THAAD programs undertook highly concurrent acquisition strategies. For example, to meet the presidential directive to deploy an initial set of missile defense capabilities by 2004, the GMD program concurrently matured technology, designed the system, tested the design, and produced and deployed an initial set of missile defense capabilities. CE-I interceptors were rapidly delivered to the warfighter, but they required an expensive retrofit and refurbishment program that is still ongoing. Similarly, MDA proceeded to concurrently develop, manufacture, and deliver 12 of the next generation of interceptors, the CE-IIs. They were also delivered prematurely to the warfighter and will require an extensive and expensive retrofit.

In April 2012, we also reported that the Aegis Ashore and PTSS programs were adopting acquisition strategies with high levels of concurrency. The Aegis Ashore program, for instance, began product development on two systems—one designated for testing and the other operational—and set the acquisition baseline before completing the preliminary design review. Best practices, by contrast, call for such baselines to be set after this review because the review process is designed to ensure the program has sufficient knowledge about resources and requirements before engaging in large-scale acquisition activities. Similarly, for its new PTSS, MDA planned to develop and produce two industry-built satellites while a laboratory-led contractor team was still in the development phase of building two lab development satellites. Such an approach would not enable decision makers to fully benefit from the knowledge about the design to be gained from on-orbit testing of the laboratory-built satellites before committing to the next industry-built satellites.

In our April 2013 report, we noted that the concurrent high-risk approaches for the GMD and Aegis BMD SM-3 Block IB programs were continuing to have negative effects, while the THAAD program was able to overcome most of its issues. For instance, discovery of the CE-II design problem while production was already under way increased MDA costs to demonstrate and fix CE-II capability from approximately $236 million to over $1.2 billion, due to the costs of additional flight tests (including target and test-range costs), investigating the failure, developing failure resolutions, and fixing the already delivered missiles. Costs continue growing because MDA further delayed the next intercept test planned for fiscal year 2012. At this time, the next intercept test date is not yet determined as MDA is considering various options. While the Aegis BMD SM-3 Block IB program slowed production to address developmental issues that arose when the program experienced a failure and a flight anomaly in early flight tests, it experienced further difficulties completing testing of a new maneuvering component—contributing to delays for a third flight test needed to validate the interceptor’s capability.

We also reported in April 2013 that MDA was continuing to follow high-risk acquisition strategies for its Aegis Ashore, PTSS, and Targets and Countermeasures programs. For example, this year we reported that the Targets and Countermeasures acquisition strategy is adding risk to an upcoming complex, costly operational flight test involving multiple MDA systems because it plans to use unproven targets.
Using these new targets puts this major test at risk of not being able to obtain key information should the targets not perform as expected. Developmental issues with this new medium-range target, as well as the identification of new software requirements, have already contributed to delaying the test, which was originally planned for the fourth quarter of fiscal year 2012 and is now planned for the fourth quarter of fiscal year 2013.

In 2012, we recommended MDA make adjustments to the acquisition schedules to reduce concurrency. DOD agreed and partially addressed the recommendation. Specifically, MDA reduced concurrency in the Aegis BMD SM-3 Block IIA and Block IIB programs, but continues to include high levels of concurrency in other programs as discussed above. We also recommended in 2013 that the Secretary of Defense direct MDA’s new Director to add non-intercept flight tests for each new type of target missile developed, in order to reduce risk. DOD partially concurred, stating that the decision to perform a non-intercept target test must be balanced against cost, schedule, and programmatic impacts. While there may be exceptions that need to occur when there is a critical warfighter need, we believe, whenever possible, that MDA should avoid using undemonstrated targets, particularly for costly and complex major operational tests.

In April 2013 we reported that while MDA made substantial improvements to the clarity of its reported resource and schedule baselines in fiscal year 2012, it has made little progress improving the quality of the cost estimates that support its resource baselines since we made a recommendation to improve these estimates in our March 2011 report. In particular, MDA’s resource baselines are not yet sufficiently reliable, in part because they do not include costs from military services in reported life cycle costs for its programs. Instability due to MDA’s frequent adjustments to its acquisition baselines also makes assessing progress over time extremely difficult and, in many cases, impossible. Despite some positive steps forward since 2004, the baselines are of limited use for meaningfully assessing BMDS cost and schedule progress.

In our March 2011 report, we assessed MDA life cycle cost estimates using the GAO Cost Estimating and Assessment Guide. We found that the cost estimates we assessed, which were used to support MDA’s resource baselines, were not comprehensive, lacked documentation, were not completely accurate, or were not sufficiently credible. In April 2013 we reported that, in June 2012, MDA completed an internal Cost Estimating Handbook, largely based on our guide, which, if implemented, could help address nearly all of the shortfalls we identified. Because the Handbook was only recently completed, it is too early to assess whether the quality of MDA’s cost estimates has improved.

In our April 2013 report, we found that while the agency made improvements to its reported resource baselines to include all of the life cycle costs funded by MDA from development through retirement of the program, the baselines do not include operation and support costs funded by the individual military services. According to our guide, cost estimates should be comprehensive. Comprehensive estimates include both the government and contractor costs of the program over its full life cycle, from inception of the program through design, development, deployment, and operation and support to retirement.
MDA officials told us in 2011 that MDA does not consider military service operation and support funds to be part of the baselines because the services execute the funds. It is unclear what percentage of life cycle costs operation and support costs represent for MDA programs because they have not been reported. For programs outside of MDA these costs can be significant, and as a result the reported life cycle costs for some MDA programs could be significantly understated. In our April 2013 report, we recommended that the Secretary of Defense direct MDA’s new Director to include in its resource baseline cost estimates all life cycle costs, specifically the operations and support costs from the military services, in order to provide decision makers with the full costs of ballistic missile defense systems. DOD partially concurred with this recommendation, agreeing that decision makers should have insight into the full life cycle costs of DOD programs, but disagreeing that they should be reported in MDA’s BAR. DOD did not identify how the full life cycle costs should be reported. We continue to believe that these costs should be reported because good budgeting requires that the full costs of a project be considered when making decisions to provide resources. In addition, DOD has reported full operation and support costs to Congress for major defense acquisition programs where one military service is leading the development of an acquisition planned to be operated by many military services. We also believe that MDA’s BAR is the most appropriate way to report the full costs to Congress because it already includes the acquisition costs and the MDA-funded operation and support costs.

In July 2012, we also used our Schedule Assessment Guide to assess five MDA program schedules that support the baselines and found that none fully met the best practices identified in the guide. For example, three programs took steps to ensure resources were assigned to their schedule activities, but one program did not do so and another only partially did so. Moreover, none of the five programs we reviewed had an integrated master schedule for the entire length of acquisition as called for by the first best practice, meaning the programs are at risk for unreliable completion estimates and delays. DOD concurred with our recommendations to ensure that best practices are applied to those schedules as outlined in our guide, and MDA programs have taken some actions to improve their schedules, though they have not yet had time to fully address our recommendations. We plan to continue to monitor their progress because establishing sound and reliable schedules is fundamental to creating realistic schedule and cost baselines.

Lastly, as we reported in March 2009, in order for baselines to be useful, they need to be stable over time so progress can be measured and so that decision makers can determine how to best allocate limited resources. In April 2013, we reported that most major defense acquisition programs are required to establish baselines prior to beginning product development. These baselines, as implemented by DOD, include key performance, cost, and schedule goals. Decision makers can compare the current estimates for performance, cost, and schedule goals against a baseline in order to measure and monitor progress. Identifying and reporting deviations from the baseline in cost, schedule, or performance as a program proceeds provides valuable information for oversight by identifying areas of program risk and their causes.
However, as we reported in April 2013, MDA only reports annual progress by comparing its current estimates for unit cost and scheduled activities against the prior year's estimates. As a result, MDA's baseline reports are not useful for tracking longer term progress. When we sought to compare the latest 2012 unit cost and schedule estimates with the original baselines set in 2010, we found that because the baseline content had been adjusted from year to year, in many instances the baselines were no longer comparable. I would like to highlight the problems we identified in Aegis Ashore to illustrate how these adjustments limited visibility into cost or schedule progress. MDA prematurely set the Aegis Ashore baseline before program requirements were understood and before the acquisition strategy was firm. The program has subsequently added significant content to the resource baseline to respond to acquisition strategy changes and requirements that were added after the baseline was set. In addition, activities from Aegis Ashore's 2010 BAR schedule baseline were split into multiple events, renamed, or eliminated altogether in the program's 2012 BAR schedule baseline. MDA also redistributed planned activities from the Aegis Ashore schedule baselines into several other Aegis BMD schedule baselines. These major adjustments in program content made it impossible to understand annual or longer-term program cost progress. Rearranging content to other baselines also made tracking the progress of these activities very difficult and in some cases impossible. We recommended in our April 2013 report that the Secretary of Defense direct MDA's new Director to stabilize the acquisition baselines so that meaningful comparisons can be made over time that support oversight of those acquisitions. DOD concurred with this recommendation. Our April 2013 report discussed a variety of other challenges facing MDA that I would like to highlight today. First, in light of growing fiscal pressures, it is becoming increasingly important that MDA have a sound basis before investing in new efforts. But MDA has not analyzed alternatives in a robust manner before making recent commitments. Second, during the past several years, MDA has been responding to a mandate from the President to develop and deploy new missile defense systems in Europe for defense of Europe and the United States. Our work continues to find that a key challenge facing DOD is to keep individual system acquisitions synchronized with the planned time frames of the overall U.S. missile defense capability planned in Europe. Third, MDA also is challenged by the need to develop the tools—the models and simulations—to understand the capabilities and limitations of the individual systems before they are deployed, which will require the agency to overcome technical limitations in the current approach to modeling missile defense performance. While MDA recently committed to a new approach in modeling and simulation that could enable it to credibly model individual programs and system-level BMDS performance, warfighters will not benefit from this effort until after the first two of the currently planned three phases of U.S. missile defense in Europe have been deployed in 2011 and 2015, respectively. Because MDA faces growing fiscal pressure as it develops new programs at the same time as it supports and upgrades existing ones, DOD and MDA face key challenges in getting the best value for their missile defense investments. 
We have frequently reported on the importance of establishing a sound basis before committing resources to developing a new product. We have also reported that part of a sound basis is a full analysis of alternatives (AOA). The AOA is an analytical study that is intended to compare the operational effectiveness, cost, and risks of a number of alternative potential solutions to address valid needs and shortfalls in operational capability. A robust AOA can provide decision makers with the information they need by helping establish whether a concept can be developed and produced within existing resources and whether it is the best solution to meet the warfighter's needs. Major defense acquisition programs are generally required by law and DOD's acquisition policy to conduct an AOA before they are approved to enter the technology development phase. Because of the flexibilities that have been granted to MDA, its programs are not required to complete an AOA before starting technology development. Nevertheless, MDA's acquisition directive requires programs to show they have identified competitive alternative materiel solutions before they can proceed to MDA's technology development phase. However, this directive provides no specific guidance on how this alternatives analysis should be conducted or what criteria should be used to identify and assess alternatives, such as risks and costs. We reported in February 2013 that the Aegis BMD SM-3 Block IIB program had not conducted a robust alternatives analysis and also reported in April 2013 that MDA did not conduct a robust alternatives analysis for the PTSS program. Both of these programs were recently proposed for cancellation in the Fiscal Year 2014 President's Budget Submission. In our April 2013 report, we recommended that the Secretary of Defense direct the new MDA Director to undertake robust alternatives analyses for new major missile defense efforts currently underway and before embarking on other new missile defense programs. Doing so can help provide a foundation for developing and refining new program requirements, understanding the technical feasibility and costs of alternatives, and helping decision makers determine how to balance and prioritize MDA's portfolio of BMDS investments. DOD concurred with our recommendation but asserted MDA already performs studies and reviews that function as analyses of alternatives. We have found, however, that these studies are not sufficiently robust. In September 2009, the President announced a new approach to provide U.S. missile defense in Europe. This four-phase effort was designed to rely on increasingly capable missiles, sensors, and command and control systems to defend Europe and the United States. In March 2013, the Secretary of Defense canceled Phase 4, which called for Aegis BMD SM-3 Block IIB interceptors, and announced several other plans, including deploying additional ground based interceptors at Fort Greely, Alaska, and deploying a second AN/TPY-2 radar in Japan. DOD declared the first phase of U.S. missile defense in Europe operational in December 2011. The current three-phase effort is shown in figure 1. We reported in April 2012 that in order to meet the 2009 presidential announcement to deploy missile defenses in Europe, MDA has undertaken and continues to undertake highly concurrent acquisitions. We reported in April 2013 that, according to MDA documentation, system capabilities originally planned for the first three phases are facing delays, either in development or in integration and testing. 
The systems delivered for Phase 1 do not yet provide the full capability planned for the phase. Phase 1 was largely defined by existing systems that could be quickly deployed because of the limited time between the September 2009 announcement and the planned deployment of the first phase in 2011. MDA planned to deploy the first phase in two stages—the systems needed for the phase and then upgrades to those systems in 2014. However, an MDA official told us that MDA now considers the system upgrades stage to be part of the second phase, which may not be available until the 2015 time frame. For Phase 2, some capabilities, such as an Aegis weapon system software upgrade, may not yet be available. MDA officials stated they are working to resolve this issue. For Phase 3, some battle management and Aegis capabilities are currently projected to be delayed. We recommended in our April 2012 report that DOD review the extent to which capability delivery dates announced by the President in 2009 were contributing to concurrency in missile defense acquisitions and identify schedule adjustments where significant benefits could be obtained by reducing concurrency. DOD concurred with this recommendation. We reported in April 2013 that a key challenge for both the Director of MDA and the warfighter is understanding the capabilities and limitations of the systems MDA is going to deploy, particularly given the rapid pace of development. According to MDA’s Fiscal Year 2012 President’s Budget Submission, models and simulations are critical to understanding BMDS operational performance because assessing performance through flight tests alone is prohibitively expensive and can be affected by safety and test range constraints. In August 2009, U.S. Strategic Command and the BMDS Operational Test Agency jointly informed MDA of a number of system-level limitations in MDA’s modeling and simulation program that adversely affected their ability to assess BMDS performance. Since then, we reported in March 2011 and again in April 2012 that MDA has had difficulty developing its models and simulations to the point where it can assess operational performance. In April 2013, we reported that MDA recently committed to a new approach in modeling and simulation that officials stated could enable them to credibly model individual programs and system-level BMDS performance by 2017. To accomplish this, MDA will use only one simulation framework, not two, to do ground testing and performance assessments. With one framework, the agency anticipates data quality improvements through consistent representations of the threat, the environment, and communications at the system level. Without implementing these changes, MDA officials told us it would not be possible to credibly model BMDS performance by 2017, in time to assess the third phase of U.S. missile defense in Europe. MDA program officials told us that the next major assessment of U.S. missile defense in Europe for the 2015 deployment will continue to have many of the existing shortfalls. As a result, MDA is pursuing initiatives to improve confidence in the realism of its models in the near term, one of which involves identifying more areas in the models where credibility can be certified by the BMDS Operational Test Agency. Another focuses on resolving the limitations identified jointly by the Operational Test Agency and U.S. Strategic Command. 
Lastly, MDA officials told us they are refining the process used to digitally recreate system-level flight tests in order to increase confidence in the models. Because MDA recently committed to a new approach for modeling and simulation, we did not make recommendations in our 2013 report. However, it is important that this effort receive sufficient management attention and resources, given past challenges and the criticality of modeling and simulation. In conclusion, many of the challenges I have highlighted today are rooted in both the schedule pressures that were placed on MDA when the agency was directed in 2002 to rapidly field an initial missile defense capability and the flexibilities that were granted MDA so that it could do so. Today, however, an initial capability is in place; MDA has begun to transition more mature systems to the military services; it has had to propose canceling two major efforts in the face of budget reductions, concerns about affordability, and technical challenges; and the employment of BMDS systems is becoming increasingly interdependent, thereby increasing the potential consequences of problems discovered late in the development cycle. In recent years, both Congress and MDA have recognized that conditions have changed and steps need to be taken that reduce acquisition risk while increasing transparency and accountability. However, especially in light of growing budget pressures, additional actions are needed, including sufficiently analyzing alternatives before making major new investment commitments; stabilizing acquisition baselines and ensuring they are comprehensive and reliable; ensuring acquisition strategies allow for the right technical and programmatic knowledge to be in place before moving into more complex and costly phases of development; and demonstrating new types of targets in less critical tests before they are used in a major test in order to lower testing risks. The appointment of a new Director provides an opportunity to address these challenges, but doing so will not be easy, as MDA is still under significant schedule pressures and the agency is undergoing a transition to respond to new Secretary of Defense direction to expand GMD capabilities. As such, we look forward to continuing to work with MDA to identify and implement actions that can reduce acquisition risk, facilitate oversight, and better position MDA to respond to today's demands. Chairman Udall, Ranking Member Sessions, and Members of the Subcommittee, this concludes my statement. I am happy to answer any questions you have. For future questions about this statement, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include David B. Best, Assistant Director; Aryn Ehlow; Ivy Hübler; Meredith Allen Kimmett; Wiktor Niewiadomski; Kenneth E. Patton; John H. Pendleton; Karen Richey; Brian T. Smith; Steven Stern; Robert Swierczek; Brian Tittle; and Hai V. Tran. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
In order to meet its mission, MDA is developing a highly complex group of systems composed of land-, sea-, and space-based sensors to track missiles, as well as ballistic missile interceptors and a battle management system. These systems can be integrated in different ways to provide protection in various regions of the world. Since its initiation in 2002, MDA has been given a significant amount of flexibility in executing the development and fielding of the ballistic missile defense system. This statement addresses recent MDA progress and the challenges it faces with its acquisition management. It is based on GAO's April 2013 report and reports on missile defense issued from September 2008 through July 2012. The Department of Defense's (DOD) Missile Defense Agency (MDA) has made some recent progress gaining important knowledge for its Ballistic Missile Defense System (BMDS) by successfully conducting several important tests. In addition, the agency made substantial improvements to the clarity of its cost and schedule baselines since first reporting them in 2010, and declared the first major deployment of U.S. missile defense in Europe operational in December 2011. MDA also took steps to reduce acquisition risk by decreasing the overlap between technology and product development for two of its programs. MDA faces considerable challenges in executing acquisition programs; strengthening accountability; assessing alternatives before making new investment commitments; developing and deploying U.S. missile defense in Europe; and using modeling and simulations to understand the capabilities and limitations of the BMDS. The appointment of a new director for MDA provides an opportunity to address these challenges. More specifically: Interceptor production for three of MDA's systems has been significantly disrupted during the past few years due to high-risk acquisition strategies, which have resulted in delaying planned deliveries to the warfighter, raising costs, and disrupting the industrial base. Further, MDA continues to follow high-risk acquisition strategies for other programs. For example, its Targets and Countermeasures program is adding risk to an upcoming complex, costly operational flight test involving multiple MDA systems because it plans to use unproven targets. While MDA made substantial improvements to the clarity of its reported cost and schedule baselines, MDA's estimates are not comprehensive because they do not include costs from military services in reported life-cycle costs for its programs. Instability due to MDA's frequent adjustments to its acquisition baselines makes assessing progress over time using these baselines extremely difficult and, in many cases, impossible. While MDA has conducted some analyses that consider alternatives in selecting which acquisitions to pursue, it did not conduct robust analyses of alternatives for two of its new programs, both of which were recently proposed for cancellation. During the past several years, MDA has been responding to a mandate from the President to develop and deploy new missile defense systems in Europe for the defense of Europe and the United States. GAO's work continues to find that a key challenge facing DOD is to keep individual system acquisitions synchronized with the planned deployment time frames. MDA has also struggled for years to develop the tools--the models and simulations--to understand the capabilities and limitations of the individual systems before they are deployed. 
While MDA recently committed to a new approach that could enable it to credibly model individual programs and system-level BMDS performance, warfighters will not benefit from this effort until after the first two of the currently planned three phases for U.S. missile defense in Europe have been deployed in 2011 and 2015, respectively. GAO makes no new recommendations in this statement. In the April 2013 report, GAO made four recommendations to DOD to ensure MDA (1) fully assesses alternatives before selecting investments, (2) takes steps to reduce the risk that unproven target missiles can disrupt key tests, (3) reports full program costs, and (4) stabilizes acquisition baselines. DOD concurred with two recommendations and partially concurred with two, stating that the decision to perform target risk reduction flight tests should be weighed against other programmatic factors and that its current forum for reporting MDA program costs should not include non-MDA funding. GAO continues to believe the recommendations are valid as discussed in that report. 
Specialist physicians, by virtue of their narrower focus, can more readily keep up with changes in clinical knowledge as they occur. This appears to be especially true for cardiac care, where changes in treatment paradigms occur frequently. Cardiologists also have the advantage of seeing a larger number of patients with heart conditions, so they have more experience with the range of variation in presenting symptoms and responses to therapy. Numerous studies comparing the performance of cardiologists and primary care physicians, or generalists, in providing patient care tend to support the view that cardiologists provide a higher level of cardiac care. For example, researchers have found that cardiologists demonstrate a better understanding of the appropriate use and relative efficacy of alternative treatments for heart attacks and congestive heart failure than generalists. Moreover, cardiologists are generally quicker to put successful innovations into practice and to discontinue using therapies shown to be less effective. This has been found in the treatment of unstable angina as well as heart attacks. Studies have also demonstrated that cardiologists are more likely to follow well-established treatment guidelines than generalists. Several studies report that cardiologists are more likely to prescribe cholesterol-lowering drugs to patients with elevated cholesterol levels and beta-blockers to heart attack survivors. A smaller group of studies has found that cardiologists achieve better outcomes—including for inpatient care for heart attacks. Similar differences in practice patterns between specialists and generalists have been found in the treatment of noncardiac conditions as well, such as ulcers and strokes. The findings of these studies do not mean that cardiologists always provide superior care. First, each study reports an overall tendency, with considerable variation in performance among both cardiologists and noncardiologists. Moreover, some noncardiologists do better than others. For example, in several studies, the performance of internists comes closer to that of cardiologists (cardiology is actually a subspecialty within internal medicine) than that of family practitioners. Nonetheless, within cardiac care, studies reveal a fairly consistent pattern—as physician specialization increases, so does the overall level of adherence to established standards of care. These studies, however, generally do not address the extent to which HMOs affect the pattern of care provided by cardiologists compared with that provided by noncardiologists. The handful of studies looking at physician specialty differences within an HMO setting has focused on other medical conditions. Specifically, we found two recent studies by researchers employed by HMOs that compared the treatment of asthma sufferers cared for by primary care physicians and by allergy and asthma specialists. Statistically adjusting for disease severity and patient characteristics, both studies found that patients of specialists received more thorough and appropriate care. Specialists' patients more often reported taking medications recommended by national treatment guidelines, had improved day-to-day functioning, and had fewer asthma exacerbations requiring emergency room treatment. These findings suggest that treatment differences across specialties can persist within an HMO structure. 
However, in cardiac care, comparable differences in care provided by primary care providers and cardiologists might not be found if, for example, HMOs placed a higher priority on standardizing care for cardiac patients. Our study compares the use of three specific pharmacological treatments among Medicare heart attack survivors who saw cardiologists regularly and those who did not. Although use of these drugs represents only a portion of the post-heart-attack care available, we chose to focus our analysis on this subset of treatments because (1) there is strong scientific evidence that these treatments are beneficial for a large proportion of heart attack survivors and (2) other data indicate that many patients who would benefit from these drugs are not using them. These two conditions do not apply to nearly the same extent to other aspects of care provided to heart attack survivors. For example, while there is considerable variation in the extent to which invasive procedures—such as cardiac catheterizations, angioplasty, and coronary artery bypass graft surgery—are performed on heart attack survivors, the evidence for these procedures is not as definitive as the evidence supporting the use of cholesterol-lowering drugs, beta-blockers, and aspirin. As a result, existing clinical guidelines for their use rest primarily on expert judgment. For many cases, that judgment is either equivocal or divided. Thus, it is more difficult to determine whether any given group of patients is getting either too many or too few of these procedures. The value of cholesterol-lowering drugs, beta-blockers, and aspirin for heart attack survivors has been widely publicized through practice guidelines as well as numerous articles in prominent medical journals. It is therefore reasonable to expect physicians to know about these therapies and to provide them to most of their patients, while recognizing that the general benefits of these drugs may not apply to certain individual patients. Since we limited the scope of our study to these drugs, we cannot assume that our findings are indicative of relative performance in other aspects of care, such as the appropriate use of invasive procedures. However, restricting the scope of this study to a set of well-defined and well-supported therapies means that we can identify with greater certainty a substantial number of patients who stood to benefit from the treatments in question. Multiple large-scale randomized clinical trials support the widespread use of three pharmacological treatments in caring for heart attack survivors. Cholesterol-lowering medications: A series of large-scale clinical trials has demonstrated the substantial therapeutic benefit of using "statin" drugs (HMG CoA reductase inhibitors) and other medications (in addition to proper diet and exercise) to lower the cholesterol level of people with coronary heart disease—including those who have had a heart attack. These studies show a reduction in subsequent coronary-related deaths for heart attack survivors ranging from 20 percent (for those with normal cholesterol levels) to 42 percent (for those with high cholesterol). These studies have also shown a reduction in strokes of about 30 percent for both normal- and high-cholesterol patients. These trials have been published in prominent journals and extensively described in the national media. 
In 1993, the National Heart, Lung, and Blood Institute (NHLBI) issued practice guidelines that spelled out the implications of these trials for follow-up care of heart attack survivors. Whether a patient should get such therapy depends on his or her baseline level of low-density lipoprotein (LDL) cholesterol. The guidelines set an LDL goal for coronary heart disease patients of 100 mg/dL, well below the average level for the population as a whole. Those with baseline LDL levels of 130 mg/dL and above are definite candidates for cholesterol-lowering medications, although specific factors in individual cases can provide countervailing reasons not to initiate drug therapy. For those with baseline readings between 101 and 129 mg/dL, the guidelines recommend that physicians carefully weigh the expected benefits and risks of cholesterol-lowering therapy for each patient. Beta-Blockers: A second drug therapy whose benefits for heart attack survivors are well established in the clinical literature involves long-term use of beta-blockers. This class of drugs inhibits stimulation of the heart and reduces the force of heart muscle contractions, thereby decreasing both the workload placed on the heart and arrhythmias that can lead to sudden death. Beginning in the early 1980s, a series of large-scale clinical trials demonstrated that beta-blockers reduced overall mortality among heart attack survivors by about 25 percent. Subsequent studies provided additional confirmation of these effects. Another study found that among the elderly patients surveyed, those receiving beta-blockers were 43-percent less likely than nonrecipients to die in the 2 years following their heart attacks. In August 1990, the American College of Cardiology (ACC) and the American Heart Association (AHA) jointly issued guidelines on the management of heart attacks that cited these studies in support of a general recommendation to treat heart attack survivors with beta-blockers for at least 2 years, with the exception of patients who had specific contraindications. Six years later, ACC and AHA issued revised guidelines that repeated this recommendation, while reducing somewhat the scope of the stipulated contraindications. In the years since the first beta-blocker trials were published, the proportion of heart attack patients considered eligible to use them has expanded. In particular, the therapeutic value of beta-blockers for many patients with moderately severe heart failure has become more evident over time. Thus, current ACC and AHA practice guidelines list only relative contraindications, meaning that in each case, the specific risks posed by beta-blockers for these patients should be weighed against the general benefits. Aspirin: The 1990 practice guidelines for treating heart attacks issued by ACC recommended long-term aspirin therapy for all post-heart-attack patients "who could tolerate it." In its 1996 revised guidelines, ACC specified that daily aspirin therapy should be continued indefinitely, with substitution of other antiplatelet agents only in the case of a "true aspirin allergy." As with cholesterol-lowering medications and beta-blockers, multiple randomized clinical trials provided the basis for these recommendations. A pooled analysis of these trials indicated that long-term aspirin therapy led to a 13-percent reduction in vascular mortality, a 31-percent reduction in recurrent nonfatal heart attacks, and a 42-percent reduction in nonfatal strokes. 
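The 1993 NHLBI cholesterol thresholds described earlier in this section reduce to simple decision logic. The following is a minimal sketch of that logic in Python; the function name and category labels are ours, not the guidelines', and real treatment decisions also weigh the individual patient factors the guidelines mention.

    def ldl_treatment_category(ldl_mg_dl):
        """Classify a coronary heart disease patient's baseline LDL level
        against the 1993 NHLBI guideline thresholds (illustrative sketch)."""
        if ldl_mg_dl >= 130:
            # 130 mg/dL and above: definite candidate for drug therapy,
            # absent countervailing factors in the individual case.
            return "definite candidate for cholesterol-lowering drugs"
        if ldl_mg_dl > 100:
            # 101-129 mg/dL: weigh expected benefits and risks per patient.
            return "weigh benefits and risks of drug therapy"
        # At or below the 100 mg/dL LDL goal for coronary heart disease.
        return "at LDL goal"
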
The measure of appropriate care used in this study is whether patients reported that they were actually taking cholesterol-lowering drugs, beta-blockers, and aspirin about 2 years after their heart attack occurred—not whether these drugs were prescribed. While tallying prescriptions would be a more direct measure of one aspect of physician behavior, none of the potential benefits of the drugs are realized unless the patient is actually taking them. Moreover, research has demonstrated that self-reported drug use is strongly related to more proximate measures of medication compliance, such as pharmacy records of prescriptions filled and counts of pills taken. While it is ultimately the patient who decides how faithfully to adhere to a treatment regimen, research has shown that physicians strongly influence patient behavior by, among other actions, prescribing certain medications, closely monitoring patient compliance, and simplifying and adjusting regimens to encourage compliance. Despite the strength of the clinical evidence, many patients who would benefit from drug therapies to treat coronary heart disease do not take the drugs. While we were unable to test directly for differences between the Medicare HMO enrollees in our sample and the general population of Medicare fee-for-service heart patients, the drug usage rates reported by our sample are both broadly comparable to those found in studies by others of the fee-for-service population and below the rates suggested by clinical guidelines. Just 36 percent of our sample reported taking any of the statin drugs or another type of cholesterol-lowering drug. NHLBI estimates that only about one-third of patients in the general population with coronary heart disease are receiving medications to lower their cholesterol. Further, based on cholesterol levels in the general population of elderly Americans, we estimate that 57 percent of our sample have LDL cholesterol levels of 130 or higher and are therefore clear candidates for cholesterol-lowering drugs given current treatment guidelines for patients with established coronary heart disease. Our respondents' 36-percent usage rate falls considerably short of that standard. Similarly, only 40 percent of our sample reported taking beta-blockers. As one comparison, 32 percent of Medicare fee-for-service heart attack survivors in the CCP study received prescriptions for beta-blockers when they were discharged from the hospital. For the subset of our respondents identified in the CCP study as ideal candidates for beta-blockers, the usage rate was somewhat higher at 49 percent. The finding that only one-half of the ideal candidates took beta-blockers shows that these drugs are underused as well. Usage rates for aspirin were much higher but still below recommended levels. At the time of our survey, 71 percent of our respondents reported that they regularly took aspirin. By comparison, CCP found that 66 percent of Medicare fee-for-service heart attack survivors were instructed to take aspirin when discharged from the hospital. Similarly, 78 percent of our respondents identified as ideal candidates for aspirin therapy in the CCP study took aspirin. Approximately 2 years after their heart attack, 41 percent of our sample reported that they saw a cardiologist regularly. 
For the remainder, 19 percent reported that they visited a cardiologist only occasionally—when they felt ill or when they were referred by their primary care physician—and 40 percent told our interviewers that they did not see a cardiologist about their heart (37 percent saw only a primary care physician, and 3 percent saw a specialist physician other than a cardiologist). We compared the drug usage of the 41 percent under the regular care of a cardiologist with that of the 59 percent who saw a cardiologist occasionally or not at all. We found clear differences in the use of cholesterol-lowering drugs and beta-blockers—and a smaller difference in aspirin usage—between patients under the regular care of a cardiologist and all others. As table 1 shows, both cholesterol-lowering drugs and beta-blockers were taken 50-percent more often by respondents who routinely saw a cardiologist compared to those without regular cardiology appointments. In both cases, this is a statistically significant difference. For aspirin, we found that the tendency for patients with regular cardiology appointments to have higher usage rates was not statistically significant. Our analysis shows that Medicare HMO heart attack survivors are more likely to take appropriate heart-related medications if they have regular follow-up appointments with a cardiologist. The most direct explanation for this finding is that cardiologists treat heart attack survivors differently from physicians who are not heart specialists. However, taking medications is an outcome that involves patient as well as physician behaviors, and differences in patient use of drug therapies could be due more to differences in patient characteristics than to differences in the treatment patterns of physicians. For example, patients who are most steadfast in their pharmaceutical regimens may also be the most likely to seek specialty care. We tested this alternative explanation by conducting multivariate statistical analyses to identify the variables associated with taking each type of drug and with having regular cardiology appointments. These analyses included variables known from the work of other researchers to influence the use of physician services or medication compliance, including self-reported current health status; background variables (such as education, current income, age, and race); and clinical variables measured at the time of hospitalization (such as heart attack severity and major comorbidities). Because these analyses found that the variables associated with having regular cardiology appointments and with taking heart drugs are different, it is unlikely that our finding—that patients with regular cardiology appointments take these drugs more often—is due to systematic differences between the patients who see cardiologists regularly and those who do not. However, as with any analysis of this type, it is possible that patient attributes that are statistically unrelated to any of the factors we examined could affect the relationship between regular cardiology care and recommended drug therapy. 
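To make the structure of these multivariate analyses concrete, the sketch below fits a logistic regression of drug use on regular cardiology care plus the kinds of patient characteristics named above. It is a minimal illustration using the statsmodels formula interface; the file and column names (survey_respondents.csv, takes_beta_blocker, regular_cardiology, and so on) are hypothetical, and the report's actual model specification may differ.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical analysis file: one row per survey respondent, with a
    # 0/1 outcome for each drug and 0/1 or categorical predictors.
    df = pd.read_csv("survey_respondents.csv")

    # Model the odds of taking a given drug (here, a beta-blocker) as a
    # function of regular cardiology care plus self-reported health,
    # background variables, and clinical variables at hospitalization.
    model = smf.logit(
        "takes_beta_blocker ~ regular_cardiology + very_good_health"
        " + some_college + high_income + age_group + race"
        " + attack_severity + any_comorbidity",
        data=df,
    ).fit()
    print(model.summary())

    # A large, significant coefficient on regular_cardiology after these
    # controls would indicate the usage gap is not explained by measured
    # differences in who sees a cardiologist regularly.
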
In general, we found that healthier patients were more likely to take all three types of drugs, although the specific predictive factors varied among the drug categories. For example, we found that cholesterol-lowering drugs were taken more often by those who told our interviewers that their current health was very good or excellent (52 percent, compared to 31 percent of those in poor, fair, or good health) and by those without other major illnesses at the time of the heart attack (43 percent, compared to 29 percent of those with at least one comorbidity). Similarly, both beta-blockers and aspirin were taken more often by those with fair to good heart function measurements, compared to those with poor measurements. The use of beta-blockers and aspirin, but not of cholesterol-lowering drugs, was also associated with variables reflecting socioeconomic status. Respondents with some postsecondary education, compared to those whose education did not extend beyond high school, reported greater use of beta-blockers (50 percent, compared to 34 percent) and greater use of aspirin (76 percent, compared to 67 percent). Patients with incomes greater than the median for our sample also used beta-blockers more often (48 percent, compared to 33 percent); income did not affect aspirin use. We also found that cholesterol-lowering drugs were taken more often by younger respondents (48 percent of those in the younger half of our sample, aged 67 to 73 when they were interviewed, compared to 25 percent of those aged 74 to 86). Respondent age, however, did not affect the use of beta-blockers or aspirin. In addition, gender and race had no effect on usage rates for any of the three categories of drugs. We conducted a separate analysis to identify patient-related variables associated with having regular cardiology appointments. We found that those with regular cardiology appointments were more likely to be white (43 percent had regular appointments, compared to 22 percent of nonwhites); relatively young (47 percent of those aged 73 or younger had regular appointments, compared to 34 percent of those aged 74 to 86); and to have had relatively severe heart attacks. Regular cardiology care was not associated with gender, educational attainment, current income, the presence of comorbidities, or self-reported health status. We reexamined our analysis of factors associated with patients taking cholesterol-lowering medications, beta-blockers, and aspirin, making sure to include those variables that predicted regular care by a cardiologist (race, age, and heart attack severity). If the relationship of regular care by a cardiologist to appropriate drug therapy actually reflected differences in these patient characteristics, then the inclusion of these factors in the analysis would diminish greatly the statistical association of specialty care with those treatments. This did not occur. Even with these factors included in the analysis, the effect of regular visits with a cardiologist did not change. Neither race nor heart attack severity was associated with taking any of the three types of drugs, and patient age was associated only with taking cholesterol-lowering medication. Further, among the younger patients—those more likely to have regular cardiology appointments—the usage rate of cholesterol-lowering drugs was much higher among those with regular cardiology appointments—60 percent, compared to 38 percent for those without a regular cardiologist. On the whole, our conclusion that patients under the regular care of a cardiologist are more likely to take recommended medications parallels the findings of other studies of physician specialty differences in the United States. 
Our results also reinforce the findings of the small number of other studies specifically concerned with HMO members. The pattern we found for older heart attack patients in Medicare HMOs is the same as that reported by other researchers for younger HMO members with asthma. One characteristic of medical care in the United States is that the patients of specialist and generalist physicians sometimes receive different treatments for the same medical condition. Studies have documented this phenomenon in both fee-for-service and HMO settings. However, it is both a special problem and a unique opportunity for HMOs and their members. It is a special problem because HMOs can restrict access to specialists, perhaps leading some enrollees to feel that they have been denied necessary care. It is a unique opportunity because these differences are not immutable and because HMOs, unlike fee-for-service insurers, can actively manage care. Thus, HMOs can educate the physicians they employ about treatment guidelines, review clinical records to ensure that patients are taking appropriate medications, or take other organizational actions, not possible in fee-for-service settings, to improve the quality of care provided by all types of physicians. We provided a draft of this report to HCFA and a panel of experts for their review. Based on their comments, we expanded the number of drugs we examined and explicitly addressed the possible confounding effects of patient characteristics. We also incorporated technical changes where appropriate. Several other issues that the reviewers raised are addressed here. First, some reviewers were concerned that our survey sample had the potential to introduce selection biases. In general, enrollees in Medicare HMOs who develop chronic conditions are more likely to revert to standard fee-for-service Medicare. Our sample, however, was limited to heart attack survivors enrolled in Medicare HMOs who remained enrolled for the roughly 2-year period from their heart attack until we interviewed them. If many patients were excluded from our sample because they had left HMOs between their heart attack and our survey, then our respondents could represent HMO enrollees who were disproportionately healthy and satisfied with medical care provided by HMOs. However, we found that the potential effect of any such selection bias was minimal because few patients in our initial sample—less than 4 percent—were dropped from the study because they had returned to fee-for-service Medicare between their heart attack hospitalizations and the survey period. Thus, because so few members of our sample left HMOs, we believe that it accurately reflects the population of Medicare patients who survived heart attacks that occurred while they were enrolled in HMOs. Second, some reviewers pointed out that our finding that heart attack survivors with regular cardiology appointments have more appropriate drug treatment may be the result of having regular physician appointments of any kind, not specifically appointments with a cardiologist. This explanation hypothesizes that the respondents in our comparison group had fewer physician contacts overall. Because we were interested specifically in heart-related medical care, our survey questions did not attempt to measure the overall level of physician contacts. Consequently, we are unable to rule out this explanation with direct evidence. 
However, two other aspects of our work—a separate sensitivity analysis and the multivariate analyses—provide indirect evidence that this alternative explanation is unlikely. We conducted a sensitivity analysis to judge the plausibility of this alternative explanation. For this analysis, we estimated how much lower the rate of regular physician contacts would have to be among those who did not see a cardiologist at all in order to explain their lower use of cholesterol-lowering drugs and beta-blockers. We found that a lower rate of regular physician visits could explain the lower use of cholesterol-lowering drugs among patients who had not seen a cardiologist only if no more than 1 in 10 of them had regularly seen their primary care doctor or another noncardiologist physician for the treatment of any medical condition. Similarly, to explain their lower use of beta-blockers, the proportion seeing a noncardiologist regularly would have to be no more than one-third. By contrast, among those who saw a cardiologist, two-thirds reported having regular appointments. Since these groups did not differ in self-reported health status and incidence of major comorbidities, we believe that it is implausible that such a high proportion of heart attack survivors who did not see a cardiologist would also lack regular contact with even their primary care provider. Our multivariate analyses included variables other than health that are known to be associated with the use of physician services, especially education, income, age, and gender. If frequency of physician contacts explained our findings, then including these variables in the multivariate analyses should have greatly diminished the statistical association between regular specialty care and drug usage. This did not occur. (See app. II for a description of our sensitivity and multivariate analyses.) 
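The sensitivity analysis just described can be formalized in more than one way; the sketch below shows one simplified version, assuming that drug use occurs only among patients with a regular physician relationship and at the same conditional rate in both groups. The function name and the illustrative usage rates are ours, and the report's actual calculation may have used additional detail.

    def implied_regular_contact_share(usage_comparison, usage_cardio,
                                      regular_share_cardio=2/3):
        """Simplified sketch: if drug use occurs only among patients with
        regular physician contact, at the same conditional rate in both
        groups, solve usage = contact_share * conditional_rate for the
        comparison group's implied contact share."""
        conditional_rate = usage_cardio / regular_share_cardio
        return usage_comparison / conditional_rate

    # Illustrative (not the report's) usage rates: the smaller the implied
    # share, the less plausible it is that contact frequency alone
    # explains the gap in drug use.
    print(implied_regular_contact_share(usage_comparison=0.20,
                                        usage_cardio=0.60))  # about 0.22
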
In addition, some reviewers noted that more care is not always better care. That is, while our results are consistent with the finding from the research literature that specialists provide more intensive care than generalists, there is the possibility that specialists may, more often than generalists, provide heart-related medications to patients whom the drugs will not help, which would account for at least part of this difference. We agree that it is likely that some individual patients in our survey were not helped by these medications; however, we do not believe that our results can be attributed to a systematic tendency for patients with regular cardiology care to take these drugs inappropriately. The drugs we selected as indicators of appropriate care have been demonstrated to have great clinical benefits and few absolute contraindications. Moreover, for beta-blockers and aspirin, our statistical analyses documenting the importance of regular cardiology care controlled for the degree to which patients were ideal candidates for the therapy. Further, our results show that even patients under the regular care of cardiologists took these drugs at rates below the recommended guidelines—a finding that is more consistent with the position that cardiologists provide too little appropriate care than it is with the view that they provide too many inappropriate treatments. Finally, some reviewers also suggested that our results may be due to differences in the out-of-pocket expenditures for these drugs between respondents with regular cardiology care and those without regular cardiology appointments. If patients with regular cardiology care systematically paid less for these drugs for any reason, their increased usage rates may be due to lower costs instead of to the care provided by cardiologists. While we do not know how much these drugs would have cost each respondent, we were able to identify heart attack survivors who belonged to HMO plans with pharmacy benefits and, thus, who presumably had lower drug costs. We found that the presence of a pharmacy benefit was not related to the self-reported use of any of these three drugs or to having regular cardiology care. Moreover, in a comparable study of heart attack survivors treated by the Department of Veterans Affairs—where none of the patients had to pay more than minimal amounts for their drugs—patients under regular cardiology care received cholesterol-lowering drugs much more often than those cared for by primary care physicians. As we arranged with your staff, unless you publicly announce the report's contents earlier, we plan no further distribution until 30 days after it is issued. We will then send copies to the Secretary of the Department of Health and Human Services and other interested parties. We will also make copies of this report available to others upon request. Please call me or Marsha Lillie-Blanton, Associate Director, at (202) 512-7119 if you have any questions about this report. Martin T. Gahart and Eric A. Peterson are the major contributors to this report. The heart attack survivors sampled for this survey were all enrolled in Medicare HMOs at the time they were hospitalized for an acute myocardial infarction (between May and July 1995) and at the time the survey was conducted (between April and July 1997). They were identified as part of a larger study, the Cooperative Cardiovascular Project (CCP), conducted by HCFA. For this study, HCFA abstracted clinical data from hospital records for approximately 224,000 Medicare heart attack survivors. CCP sampled acute myocardial infarction admissions that occurred between February 1994 and July 1995. Each hospital was sampled for only a subset of the months during that period, and patients were included in the CCP data set only if they were hospitalized during the time their hospital was sampled. HMO patients are underrepresented in HCFA's claims data, from which the CCP sampling frame was constructed. In return for the fixed monthly amount that HMOs receive for each Medicare enrollee, they assume full responsibility for patient hospital bills. Hospitals are still supposed to submit "no pay" bills to HCFA for Medicare HMO patients, but this requirement is frequently not followed. As a result, there is often no record in HCFA's claims files for hospitalizations of HMO enrollees. To compensate for this deficiency in the original CCP sample, we contacted all Medicare HMOs with 1,000 or more enrollees as of August 1995 and asked the HMOs to send us information on any enrollee who had been hospitalized with an acute myocardial infarction during the CCP study period. We passed this information on to HCFA; HCFA then determined whether the patients reported by the HMOs belonged in CCP based on the sampling time frame for the hospital where the patient was treated. As a result, the CCP data file now includes about 13,000 HMO patients. We then limited the sample to residents of seven states that together totaled 72 percent of the Medicare HMO population in 1995: California, Florida, Massachusetts, New York, Ohio, Pennsylvania, and Texas. 
We limited our sample to these states to allow us to compare our survey data to survey data that researchers at Harvard Medical School collected on a subset of CCP patients treated under fee-for-service Medicare in those states. We also restricted our sample to those aged 65 to 84 years at the time of their heart attack to match the Harvard survey’s selection criteria. We excluded Medicare beneficiaries known to have died by February 1997. We also excluded individuals who were no longer in an HMO at the time of the survey, even though they had been HMO members when they suffered the heart attack. Finally, we included in our sample all the remaining patients who had been hospitalized at the end of the CCP time period—May through July 1995—to make the interval between heart attack and interview as close as possible to that of the patients in the Harvard survey. The final sample size was 578. HCFA provided us with the mailing address of each member of our sample; we then used publicly available directories to locate the phone numbers of as many individuals as possible. Next, we sent to all selected beneficiaries letters that explained the study, asked for their participation, and provided a list of heart-related drugs for the interview. The letters advised those for whom we found phone numbers that an interviewer would be calling and asked those without phone numbers to call a toll-free telephone number to participate in the survey. A second round of mailings was sent to nonrespondents midway through the study period. In the end, we were unable to locate 112 individuals. Of the 578 individuals in our sample, 19 died between February 1997 and the end of the survey period. The survey was completed by 362 respondents—65 percent of the remaining 559. We were unable to contact, or could not make satisfactory arrangements to complete the interview with, 118 individuals (21 percent). Only 14 percent of the sample (79 individuals) refused to participate. Seventy-seven percent of the completed interviews were with respondents reached directly by phone by our interviewers, while 23 percent were with respondents who contacted us through the toll-free telephone number. Eighty-eight percent of our respondents were interviewed within 2 years of their acute myocardial infarction hospitalization, and all of the interviews were completed within 26 months of the hospitalization. To see how our respondents compared to the sample as a whole, we analyzed demographic information from HCFA’s administrative data bases. The two groups had similar distributions for gender, age, and state of residence. However, relative to their proportions in the sample, whites completed the interview slightly more often (accounting for 76 percent of the sample but 79 percent of completed interviews) and Hispanics somewhat less often (12 percent of the sample but only 9 percent of the completed interviews). We do not believe that these small differences affect the validity of our findings, although they mean that we cannot generalize our findings to Hispanic Medicare beneficiaries. Several different categories of drugs can be used to lower cholesterol levels. The statins (HMG CoA reductase inhibitors) are effective and have few short-term side effects, but they are relatively expensive and lack a long-term track record. Bile acid resins are inexpensive and have a long safety record but are more complicated to take and can produce unpleasant gastrointestinal symptoms. Nicotinic acid is also inexpensive. 
However, it can be fairly toxic when taken in higher doses. Fibric acids are especially potent in lowering triglycerides but have more limited effect on both low- and high-density lipoprotein (LDL and HDL) cholesterol levels. To boost the cholesterol-lowering effect, drugs from several of these categories can be combined. Prior to contacting respondents by telephone, we mailed each a comprehensive list of drugs prescribed to heart attack survivors. During the interview, respondents were asked to tell the interviewer the code number next to each drug that they were currently taking. Respondents who did not have the coded list—because they had not received it, had misplaced or lost it, or otherwise did not have it—were asked to tell the interviewers the names of the heart drugs they took. In addition, all respondents were asked if they were taking any heart drugs not on the list. Respondents were coded as taking a cholesterol-lowering drug if they said that they took any one of the 24 drugs on the list. (The list of 24 drug names actually measured only 11 distinct pharmaceuticals, as each of 11 drugs was listed with both a generic name and at least one trade name.) The list included 5 statins with both generic and trade names (totaling 10 drugs): atorvastatin (Lipitor), fluvastatin (Lescol), lovastatin (Mevacor), pravastatin (Pravachol), and simvastatin (Zocor). The list also included 14 other cholesterol-lowering drugs (6 distinct drugs with both generic and trade names): cholestyramine (Questran); clofibrate (Atromid-S); colestipol (Colestid); gemfibrozil (Lopid); niacin (Niacor, Nicobid, and Nicolor); and probucol (Lorelco). For respondents reporting that they took an anticholesterol drug, 82 percent reported taking only statin drugs, 13 percent only nonstatin drugs, and 5 percent both statin and nonstatin drugs. Beta-adrenergic blocking agents, or beta-blockers, inhibit stimulation of the heart and reduce the force of heart muscle contractions. As a result, they reduce the patient's heart rate and blood pressure, which in turn lowers the heart's workload and consequent need for blood and oxygen. These conditions increase the likelihood that sufficient blood will flow through the coronary arteries to prevent a new heart attack. In addition, beta-blockers reduce the incidence of arrhythmia, which can lead to sudden cardiac death. Respondents were coded as taking a beta-blocker if they said that they took any one of the 38 such drugs listed or if they volunteered the name of a beta-blocker when asked about their heart drugs. The 38 drug names referred to 13 distinct pharmaceuticals, with both generic and one or more trade names listed. We also included formulations that combined several of these beta-blockers with diuretics. The list included acebutolol (Sectral), atenolol (Tenormin), betaxolol (Kerlone), bisoprolol (Zebeta), carteolol (Cartrol), labetalol (Normodyne and Trandate), metoprolol (Lopressor and Toprol XL), nadolol (Corgard), penbutolol (Levatol), pindolol (Visken), propranolol (Inderal), sotalol (Betapace), and timolol (Blocadren). A separate survey question asked respondents if they took aspirin every day or every other day. We coded respondents as taking aspirin if they answered "yes" to this question. 
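Coding a respondent's reported drugs against the mailed list amounts to a lookup in a name-to-category table. A minimal sketch follows; the table shows only a few of the listed names (the full lists appear above), and the function name is ours.

    # Map listed drug names (generic or trade) to categories; only a few
    # of the 24 cholesterol-drug names and 38 beta-blocker names on the
    # mailed list are shown here.
    DRUG_CATEGORY = {
        "atorvastatin": "cholesterol-lowering", "lipitor": "cholesterol-lowering",
        "gemfibrozil": "cholesterol-lowering", "lopid": "cholesterol-lowering",
        "atenolol": "beta-blocker", "tenormin": "beta-blocker",
        "metoprolol": "beta-blocker", "lopressor": "beta-blocker",
    }

    def code_drug_categories(reported_names):
        """Return the set of drug categories a respondent is coded as
        taking, matching names case-insensitively against the list."""
        return {DRUG_CATEGORY[name.lower()]
                for name in reported_names
                if name.lower() in DRUG_CATEGORY}

    # This respondent would be coded as taking both a cholesterol-lowering
    # drug and a beta-blocker.
    print(code_drug_categories(["Lipitor", "Tenormin"]))
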
We asked respondents the name and office location (city or town) both of the physician they saw for general health care and of the doctor mainly responsible for treating their heart condition. For the physician mentioned as primarily responsible for heart treatment, we asked if they had regular appointments or only saw the doctor when they were ill or when referred by a primary care physician. For these questions, 59 percent of the respondents provided the names of two physicians, and 41 percent the name of one doctor. We then used physician directories from the American Medical Association to identify the practice specialty of the physician named as treating the respondent's heart condition. We coded as cardiologists any physician who listed cardiology as his or her primary practice specialty, who listed cardiology as a secondary practice specialty, or who had completed a residency in cardiology. Nearly 90 percent of the physicians we coded as cardiologists listed cardiology as their primary practice specialty. Some respondents identified a cardiologist by name and office location but then volunteered that they had not seen that physician for some time. Those respondents were coded as not having a cardiologist. Our criteria for identifying cardiologists were permissive. That is, if the physician and office location noted by the respondent could plausibly identify a cardiologist, we coded that physician as a cardiologist. In practice, this meant that (1) physicians with common names were counted as cardiologists if any one doctor with that name was a cardiologist (for example, if 1 of the 10 Dr. Smiths in a city was a cardiologist, any Dr. Smith there was coded as a cardiologist) and (2) physicians in nearby towns were included (for example, if Dr. Jones the cardiologist was not found in the city given by the respondent but practiced in an adjacent suburb, Dr. Jones was coded as a cardiologist). Any bias that may have been introduced by this practice worked against our major finding; the most likely error in this method involves coding a noncardiologist as a cardiologist, and to the extent that cardiologists prescribe cholesterol-lowering drugs more often than noncardiologists, this error would reduce the difference between the specialties that we have reported. 
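The permissive coding rule can be stated precisely as a matching procedure over the directory data. The sketch below is a simplified illustration; the data structures, names, and adjacency table are hypothetical, and the actual coding was done against the AMA directories rather than by a program.

    # Hypothetical directory records: surname, office city, and whether
    # the physician has any cardiology credential (primary or secondary
    # specialty, or a completed cardiology residency).
    DIRECTORY = [
        {"surname": "smith", "city": "miami", "cardiology": True},
        {"surname": "smith", "city": "miami", "cardiology": False},
        {"surname": "jones", "city": "coral gables", "cardiology": True},
    ]
    ADJACENT_CITIES = {"miami": {"coral gables", "hialeah"}}

    def coded_as_cardiologist(surname, city):
        """Permissive rule: code the named physician as a cardiologist if
        ANY directory entry with that surname, in the stated city or an
        adjacent one, has a cardiology credential."""
        cities = {city} | ADJACENT_CITIES.get(city, set())
        return any(rec["cardiology"] for rec in DIRECTORY
                   if rec["surname"] == surname and rec["city"] in cities)

    # Any "Dr. Smith" reported in Miami is coded as a cardiologist because
    # at least one Miami Smith is one; this errs toward overcounting
    # cardiologists, which works against the report's main finding.
    print(coded_as_cardiologist("smith", "miami"))  # True
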
Our analysis included a number of other variables, including the following demographic and health-related variables.
Gender: Gender was coded from a question on the survey.
Race: Based on responses to the survey, we categorized each respondent as Hispanic, non-Hispanic white, or other.
Age: Age in years at the time of the acute myocardial infarction was obtained from HCFA’s administrative records. We grouped the respondents into two age categories, each with about one-half of the total: 67 to 73 years at the interview date (65 to 71 at the time of the heart attack) and 74 to 86 years (72 to 84 at the time of the heart attack). Individuals aged 85 and older at the time of the heart attack were excluded from the sampling frame.
Some College Education: From a survey question, we measured educational attainment by assigning a positive value to this variable for all respondents who said that they had completed at least 1 year of college, were college graduates, or had some postgraduate education.
High Current Income: Based on responses to a question on the survey, we coded individuals reporting a total yearly family income of $20,000 or more (not quite one-half of the respondents) as having a high current income. The comparison group includes individuals with less income and those with missing values on this question.
Residency: State of residence at the time of the interview was ascertained from a survey question. We divided respondents into three categories: California residents (44 percent of respondents), Florida residents (32 percent), and residents of the five other states eligible for our sample (Massachusetts, New York, Ohio, Pennsylvania, and Texas).
Spanish-Language Interview: Interview language was coded by the interviewers at the completion of the interview. Thirty-three, or 9 percent, of the respondents completed the interview in Spanish.
Called in for the Interview: In our contact letters, we asked beneficiaries for whom we could not find telephone numbers to call our interviewers on a toll-free telephone number. About one-quarter of the completed interviews came from individuals who called in. Compared to the sample as a whole, those who called in were disproportionately female and California residents. We included this variable in our multivariate analysis to account for these differences between those who were called and those who called in.
Very Good Current Health: The survey included a self-reported health status measure. Individuals reporting that their health was very good or excellent received a “1” on this variable; respondents reporting good, fair, or poor health were coded “0.”
Confirmed Acute Myocardial Infarction: This variable was obtained from HCFA. Based on information abstracted from each patient’s clinical records as part of the CCP, HCFA determined if a heart attack could be confirmed. Lack of confirmation may mean either that a heart attack did not occur or that information about relevant clinical measurements was missing from a patient’s file.
Any Major Comorbidities: From the abstracted clinical records provided by HCFA, we coded individuals as having a major comorbidity if they had any one of these conditions at the time of their heart attack hospitalization: congestive heart failure, chronic obstructive pulmonary disease, dementia, any form of diabetes, or a previous stroke.
Heart Function: The abstracted clinical records included measures of the left ventricular ejection fraction taken during the heart attack hospitalization for two-thirds of our respondents. For our multivariate statistical analyses, we grouped this interval variable into three categories: below 35, 35 to 50, and above 50. In the text and in some appendix tables, we categorized respondents with ejection fractions of less than 35 as having poor heart function, with the comparison group composed of individuals with a fraction of 35 or greater.
Ideal Candidate for Beta-Blockers and Aspirin: CCP data on our survey respondents allowed us to identify whether or not respondents were likely candidates for beta-blocker or aspirin therapy. As part of CCP, HCFA determined which patients would be eligible for these therapies when they were discharged from the hospital and which among those were “ideal” candidates. Since this status depended in large part on the presence or absence of chronic diseases—such as heart failure, diabetes, and chronic obstructive pulmonary disease—it would likely remain unchanged 2 years later for most (though probably not all) of our respondents. Patients who are not ideal candidates may have evidence of one of the potential contraindications or have missing data for one of the contraindications.
Heart Attack Severity: We measured heart attack severity with an interval variable derived from the abstracted clinical records that counted the presence of three indicators: a previous myocardial infarction, a transmural myocardial infarction, and angina more than 24 hours after arrival at the hospital. Four percent of our sample had all three of these indicators, 25 percent had two indicators, 45 percent had one, and 26 percent had none.
The findings described in this report are based on our analysis of data from a subset of the completed interviews. We excluded cases with missing data on the main explanatory variable (whether or not the patient had regular appointments with a cardiologist) and respondents who completed the interview in Spanish. Twenty-two percent of the respondents (or 78 individuals) were dropped for these reasons. The purpose of this section is to describe why and how we made these exclusions, describe the differences between those kept in the analysis and the excluded cases, and discuss the implications for our conclusions. Fifty-one cases (14 percent of the entire sample) were excluded because we could not determine whether they had regular visits with a cardiologist. These individuals either did not answer the physician contact questions on the survey or listed doctors we could not find in the physician directories. We excluded these cases because they did not provide information that would help us answer our research questions. Of those with complete physician data, an additional 27 cases (or 8 percent of the entire sample) with Spanish-language interviews were excluded because their results were implausibly different from those of the rest of the sample; we believe that these differences, at least in part, may have been caused by our survey procedures. For example, only 6 percent of the Spanish-language interviews reported taking cholesterol-lowering drugs, compared to 33 percent for the sample as a whole and to 32 percent for the 22 Hispanic respondents who completed the interview in English. The Spanish-language interviews also reported lower usage rates for beta-blockers and aspirin than the other Hispanic respondents. We believe that our failure to provide a drug list in Spanish may have contributed to this low level of self-reported drug use. We also found that while 70 percent of those with Spanish-language interviews reported having regular cardiology appointments, only 43 percent of the sample as a whole and 19 percent of Hispanics who completed the interview in English reported having such appointments. We suspect that our physician coding scheme led us to substantially overestimate the proportion of these respondents with regular cardiology care. Almost all of the Spanish-language cases resided in southern Florida, an area with many physicians with similar last names practicing in close proximity. In such circumstances, our physician specialty coding rules were likely to have coded many generalist physicians as cardiologists. As table I.1 shows, our decision to exclude some cases from the analysis slightly increased our estimates of the proportion of respondents taking cholesterol-lowering drugs and beta-blockers and slightly decreased the percentage of respondents with regular appointments with a cardiologist (from 43 percent for all respondents to 40 percent for the analysis subset). Both of these differences result from excluding the respondents who completed the interview in Spanish, whose reported drug use was low but whose rate of cardiology appointments was high.
These decisions somewhat limit the generalizability of our results. In particular, we are unable to reach any conclusions about Spanish-speaking Medicare HMO enrollees.
(Table I.1 compared all respondents, N=362, with the subset included in the analysis, N=284.)
Ideally, we would have taken into account each patient’s baseline LDL cholesterol level in determining the clinical appropriateness of cholesterol-lowering medications for that patient. Unfortunately, these data were not part of the CCP data set. However, recent data on the distribution of LDL levels in the national population are available from the Third National Health and Nutrition Examination Survey (NHANES III). Our analysis of data from this survey indicates that 53 percent of men and 64 percent of women over age 65 have baseline LDL levels of 130 mg/dL or above. These figures are comparable for those who have and those who have not had a heart attack. We used figures from NHANES III to estimate the proportion of our survey respondents who were likely to benefit from cholesterol-lowering drugs, based on the estimated incidence of threshold levels of LDL cholesterol specified in NHLBI guidelines and the proportion of men and women in our sample. We estimate that approximately 57 percent of our sample had LDL levels of 130 mg/dL or greater. This figure provides an estimate of the proportion of heart attack survivors who should receive cholesterol-lowering drugs, assuming that some patients with somewhat lower baseline LDL levels would benefit from this therapy, while others with high LDL levels would not, due to extreme frailty or terminal illness, for example.
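The 57 percent figure is a sex-weighted average of the NHANES III rates. A quick sketch of that arithmetic follows; the male share of the sample is our back-solved assumption, since the report does not state it directly.

```python
# Sex-weighted estimate of respondents with baseline LDL >= 130 mg/dL.
# The 53 and 64 percent NHANES III rates come from the text; the sample's
# male share is our back-solved assumption, not a reported figure.
MEN_RATE, WOMEN_RATE = 0.53, 0.64

def weighted_ldl_rate(male_share):
    return MEN_RATE * male_share + WOMEN_RATE * (1 - male_share)

# A male share of roughly 64 percent reproduces the reported 57 percent:
print(round(weighted_ldl_rate(0.64), 2))  # -> 0.57
```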
Some reviewers of a draft of this report suggested that the greater drug usage rates among respondents with regular cardiology appointments might be attributable to those patients’ having regular appointments with any physician, rather than to any aspect of care provided specifically by cardiologists. Although we are unable to directly test this alternative explanation because we did not ask our respondents about the regularity of their contacts with physicians other than cardiologists, we addressed this concern by conducting a rough sensitivity analysis of the effects of having regular physician appointments on the use of cholesterol-lowering drugs and beta-blockers. The sensitivity analysis starts with the assumption that the use of cholesterol-lowering drugs and beta-blockers is equally appropriate for each of our three patient groups: those who saw cardiologists regularly, those who saw cardiologists occasionally, and those who saw only noncardiologist physicians. While there are specific reasons why a relatively small proportion of our respondents might not benefit from one or the other therapy (for example, an unusually low baseline LDL cholesterol level without drugs, or a specific clinical contraindication for beta-blockers, such as asthma), we do not expect these characteristics would affect the regularity of physician contacts for these patients. For instance, we know that the respondents seeing cardiologists regularly did not differ from other respondents in self-reported health status or incidence of comorbidities. Further, while those seeing cardiologists regularly did tend to have more severe heart attacks, lower heart attack severity does not make beta-blockers and cholesterol-lowering drugs any less beneficial for heart attack survivors. A heart attack of any severity puts a patient in the high-risk group for future heart attacks, according to NHLBI guidelines. Because of the structure of our survey, we know whether respondents who saw a cardiologist had regular or occasional appointments, but we do not have this information for respondents who saw only noncardiologists. That is why we cannot directly assess the effect of regular visits compared to that of physician specialty with respect to taking cholesterol-lowering medications and beta-blockers. However, by regrouping data from the main analysis to consider just those patients who saw a cardiologist at least occasionally (two-thirds regularly and one-third only occasionally), we can derive an estimate of the magnitude of the effect of having regular physician appointments for that subset of our respondents. Thus, we observed that 45 percent of those with regular appointments with cardiologists used cholesterol-lowering drugs, compared to 29 percent of those who saw cardiologists only occasionally. For beta-blockers, the comparable usage figures are 50 percent and 29 percent. (See table II.1.)
(Table II.1 grouped patients as follows: A, patients with regular cardiology appointments; B, patients with occasional cardiology appointments; C, patients with no cardiology appointments; and D, patients who did not see a cardiologist or saw one only occasionally, that is, B+C.)
Our main analysis compared group A with group D (see table 1); this analysis compares group A with group B to make inferences about group C. If, as suggested by the alternative explanation, the principal determinant of drug use is the regularity of physician appointments regardless of the physician’s specialization, then patients who did not see a cardiologist should use these drugs at the same rates as cardiology patients with comparable visit regularity. Thus, hypothetically, 45 percent of those respondents who saw their primary care doctor or other physician regularly should be taking cholesterol-lowering drugs and 50 percent of them should be taking beta-blockers. Similarly, among those with only occasional appointments with any physician, 29 percent should be taking cholesterol-lowering medications and (coincidentally) 29 percent of them should be taking beta-blockers. At the same time, we know from the survey responses what proportion of the group not seeing cardiologists actually used these drugs overall: 31 percent for cholesterol-lowering medications and 36 percent for beta-blockers. Working from these figures, we can derive what proportion of the group would have had to have seen any noncardiologist physician on a regular basis in order for these two assumptions to hold. If that estimated proportion is implausibly low, it would make it unlikely that the observed differences in drug use we found reflect simply the effect of regular visits and not physician specialty. Thus, for cholesterol-lowering drugs, respondents who did not see a cardiologist had a usage rate of 31 percent. Given the presumed usage rates—29 percent for respondents with occasional visits and 45 percent for those with regular physician appointments—one can reach the observed aggregate level for respondents not seeing cardiologists only if the large majority—90 percent—of this group saw physicians only occasionally: (29 percent x .90) + (45 percent x .10) = 31 percent overall. To the extent that more than 10 percent of this group saw their primary care physician regularly (and therefore had a 45-percent usage rate for these drugs), the overall rate of use would have to rise above the 31-percent level that we observed.
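The back-solving step is a two-group mixture equation, overall = occasional × (1 − p) + regular × p, solved for p. A sketch (the function name is ours):

```python
# Share of the no-cardiologist group with regular physician visits implied
# by the observed overall usage rate, from the mixture equation
#   overall = occasional_rate * (1 - p) + regular_rate * p.
def implied_regular_share(overall, occasional_rate, regular_rate):
    return (overall - occasional_rate) / (regular_rate - occasional_rate)

# Cholesterol-lowering drugs: 31% overall, 29% occasional, 45% regular.
# Rounded inputs give 12.5%; the report's unrounded figures give ~10%.
print(implied_regular_share(0.31, 0.29, 0.45))  # -> 0.125
```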
The result of this calculation for beta-blockers is similar, though less dramatic. Thus, if respondents with regular noncardiology appointments are presumed to use beta-blockers at a rate of 50 percent, and those with occasional physician visits at a rate of 29 percent, then to reach the observed overall rate of 36 percent, 32 percent of this group would have to have regular physician visits and 68 percent occasional appointments: (29 percent x .68) + (50 percent x .32) = 36 percent overall. This would mean that two out of three of these respondents—none of whom were seeing a cardiologist even occasionally and all of whom had been hospitalized for a heart attack within the last 2 years—were not seeing even a primary care physician on a regular basis. For both types of drugs, the estimated rates of regular physician appointments from our sensitivity analysis (one-tenth and one-third, respectively) are considerably below the actual regular visit rate for patients who saw a cardiologist (two-thirds). Since the overall health of our respondents with regular cardiology care does not differ from that of the other members of our sample, we do not believe that differences of this magnitude are plausible. For that reason, it seems quite unlikely that our findings about the influence of regular cardiology care on the use of cholesterol-lowering drugs and beta-blockers can be explained by differences in regular physician contacts among the heart attack survivors in our sample. As a further check on the robustness of these conclusions, we tested the potential impact of sampling error in our relatively small sample. All of the figures we used in the above calculations reflect the responses provided by the particular sample HCFA drew from the population of Medicare heart attack survivors in HMOs. The extent to which any other comparable sample might provide different results is captured by the standard error for the rates of drug use for each of the three respondent subgroups (those with regular cardiologist visits, occasional cardiologist visits, and no cardiologist visits). Testing for the effect of changes in each of these parameters, we found that variation in the rate of drug use by the group that had no contact with cardiologists had the largest impact on the derived estimate of regular physician visits for that group. If the use of cholesterol-lowering drugs was actually one standard error higher for the group that had not seen a cardiologist (that is, 35 percent instead of 31 percent), then this would imply that 37 percent—not 10 percent—of these patients had regular contact with a physician of some sort. Similarly, the estimated rate of regular visits increased from 32 percent to 54 percent if overall use of beta-blockers by this group was raised by one standard error. There is one chance in six that the “true” mean is greater than the sum of the observed sample mean and the standard error. In other words, even with sampling error, there is a five in six chance that the estimated rate of regular physician visits for patients who did not see a cardiologist would be, at most, 37 percent in the analysis of cholesterol-lowering drugs and 54 percent for beta-blockers. Thus, the rate of inferred regular visits for patients who did not see a cardiologist is still clearly lower than that observed in our sample for patients who did see one at least occasionally (67 percent). 
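The one-standard-error shift can be reproduced from the binomial formula. In the sketch below, the no-cardiologist subgroup size is our back-derived assumption (roughly 113 of the 284 analysis cases); the report does not state it directly.

```python
import math

# Rough reconstruction of the robustness check. The subgroup size (~113
# respondents with no cardiologist) is our assumption, back-derived from
# the reported percentages.
p, n = 0.31, 113
se = math.sqrt(p * (1 - p) / n)
print(round(se, 3))  # -> 0.044, consistent with the 31% -> 35% shift

# Re-running the mixture calculation at p + 1 SE gives the report's ~37%:
print(round((0.35 - 0.29) / (0.45 - 0.29), 2))  # -> 0.38
```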
For our major analyses, we compared the usage rates of cholesterol-lowering drugs, beta-blockers, and aspirin for respondents who had regularly scheduled cardiology visits with the rates for those who did not see a cardiologist regularly. As a necessary step in this analysis, we also examined the overall rates of taking these heart drugs and of receiving regular care from a cardiologist. In addition, we conducted multivariate statistical analyses to ensure that any differences we found did not change when we took into account the effects of other background and health-related factors influencing the use of cholesterol-lowering drugs, beta-blockers, and aspirin. Finally, we conducted a multivariate statistical analysis to identify variables associated with having regular cardiology appointments. All of our analyses excluded respondents with missing physician information or who completed the interview in Spanish. Table II.2 presents the results of a logistic regression analysis predicting use of cholesterol-lowering drugs. (A stylized sketch of how such models produce the reported odds ratios appears after the aspirin results below.) The outcome variable is dichotomous: “1” indicates that the respondent takes cholesterol-lowering drugs; “0” indicates that he or she does not. The regression uncovered four statistically significant factors—cholesterol-lowering drugs were taken more often by respondents with regular cardiology appointments, by respondents aged 67 to 73 (or 65 to 71 at the time of the heart attack), by respondents claiming that their health was very good or excellent, and by respondents without major comorbidities at the time of the heart attack.
(Table II.2 reported odds ratios and 95 percent confidence intervals for each variable, including: called in for interview versus reached by phone, .77-2.59; aged 67 to 73 years versus aged 74 to 86, 1.71-5.09; California resident versus other six states, .79-2.31; and very good current health versus good, fair, or poor, 1.19-4.13.)
To control for background factors, the first seven variables were kept in the equation regardless of their statistical significance. The original regression equation also included other variables that were dropped from this final analysis because none were statistically significant. The variables that were dropped, along with their coefficients and probability levels in the original equation, are as follows: high current income (.20, p=.52), some college education (.56, p=.16), heart attack severity (.03, p=.89), and heart function (–.13, p=.61). Table II.3 shows the results of a logistic regression analysis predicting use of beta-blockers. The outcome variable is dichotomous: “1” indicates that the respondent takes beta-blockers; “0” indicates that he or she does not. The regression uncovered four statistically significant factors—beta-blockers were taken more often by respondents with regular cardiology appointments, by respondents with current income above the median for our sample, by respondents who had attended college, and by respondents with relatively good heart function measurements. In addition, the control variable indicating a valid heart function measurement was also statistically significant. The variable identifying ideal candidates for beta-blockers did not influence the actual use of beta-blockers.
(Table II.3 reported odds ratios and 95 percent confidence intervals for each variable, including: called in for interview versus reached by phone, .77-2.54; aged 67 to 73 years versus aged 74 to 86, .61-1.73; California resident versus other six states, .36-1.11; and valid heart function measure versus missing data, .07-.75.)
Heart function has three values, with the levels indicating left ventricular ejection fractions below 35, 35 to 50, and above 50. To control for background factors, the first seven variables were kept in the equation regardless of their statistical significance. The original regression equation also included other variables that were dropped from this final analysis because none were statistically significant. The variables that were dropped, along with their coefficients and probability levels in the original equation, are as follows: very good current health (–.19, p=.81), heart attack severity (.21, p=.22), and major comorbidity (.07, p=.80).
Table II.4 presents our logistic regression analysis for aspirin. The outcome variable is dichotomous: “1” indicates that the respondent takes aspirin; “0” indicates that he or she does not. The regression uncovered three statistically significant factors—aspirin was taken more often by respondents who had attended college, by respondents with relatively good heart function measurements, and by respondents identified as ideal candidates for aspirin therapy. The control variable indicating a valid heart function measurement was also statistically significant. The variable for regular cardiology appointments approached statistical significance (probability level = .10) but did not reach the required threshold.
(Table II.4 reported odds ratios and 95 percent confidence intervals for each variable, including: called in for interview versus reached by phone, .52-1.86; aged 67 to 73 years versus aged 74 to 86, .54-1.67; California resident versus other six states, .45-1.45; and valid heart function measure versus missing data, .09-.82.)
Heart function has three values, with the levels indicating left ventricular ejection fractions below 35, 35 to 50, and above 50. To control for background factors, the first seven variables were kept in the equation regardless of their statistical significance. The original regression equation also included other variables that were dropped from this final analysis because none were statistically significant. The variables that were dropped, along with their coefficients and probability levels in the original equation, are as follows: high current income (.16, p=.61), very good current health (.60, p=.15), heart attack severity (.03, p=.86), and major comorbidity (–.50, p=.09).
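The models behind tables II.2 through II.4 are standard logistic regressions whose coefficients are exponentiated into odds ratios. The stylized sketch below—synthetic data and statsmodels, our own construction rather than the study's actual code—shows how the reported odds ratios and confidence intervals are obtained; an interval that excludes 1 marks a significant factor.

```python
# Stylized sketch (synthetic data; not the study's actual code) of the
# logistic regressions reported in tables II.2 through II.4.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 284  # size of the analysis subset
df = pd.DataFrame({
    "regular_cardiology": rng.integers(0, 2, n),
    "aged_67_73": rng.integers(0, 2, n),
    "very_good_health": rng.integers(0, 2, n),
    "major_comorbidity": rng.integers(0, 2, n),
})
# Synthetic outcome loosely mimicking the reported directions of effect.
xb = (-1.0 + 0.7 * df.regular_cardiology + 1.0 * df.aged_67_73
      + 0.8 * df.very_good_health - 0.7 * df.major_comorbidity)
df["takes_chol_drug"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-xb)))

fit = smf.logit(
    "takes_chol_drug ~ regular_cardiology + aged_67_73"
    " + very_good_health + major_comorbidity",
    data=df,
).fit(disp=0)

# Exponentiating the coefficients yields the odds ratios and 95 percent
# confidence intervals of the kind shown in the tables.
print(np.exp(fit.params))
print(np.exp(fit.conf_int()))
```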
Table II.5 presents our logistic regression analysis for regular cardiology appointments. The outcome variable is dichotomous: “1” indicates that the respondent had regular appointments with a cardiologist; “0” indicates that he or she did not. The regression uncovered three statistically significant factors—respondents who were white, younger, or who had suffered relatively severe heart attacks had regular appointments with a cardiologist more often than other respondents.
(Table II.5 reported odds ratios and 95 percent confidence intervals for each variable, including: called in for interview versus reached by phone, .55-1.71; aged 67 to 73 years versus aged 74 to 86, 1.06-2.86; and California resident versus other six states, .59-1.63.)
To control for background factors, the first seven variables were kept in the equation regardless of their statistical significance. The original regression equation also included other variables that were dropped from this final analysis because none were statistically significant. The variables that were dropped, along with their coefficients and probability levels in the original equation, are as follows: high current income (–.08, p=.78), very good current health (–.29, p=.42), some college education (–.01, p=.98), major comorbidity (.13, p=.61), and heart function (–.22, p=.31).
In addition to obtaining official agency comments from HCFA, we asked the following individuals to review an early draft of this report. Their comments prompted us to expand the scope of our analyses and to consider more fully several alternative explanations for our findings. We gratefully acknowledge their assistance.
John Ayanian, M.D., M.P.P., Assistant Professor, Division of General Medicine, Brigham and Women’s Hospital, and Department of Health Care Policy, Harvard Medical School
Carolyn Clancy, M.D., Director, Center for Outcomes and Effectiveness Research, and Acting Director, Center for Primary Care Research, Agency for Health Care Policy and Research
James Cleeman, M.D., Coordinator, National Cholesterol Education Program; National Heart, Lung, and Blood Institute; National Institutes of Health
Robert Hurley, Ph.D., Associate Professor, Department of Health Administration, Medical College of Virginia
Charles Alan Lyles, Sc.D., Assistant Professor, Department of Health Policy Management, School of Hygiene and Public Health, Johns Hopkins University
Barbara Starfield, M.D., Professor, Department of Health Policy and Management, School of Hygiene and Public Health, Johns Hopkins University
| Pursuant to a congressional request, GAO reviewed the potential differences in treatment patterns for health maintenance organizations (HMO) patients treated by specialists and those treated by generalist physicians, focusing on: (1) the proportion of Medicare heart attack survivors enrolled in HMOs who take cholesterol-lowering drugs, beta-blockers, and aspirin; and (2) whether Medicare heart attack survivors in HMOs regularly treated by a cardiologist are more likely to take cholesterol-lowering drugs, beta-blockers, and aspirin than those who do not have regular cardiology appointments. GAO noted that: (1) the ongoing use of cholesterol-lowering drugs and beta-blockers reported by Medicare heart attack survivors enrolled in HMOs generally parallels the patterns for heart attack survivors in the U.S. health care system overall; (2) as others have found for the general patient population, GAO found a much smaller proportion of respondents reported taking cholesterol-lowering drugs (36 percent) or beta-blockers (40 percent) than would be expected if everyone who would benefit from using these drugs were taking them; (3) Medicare HMO heart attack survivors with regular cardiology care--40 percent of GAO's survey respondents--were more likely to take the recommended drugs than those without regular appointments with a cardiologist; (4) enrollees who saw cardiologists regularly for their cardiac care were approximately 50 percent more likely to take cholesterol-lowering drugs and beta-blockers--a finding consistent with other comparisons of care provided by cardiologists and generalists; (5) although factors such as age, education, self-reported health status, and the presence of other illnesses also influenced who took cholesterol-lowering drugs and beta-blockers, they did not account for the higher use levels observed among patients who had routine cardiology appointments; (6) still, even patients of cardiologists often did not take one or both of these drugs; (7) by contrast, the overall use of aspirin was much higher--71 percent--and while regular patients of cardiologists were still more likely to take aspirin, the difference between them and other patients was smaller and not statistically significant (75 percent versus 68 percent); (8) on the whole, GAO's results for heart attack survivors treated by cardiologists and generalist physicians in Medicare HMOs are consistent with those of other studies of physician specialty differences in the United States; and (9) GAO's finding that patients under the regular care of cardiologists are more likely to take recommended medications reinforces the findings of the small number of other studies of physician specialty differences that are specifically concerned with HMO members and extends those findings to an older population and to a different medical condition. |
SSA administers two disability programs that provide monthly cash benefits to eligible individuals: (1) Disability Insurance (DI) for individuals (and their dependents) who have paid into the Disability Insurance Trust Fund and (2) Supplemental Security Income (SSI) for low-income individuals. To be eligible for DI or SSI benefits based on a disability, an individual must have a medically determinable physical or mental impairment that (1) prevents the individual from engaging in any substantial gainful activity and (2) is expected to result in death or has lasted or is expected to last at least 1 year. Federal law generally requires CDRs to be conducted at least once every 3 years for all DI beneficiaries whose disabilities are not considered permanent, and at intervals determined appropriate by SSA for those whose impairments are considered permanent. For SSI, federal law generally requires SSA to (1) conduct CDRs for infants during their first year of life if they are receiving SSI benefits due in part to low birth weight, and at least once every 3 years for SSI children under age 18 if their impairments are considered likely to improve, and (2) review the cases of all SSI children beginning on their 18th birthday to determine whether they are eligible for disability benefits under adult disability criteria. SSA may waive the requirement to conduct periodic legislatively required CDRs on a state-by-state basis. SSA may also conduct CDRs that are not required by law as it deems appropriate. SSA contracts with state Disability Determination Services (DDS) agencies to initially determine whether applicants are disabled and to conduct periodic CDRs to determine whether beneficiaries continue to be disabled. DDS examiners assess whether individuals are eligible for benefits based on several criteria, including their current medical condition and ability to work. At the time beneficiaries enter the DI or SSI programs or continue their benefits following a CDR, a DDS examiner determines beneficiaries’ due dates for a subsequent CDR based on their potential for medical improvement. Beneficiaries classified as “medical improvement expected” are generally scheduled for a CDR within 6 to 18 months, beneficiaries classified as “medical improvement possible” are scheduled once every 3 years, and beneficiaries classified as “medical improvement not expected” are scheduled once every 5 to 7 years. To cost-effectively manage its CDR workload, SSA conducts CDRs in different ways. In general, beneficiaries with a high likelihood of medical improvement are referred for a full medical review—an in-depth assessment of a beneficiary’s medical and vocational status. Beneficiaries with a low likelihood of medical improvement are, at least initially, sent a questionnaire known as a mailer. If beneficiaries respond to a mailer in certain ways, SSA may refer these individuals for a full medical review. In contrast to mailers, full medical reviews are more labor intensive and expensive. Full medical reviews result in a decision to either cease or continue an individual’s benefits. In fiscal year 2013, the cessation rate for CDRs involving full medical reviews was about 19 percent, whereas the cessation rate for all CDRs including mailers was about 5 percent. Each year, SSA allocates a portion of its program integrity budget to CDRs, which affects the number of full medical reviews and mailers that the agency initiates during the year.
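The diary classifications above map to review intervals in a straightforward way. A sketch follows; the category labels are the report's, while the function framing is our own.

```python
# Review intervals implied by the medical-improvement classifications
# described above (labels from the report; function framing is ours).
def next_cdr_window_months(classification):
    windows = {
        "medical improvement expected": (6, 18),      # within 6 to 18 months
        "medical improvement possible": (36, 36),     # once every 3 years
        "medical improvement not expected": (60, 84), # once every 5 to 7 years
    }
    return windows[classification]

print(next_cdr_window_months("medical improvement expected"))  # -> (6, 18)
```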
When the number of cases due for a CDR exceeds SSA’s capacity to conduct full medical reviews and mailers, the cases not initiated during the year are considered backlogged for future review. The number of CDRs completed as full medical reviews, as mailers only, or backlogged varied during fiscal years 2003 through 2013 (see fig. 1). After the authority for special funding to process CDRs expired in fiscal year 2002, backlogged CDRs increased from about 100,000 cases in fiscal year 2003 to more than 1 million in fiscal year 2007, reaching a peak of nearly 1.5 million in fiscal year 2009. At the same time, the number of full medical reviews fell from nearly 670,000 in fiscal year 2003 to less than 190,000 in fiscal year 2007 before rebounding to nearly 429,000 in fiscal year 2013. SSA estimates the accuracy rate of CDRs and separately estimates the cost savings that result from CDRs. In addition to annually reporting the nationwide accuracy rate of all CDRs to the Congress, SSA internally tracks CDR accuracy rates by state and generates estimates for the accuracy of cessations and continuances, separately as well as combined. For fiscal year 2013, SSA reported an accuracy rate of 97.2 percent for CDR decisions. For the same year, SSA reported an estimated ratio of federal program savings to costs for performing CDRs as $15 to $1. Savings from CDRs include federal benefits that would be paid to individuals were it not for a CDR that resulted in a cessation. Such benefits include those from Medicare and Medicaid because in certain situations individuals’ eligibility for DI or SSI confers eligibility for these other programs. Because SSA does not complete all CDRs as scheduled due to competing priorities and existing resources, the agency must decide which cases will receive a full medical review. SSA uses a range of inputs to prioritize which CDRs to conduct, such as:
Statutory requirements: Legal requirements to review SSI children beginning at age 18 to determine if they are eligible for benefits under adult disability criteria, and reviews of children up to 1 year old who are receiving SSI benefits due in part to low birth weight.
SSA policies: Rules established by SSA to guide prioritization. For example, SSA prioritizes cases with particular responses to its mailers and cases with a “medical improvement expected” designation that are coming up for review for the first time.
Statistical models: A set of statistical models that score each case according to the likelihood of medical improvement, typically the sole criterion for ceasing benefits.
SSA’s prioritization process determines which CDRs are initiated and in what form: full medical review or mailer. To begin, SSA initiates full medical reviews of cases that fall into two high-priority categories: first, statutory requirements, and then SSA policies. Once full medical reviews of all high-priority cases are initiated, SSA prioritizes the remaining cases by using its statistical models. Full medical reviews of cases with the highest scores (i.e., highest likelihood of medical improvement) are initiated as resources permit, first by beneficiary group (i.e., DI, SSI children, SSI adults) and then by the statistical scores of cases within the group (see fig. 2). Cases with lower scores (i.e., lower likelihood of medical improvement) receive a mailer or are backlogged for future review.
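The ordering just described can be sketched as a sort key: statutory cases first, then policy-driven cases, then the remainder ranked by beneficiary group and model score. This is our own simplified rendering of the process, not SSA's actual system; field names are hypothetical.

```python
# Simplified sketch of the prioritization described above (our framing,
# not SSA's actual system; field names are hypothetical).
GROUP_ORDER = {"DI": 0, "SSI child": 1, "SSI adult": 2}

def priority_key(case):
    if case["statutory"]:
        tier = 0
    elif case["policy_flag"]:
        tier = 1
    else:
        tier = 2
    # Within the discretionary tier, rank by beneficiary group, then by
    # model score (higher likelihood of medical improvement first).
    return (tier, GROUP_ORDER[case["group"]], -case["model_score"])

def select_full_medical_reviews(cases, capacity):
    ranked = sorted(cases, key=priority_key)
    return ranked[:capacity]  # the rest receive mailers or are backlogged
```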
The extent to which the statistical models have been used to select cases for full medical reviews has varied by year, but the models have been consistently used for determining who receives mailers. Specifically, we found that the extent to which SSA’s statistical models were used to select cases for full medical review was related to the combination of budget fluctuations for CDRs, SSA’s statutory requirements, and agency policies. We estimate that the statistical models were the basis for selecting 11 to 60 percent of full medical reviews completed annually in fiscal years 2003 through 2013 (see fig. 3). In contrast, SSA has used the models consistently since 1993 to determine which cases should receive a mailer. Although SSA annually assesses its models’ performance, the agency has not updated the models since 2007. The models’ effectiveness depends on their ability to accurately predict beneficiaries’ likelihood of medical improvement. To test the accuracy of the models, SSA conducts an annual validation process using a sample of completed cases to evaluate how well the models predicted medical improvement. According to SSA officials, the model validation process has shown that the models’ accuracy in predicting medical improvement has not degraded substantially in recent years. In addition to model validation, in the past SSA has conducted periodic re-estimation of its statistical models to help ensure that they are up to date. In re-estimating its models, SSA updates the relationship between existing variables and medical improvement and tests whether new variables should be included. Re-estimation is particularly important when advances in medicine and assistive technology affect people’s ability to work. In recent years, SSA has changed its classification of certain beneficiaries’ impairments to reflect advances in medical knowledge. For example, in 2015, SSA revised its codes for cancer in light of new diagnoses and treatments. Because these codes can appear as variables in the statistical models, it is possible that the models are no longer accurately capturing the effect of cancer-related impairments on the likelihood of medical improvement. Although SSA officials believe that some advances would not markedly affect the accuracy of the models, the agency has not completed a re-estimation to confirm the effect of such changes. In addition, demographic changes in the underlying population of disability beneficiaries, which has grown substantially in recent years due in part to baby boomers reaching their disability-prone years, could also affect the accuracy of the models. The contractor that SSA hired to handle its last model re-estimation in 2007 provided SSA with a set of programs that would allow the agency to re-estimate the models in-house. Regular re-estimation and updating of predictive models is a best practice, and the contractor anticipated that SSA would do so at least every 3 years.
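One simple form of the validation check described above is a calibration table: predicted medical-improvement scores compared with observed outcomes by score decile. The sketch below uses synthetic arrays and is our own illustration of the idea, not SSA's validation procedure.

```python
# Hypothetical sketch of a model-validation check: compare predicted
# medical-improvement scores with observed CDR outcomes by score decile.
import numpy as np
import pandas as pd

def calibration_table(scores, improved):
    """Mean observed improvement rate within each decile of model score;
    a well-calibrated model shows rates rising across deciles."""
    df = pd.DataFrame({"score": scores, "improved": improved})
    df["decile"] = pd.qcut(df["score"], 10, labels=False, duplicates="drop")
    return df.groupby("decile")["improved"].mean()

# Synthetic illustration only:
rng = np.random.default_rng(1)
scores = rng.random(5000)
improved = rng.binomial(1, scores * 0.4)
print(calibration_table(scores, improved))
```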
Model accuracy leads to savings for SSA in two ways. First, model accuracy is important for identifying cases that are unlikely to result in medical improvement and can therefore be handled as mailers. According to SSA’s contractor, the models’ last re-estimation in 2007 increased the accuracy of the models while allowing SSA to process over 25,000 additional cases as mailers, potentially saving the agency over $20 million by performing fewer full medical reviews. Second, greater model accuracy means the models are more likely to correctly assign high scores to cases most likely to demonstrate medical improvement, potentially leading to more medical cessations among beneficiaries who receive full medical reviews. Although SSA acknowledged the importance of re-estimating its models again, it has yet to take concrete actions toward doing so. In December 2015, SSA officials indicated that they were in the process of re-estimating the models, but the agency had not yet documented its efforts. In addition, according to SSA officials, the agency had not yet established plans to re-estimate the models on a regular basis. Without re-estimating its models on a regular basis, the agency risks losing the predictive accuracy of the models and could compromise its ability to use CDR resources efficiently. Although SSA considers cost savings when prioritizing CDR cases, it does not do so in a manner that will maximize potential savings. According to federal internal control standards, federal agencies should ensure effective stewardship of public resources. The order in which SSA prioritizes beneficiary groups for CDRs generally aligns with the average savings per full medical review conducted for those groups in recent years. For example, the two highest priority groups—statutorily required reviews for age-18 redeterminations and low birth-weight children—have the highest average savings in foregone disability benefits as a result of full medical reviews (see fig. 4). However, the priority ranking of beneficiary groups is not exclusively reliant upon the average savings achieved from conducting full medical reviews, because the agency takes other factors into consideration. For example, the average savings per full medical review of children receiving SSI benefits on the basis of low birth weight is higher than that of SSI children at age 18. Although reviews of 18-year-olds are automatically initiated 2 months before the beneficiary’s 18th birthday, not all reviews of low birth-weight children are conducted as scheduled. Specifically, in fiscal years 2009 through 2014, approximately 3,900 to 21,700 low birth-weight reviews were backlogged annually. In addition, although the SSI Other Children group has a higher average savings in foregone disability benefits than DI beneficiaries, SSI Other Children are prioritized after DI beneficiaries. In fiscal year 2013, SSA conducted more than twice as many full medical reviews on DI beneficiaries as on SSI Other Children beneficiaries and backlogged tens of thousands more full medical reviews for SSI Other Children than for DI beneficiaries. DI cases have been given priority over SSI Other Children partly to protect the Disability Insurance Trust Fund, which is the source of benefit payments to most DI recipients. However, recent action to address the solvency of the Disability Insurance Trust Fund somewhat weakens this rationale. If SSA had switched the number of full medical reviews conducted for these groups in 2013, it is possible that the agency would have generated over $100 million more in savings. Furthermore, in focusing on beneficiary groups, SSA’s prioritization process does not capture any differences among subgroups’ average savings in foregone disability benefits as a result of full medical reviews.
For example, the DI group can be split into four subgroups, and the average lifetime savings per full medical review among these subgroups differed by as much as approximately $3,000 (or about 21 percent) in recent years (see fig. 5). The aggregate mix of cases across different beneficiary groups reviewed during a fiscal year directly affects the agency’s total savings from conducting CDRs. If SSA were to shift the mix of discretionary cases it reviews among subgroups within beneficiary groups while still taking likelihood of medical improvement into account, it could realize greater savings. For example, shifting the mix of DI cases reviewed to better align with historical average savings performance among different DI subgroups would likely increase SSA’s total savings. Furthermore, we reported in 2012 that certain subgroups of SSI children beneficiaries, such as those with speech and language disorders as well as other mental impairments, demonstrated higher rates of initial cessation (i.e., prior to the appeals process) stemming from full medical review than other SSI children beneficiaries. As a result, reviews of these subgroups are more likely to contribute to savings for the agency than other non-required SSI children reviews. Without considering average savings at the beneficiary subgroup level, SSA may not be maximizing the total savings it realizes from conducting full medical reviews. In addition to differences in savings from shifts in the aggregate mix of cases receiving full medical reviews, savings can also differ when comparing individual cases. When an individual’s benefits are ceased as the result of a CDR, the foregone benefits represent savings to the federal government. The amount of savings depends on various factors that affect how much SSA would have paid had the individual continued to receive disability benefits over time. These factors include the individual’s age, life expectancy, and monthly benefit payment. For example, two individuals who are different ages but are otherwise similar (e.g., they live in the same state, have the same benefit amount, and have the same likelihood of medical improvement as determined by SSA’s statistical models) would generate different expected savings from a CDR because the younger individual would likely receive benefits for a longer period of time. Similarly, two individuals who have different benefit amounts but are otherwise similar would generate different expected savings from a CDR because the individual with higher monthly benefits would likely receive greater total benefits over time. Prioritizing the CDR for the younger individual or the individual with a higher benefit level could result in greater savings for SSA. The simplified scenarios below illustrate this point; however, if SSA were to further incorporate such factors for individuals into its CDR prioritization process, a more complete set of inputs and assumptions would be needed (see fig. 6).
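In the spirit of those scenarios, the sketch below prices the foregone-benefit stream for two otherwise-similar beneficiaries. Every input—the benefit amount, the remaining years of receipt, and the discount rate—is an illustrative assumption of ours, not an SSA actuarial parameter.

```python
# Hypothetical illustration of why age and benefit amount change expected
# savings from a cessation. All inputs are assumptions, not SSA's actuals.
def expected_savings(monthly_benefit, years_remaining, annual_discount=0.03):
    """Present value of foregone benefits if benefits cease today."""
    monthly_rate = annual_discount / 12
    months = int(years_remaining * 12)
    return sum(monthly_benefit / (1 + monthly_rate) ** m
               for m in range(1, months + 1))

# Two beneficiaries with the same $1,000 monthly benefit: the younger one,
# with more years of expected receipt, generates larger expected savings.
print(round(expected_savings(1_000, 10)))  # roughly $104,000
print(round(expected_savings(1_000, 25)))  # roughly $211,000
```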
Despite the potentially substantial differences in savings among beneficiaries, SSA lacks a mechanism for factoring expected savings from benefit cessation into its CDR prioritization process on a case-specific basis. As a program integrity effort, CDRs are intended to assess the continued eligibility of beneficiaries to ensure that payments are made only to those individuals who should be receiving them, and SSA’s statistical models use an appropriate proxy of eligibility—potential for medical improvement—to prioritize cases for review. However, for beneficiaries with the same likelihood of medical improvement, SSA officials told us the agency does not further differentiate among individuals in the same beneficiary group on the basis of potential benefit savings. In SSA’s current prioritization process, the individuals depicted in the hypothetical scenarios in figure 6 would be equally likely to receive a full medical review because the agency does not consider the potential savings from individual cessations. As demonstrated in the analysis presented above, SSA could miss additional savings because it does not further consider beneficiaries’ potential savings when prioritizing cases for full medical review. To assess the ability of DDSs to correctly apply policy and fully document CDRs, SSA performs quality reviews of a sample of continuances and cessations. The SSA quality reviewers who perform these reviews have guidance for checking specific elements of the decisions, and they are guided through a step-by-step computer program for conducting and documenting the reviews. SSA’s quality reviewers check CDRs for three types of errors: (1) decision errors, which include incorrect decisions or incomplete evidence to support a decision; (2) date errors, including incorrect benefit cessation dates; and (3) administrative errors. The reviewers return CDRs with decision errors to the DDS to perform additional work but generally correct those with date and administrative errors themselves. SSA uses these quality reviews for multiple purposes. First, SSA estimates state, regional, and national CDR accuracy rates—the percentage of CDRs estimated to be accurate on the basis of a statistical sample. SSA also uses these accuracy rates to help monitor DDSs’ performance and shares this information with the DDSs. In addition, SSA uses the results of quality reviews to correct identified errors before the DDS decisions take effect. Although SSA has reported high nationwide CDR accuracy rates in recent years, we identified shortcomings in how SSA prevents errors, defines and reports accuracy, and samples CDRs for quality review:
Preventing errors: Although SSA tracks the number and types of CDR decision errors and disseminates this information to state DDSs, it does not analyze the characteristics of CDR errors to help identify error trends associated with particular types of cases and address root causes. According to SSA officials, SSA probes CDR quality review data to uncover error trends by, for example, general groupings of impairments such as mental disorders. However, SSA does not analyze the data to uncover error trends for specific impairments, beneficiary types, or other characteristics. Federal internal control standards stipulate that management should assess the quality of performance over time and promptly resolve findings from audits and other reviews. According to officials, SSA does not analyze the characteristics of CDRs with errors because CDR accuracy rates are generally high and resources are limited. In addition, officials stated that SSA does not have sufficient data to do statistical modeling for such analyses. However, it is possible to analyze the characteristics of CDRs with errors by comparing relevant percentages without modeling, using data from multiple years if necessary. According to SSA and DDS officials, certain types of cases may be more error-prone than others. For example, cases involving mental impairments are thought to be relatively error-prone because they can be more challenging to document.
In addition, officials reported challenges in conducting CDRs of low birth-weight children receiving SSI benefits because of the lack of documentation of other impairments they may have. However, because SSA has not analyzed the incidence of inaccurate CDRs by impairment, beneficiary type, or other characteristics, it cannot efficiently identify common types of errors and their root causes to help the DDSs take steps to prevent them.
Defining and reporting accuracy: In determining CDR accuracy rates, SSA does not include date errors, including incorrect cessation dates. As a result, decision makers do not have a complete picture of the CDR errors that affect disability payments. We have previously reported that to be useful, performance information must be complete, accurate, and valid, among other factors. However, per SSA regulation, the agency does not consider date errors when calculating accuracy rates because date errors do not affect the decision to cease or continue benefits, according to officials. Nonetheless, such errors can affect the number of payments a beneficiary receives and thus SSA’s costs. For example, cessation date errors in a CDR can result in some beneficiaries receiving payments for longer or shorter periods of time, and thus accruing overpayments or underpayments for the period in question. Without including date errors in its reported accuracy rates, SSA does not provide its management and other decision makers and the public with complete information about errors that can affect disability payments. In addition, if SSA had counted date errors in CDR cessations, its accuracy rate for cessations in fiscal year 2014 would have fallen 1.6 percentage points from 95.5 percent to 93.9 percent. For some states, the effect of considering these errors is more pronounced. We examined SSA’s fiscal year 2014 cessation accuracy rate estimates and found that for 13 states, the accuracy rates would have decreased by at least 2 percentage points had SSA counted date errors; and in one state, the accuracy rate for cessations would have fallen 7 percentage points, from 95.4 percent to 88.4 percent. SSA regulations define accuracy in this context as the percentage of cases that do not have to be returned to state DDSs for further development or correction of decisions based on evidence in the files. See 20 C.F.R. §§ 404.1643, 416.1043. In explaining the accuracy standard, SSA stated that its primary purpose was to improve the initial claims process and ensure that only properly entitled claimants receive disability benefits and that its approach was to specify outputs (i.e., performance accuracy), rather than specifying all inputs that could go into the standard. SSA conducts stewardship reviews, which examine the non-medical quality of various decisions related to benefit payments, including date designations. To do so, SSA reviews a sample of individuals receiving payments. In conducting and reporting on these reviews, however, SSA does not specifically focus on CDRs.
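The effect of counting date errors is simple arithmetic over the sampled decisions. The sketch below uses hypothetical case counts chosen to echo the reported nationwide figures; they are our illustration, not SSA's actual sample sizes.

```python
# Accuracy with and without date errors counted, using hypothetical case
# counts chosen to echo the reported nationwide figures (95.5% -> 93.9%).
def accuracy_rate(n_sampled, n_decision_errors, n_date_errors,
                  count_date_errors=False):
    errors = n_decision_errors + (n_date_errors if count_date_errors else 0)
    return 100 * (1 - errors / n_sampled)

n, decision_err, date_err = 1000, 45, 16
print(accuracy_rate(n, decision_err, date_err))                          # 95.5
print(accuracy_rate(n, decision_err, date_err, count_date_errors=True))  # 93.9
```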
Sampling CDRs for quality review: SSA produces accuracy rate estimates by state DDS, but its sampling approach does not reliably and efficiently generate accuracy rate estimates for continuances and cessations separately in every state. According to federal guidance for developing statistical estimates, agencies should develop a sampling plan that is reflective of the level of detail and precision needed of the key estimates. CDR accuracy rates vary by state, and continuances are consistently more accurate than cessations. In fiscal year 2014, for example, the states’ estimated CDR accuracy rates varied from 92.4 percent to 99.8 percent. In the same year, the estimated accuracy rate for continuances was 98.3 percent nationwide, whereas the equivalent for cessations was 95.5 percent. Moreover, the range of accuracy rates across states is much larger for cessations. In fiscal year 2014, state-level accuracy rates for cessations ranged from 78.3 to 100 percent, while the accuracy rates for continuances ranged from 92 to 100 percent. To monitor CDR accuracy, SSA randomly selects about 70 continuances and 70 cessations for quality review each quarter from each state. Despite this sampling approach, SSA officials stated that their sampling design is not intended to produce precise estimates for continuances and cessations separately by state. However, precise accuracy rate estimates for continuances and cessations separately by state are needed to monitor DDS performance because of the difference in accuracy by decision type and because the state DDSs are managed separately. In analyzing CDR workload and accuracy data, we found that SSA’s sampling approach produced accuracy rate estimates with margins of error that were consistently wide in seven states and consistently narrow in six states for either one type of CDR decision or both. A wide margin of error occurs when there are not enough CDR decisions in the sample to produce a reliable estimate. In these instances, such as in Vermont and Wyoming, we found SSA could not produce an estimate with a margin of error of plus or minus 5 percentage points using its current approach unless it sampled more CDR decisions. When SSA does not sample enough decisions and produces estimates with wide margins of error, decision makers may be relying on misleading information to assess CDR accuracy. Conversely, when SSA samples too many decisions and produces estimates with margins of error that are narrower than necessary to achieve reliable results, the agency may be wasting time and resources on such quality reviews.
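The trade-off between sample size and margin of error follows from the standard formula for a proportion. The sketch below applies the normal approximation; the small-state figures (40 cessations, 90 percent accuracy) are hypothetical inputs of ours, not figures from the report.

```python
import math

# Margin of error for an estimated accuracy rate from a simple random
# sample (normal approximation), and the sample size needed for a given
# margin. The small-state inputs below are hypothetical.
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

def n_for_margin(p, margin, z=1.96):
    return math.ceil((z / margin) ** 2 * p * (1 - p))

# ~70 cessations per quarter (~280 per year) at 95.5 percent accuracy:
print(round(margin_of_error(0.955, 280), 3))  # ~0.024, i.e. +/- 2.4 points
# A state with only 40 cessations available in a year at 90 percent:
print(round(margin_of_error(0.90, 40), 3))    # ~0.093, wider than +/- 5 points
print(n_for_margin(0.90, 0.05))               # ~139 decisions would be needed
```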
To generate the overall CDR cost savings rate (i.e., the amount saved for every dollar invested in CDRs) for a particular year, SSA divides the present value of future benefit savings by SSA's actual cost of conducting CDRs during the relevant year (see fig. 7). To determine the cost of conducting CDRs, SSA considers its relevant expenses as well as those of the state DDSs. We determined that SSA's methods and assumptions for estimating CDR cost savings were reasonable, but, in certain respects, inconsistent with guidance for conducting cost savings analysis of federal programs. Specifically, we identified two areas of weakness: Sensitivity analysis: According to federal guidance for conducting cost savings analysis, "major assumptions should be varied and net present value and other outcomes recomputed to determine how sensitive outcomes are to changes in the assumptions." In reviewing SSA's approach to estimating CDR cost savings for fiscal year 2012, we determined that SSA did not conduct sensitivity analysis of the overall cost savings rate. However, SSA separately performed some limited sensitivity analysis on savings from DI and SSI, which collectively represented about 82 percent of the savings that SSA forecasted for fiscal year 2012 CDRs. SSA calculated the effect of using inputs from fiscal year 2011, such as the average benefit amount, on the savings estimates for fiscal year 2012. However, SSA did not vary its assumptions (e.g., from optimistic to pessimistic) to generate a range of estimated savings. In addition, SSA has not reported the effect of changing its assumptions about SSI and DI savings on the overall cost savings estimate. According to an SSA official, doing sensitivity analysis on the reported cost savings estimate would require additional coordination with CMS about Medicare and Medicaid. However, SSA could conduct more complete sensitivity analysis by, for example, estimating a combined range of savings from DI and SSI without additional coordination. By not including a range of estimated savings for at least SSA's programs, decision makers lack data on the extent to which the estimates could vary under different assumptions. Documentation: According to federal guidance, models used in cost savings analysis should also be well documented and, where possible, available to facilitate independent review. SSA uses multiple complex models to estimate the cost savings of CDRs, but it has limited documentation about its methods, including data sources, assumptions, and limitations that factor into the estimates. For example, SSA has not documented how it estimates the number and timing of beneficiaries who stop receiving disability benefits because of a CDR, as well as the number of former beneficiaries who will successfully reapply for benefits. According to SSA officials, the agency has not yet documented the assumptions and procedures used to calculate CDR cost savings because of competing priorities and limited resources. Consequently, knowledge of SSA's models is limited to the few SSA actuaries who work with them, and this information is not readily available or transferable to others, including external reviewers. In light of SSA's current backlog of CDRs and the long-term financial challenges of the Disability Insurance Trust Fund, conducting timely, high-quality, and cost-effective CDRs is particularly important. In an effort to use its resources efficiently, SSA applies several sound practices to help prioritize CDRs.
However, without further integrating comparative cost savings information in its prioritization process, SSA is missing an opportunity not only to focus on CDRs that are likely to save the federal government the most money, but also to more efficiently use its resources for program integrity work. Maximizing cost savings is not the only goal of this work, but it is an important criterion to help SSA prioritize CDRs and ensure that beneficiaries are being more effectively selected for review. Further, although SSA has an extensive process for reviewing the quality of CDR decisions and a high overall accuracy rate, until the agency systematically uses available data to identify error-prone cases and root causes, it will be hard-pressed to prevent similar errors from recurring. In addition, absent tracking all meaningful errors that it identifies, such as date errors, the agency and other stakeholders lack an accurate sense of the true error rate of CDRs. Similarly, SSA’s current approach to sampling state decisions means the agency may be relying on misleading performance information for making management decisions. SSA has demonstrated that CDRs are cost-effective, and it applies sound methods and assumptions for estimating cost savings. However, because it does not vary the assumptions that it uses to estimate a range of potential returns on investment for CDRs, the Congress and other stakeholders do not have complete information on the precision of these estimates and the extent to which they could vary with changes in assumptions. Finally, SSA’s limited documentation about its actuarial models leaves the agency vulnerable in the event of turnover of the few staff who use these models and challenges external reviewers’ ability to understand and audit the integrity of its models. We recommend that the Acting Commissioner of Social Security: 1. Direct the Deputy Commissioner of Operations to further consider cost savings as part of its prioritization of full medical reviews. Such options could include considering the feasibility of prioritizing different types of beneficiaries on the basis of their estimated average savings and, as appropriate, integrating case-specific indicators of potential cost savings, such as beneficiary age and benefit amount, into its modeling or prioritization process. 2. Direct the Deputy Commissioner of Budget, Finance, Quality, and Management to complete a re-estimation of the statistical models that are used to prioritize CDRs and determine a plan for re-estimating these models on a regular basis to ensure that they reflect current conditions. 3. Direct the Deputy Commissioner of Budget, Finance, Quality, and Management to monitor the characteristics of CDR errors to identify potential root causes and report results to the Disability Determination Services. For example, SSA could analyze CDRs with and without errors to identify trends by impairment, beneficiary type, or other characteristics. 4. Direct the Deputy Commissioner of Budget, Finance, Quality, and Management to regularly track the number and rate of date errors, which can affect benefit payments (e.g., incorrect cessation dates), and consider including those errors in its reported CDR accuracy rates. 5. Direct the Deputy Commissioner of Budget, Finance, Quality, and Management to adjust its approach to sampling CDRs to efficiently produce reliable accuracy rate estimates for continuances and cessations separately in each state. 6. 
Direct the Chief Actuary to conduct sensitivity analyses on SSI and DI's contributions to CDR cost savings estimates and report the results reflecting a range of inputs (e.g., from optimistic to pessimistic). 7. Direct the Chief Actuary to better document the methods, including data sources, assumptions, and limitations, that factor into its estimates of CDR cost savings. We provided a draft of this report to SSA for review and comment, and its written comments are reproduced as appendix II in this report. SSA stated that it generally agreed with our recommendations, but noted that the level of program integrity funding it receives has affected the number of CDRs performed annually and, at times, the size of the CDR backlog. SSA also noted that our report implied that SSA is not focused on the CDRs that are most likely to save the government money and that the report did not fully convey the accuracy of the agency's statistical models and its treatment of CDR errors. We agree that SSA's CDR process is generally designed to use its program integrity resources efficiently and note in our report that SSA applies several sound practices to help prioritize CDRs, including annually assessing its statistical models' performance. However, we maintain that additional steps are warranted to ensure ongoing accuracy of the models and to maximize potential savings. We also note that the agency has a process in place to identify and evaluate errors, but maintain that additional steps could be taken to systematically analyze error trends and uncover root causes of errors. SSA agreed with four of our recommendations, partially agreed with one recommendation, and disagreed with two recommendations. The agency's specific concerns and our responses are described below: Regarding our recommendation to further consider cost savings as part of its prioritization of full medical reviews, SSA partially agreed. Although SSA agreed that it could look for ways to improve its return on conducting CDRs, it also stated that its statistical models and prioritization process already do much of what we recommend. For example, SSA stated that age is already a strong variable in its statistical models. However, these models predict medical improvement and are not designed to take expected cost savings into account. We continue to believe that to maximize expected cost savings SSA could refine its prioritization process by factoring in actuarial considerations. For example, SSA could consider the effect of a beneficiary's age on expected cost savings, in addition to its existing statistical models that account for the effect of age on the likelihood of medical improvement. Regarding our recommendation to complete a re-estimation of the statistical models that are used to prioritize CDRs and determine a plan for re-estimating these models on a regular basis, SSA agreed and stated that it plans to complete its ongoing re-estimation and to document a process for determining when to re-estimate the models in the future. Regarding our recommendation to monitor the characteristics of CDRs with errors to identify root causes, SSA agreed and stated that it reports all errors to the relevant DDS for corrective action. SSA further stated that its identification of root causes is limited by the relatively few reviewed CDRs that have errors. However, in fiscal year 2014 as an example, SSA identified over 600 CDRs with errors.
Although these CDRs make up a small percentage of the CDRs reviewed by SSA that year, the agency could analyze the characteristics of CDRs with errors by comparing relevant percentages without modeling. In addition, SSA could combine data from multiple years if it determined that considering more CDRs with errors would be helpful. Regarding our recommendation to track the number and rate of date errors and consider including them in its reported CDR accuracy rates, SSA disagreed and stated that, per SSA regulation, the agency does not consider date errors when calculating accuracy rates because date errors do not affect the decision to cease or continue benefits. SSA also stated that its stewardship reviews examine the non-medical quality of benefit payment decisions. However, these reviews are not focused on CDRs, and SSA does not report results from them for CDRs specifically. SSA also explained that it does not track the number and rate of date errors because they are infrequent. However, SSA's regulations do not prevent the agency from tracking date errors, and until it does so, SSA cannot definitively determine the frequency of these errors. In addition, we found that considering date errors substantially reduced some states' estimated CDR accuracy rates. Without tracking these errors, SSA cannot assess their effect and consider whether including them in its reported CDR accuracy rates has merit. Regarding our recommendation to adjust its approach to sampling CDRs to efficiently produce reliable accuracy rate estimates for continuances and cessations separately in each state, SSA disagreed and stated that some states do not generate enough CDR decisions, particularly cessations, to support statistically valid samples. However, for states with CDR samples that are consistently too small to produce reliable results, SSA could, for example, pool decisions from more months than it currently does to generate statistically valid samples by state. Conversely, for states with CDR samples that are consistently larger than necessary to efficiently achieve reliable results, SSA could, for example, reduce sample sizes. Because CDR accuracy rates vary by state and cessations are consistently less accurate than continuances, we maintain that SSA should adjust its approach to sampling CDRs. Regarding our recommendation to conduct sensitivity analyses on SSI and DI's contributions to CDR cost savings estimates and report the results reflecting a range of inputs, SSA agreed and stated that it will expand on its current sensitivity analyses as time and resources permit. Regarding our recommendation to better document the methods that factor into its estimates of CDR cost savings, SSA agreed and stated that it will improve and expand its existing documentation as time and resources permit. SSA also provided technical comments, which we incorporated into the report as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Acting Commissioner of Social Security. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made key contributions to this report are listed in appendix III. The objectives of this report were to examine (1) how the Social Security Administration (SSA) selects which Continuing Disability Reviews (CDR) to conduct, (2) the extent to which SSA reviews the quality of CDR decisions, and (3) how SSA calculates cost savings from CDRs. To evaluate how SSA selects which CDRs to conduct, we reviewed relevant federal laws and interviewed SSA officials from the agency's offices of Public Service and Operations Support, Budget, and Quality Improvement. We also reviewed internal SSA documents on the agency's approach to prioritizing and processing CDRs, including its use of statistical models. To evaluate the statistical models that SSA uses to help prioritize CDRs, we reviewed internal SSA documents about the statistical models, including lists of variables, tests of model fit, and detailed technical reports provided by the external contractor that last re-estimated the models. The technical reports provided by the contractor explained how each of the models was developed and tested, including the data sources and variables that were considered and used, how SSA impairment codes were aggregated into impairment groups, and how the functional form and interaction terms were identified. In 2007, the contractor compared the performance of models for adult beneficiaries against that of SSA's prior models—which had last been estimated in 2005—and only specifications and variables that improved model performance were retained. We evaluated the technical specifications and tests of model fit and predictive accuracy for the models for each of the beneficiary cohorts. To estimate the proportion of full medical reviews completed because of their score from SSA's statistical models, we analyzed CDR annual report data from fiscal years 2003 through 2013. We assessed the reliability of these data by reviewing related documentation and interviewing knowledgeable agency officials, and we found these data sufficiently reliable for our purposes. To assist with the analysis, we obtained information from SSA's Office of Public Service and Operations Support on the number of SSA policy priority cases processed annually. From the total number of full medical reviews completed during a fiscal year, we subtracted completed full medical reviews that were prioritized because they were statutorily required (e.g., reviews of SSI children at age 18 and reviews of children under 1 year old who are receiving SSI benefits due in part to low birth weight) and because of SSA policy (e.g., mailers with certain responses and first-time reviews for beneficiaries in the "medical improvement expected" diary category). To avoid potential double-counting, we did not count the sample of approximately 60,000 cases that SSA initiates annually to validate its statistical models among the policy priority cases because the sample consists of cases across all beneficiary groups, including statutorily required cases. As a result, our calculations may underestimate the number of required priority cases and overestimate the number of cases selected because of the statistical models. To illustrate the impact of further incorporating cost savings into the prioritization process, we obtained and analyzed data from SSA on the average savings per full medical review by beneficiary group in each of fiscal years 2012 and 2013.
Using weighted averages of these data, we calculated the average savings for fiscal years 2012 and 2013 separately and combined. We also developed two hypothetical scenarios that pair near-identical beneficiaries with different ages or monthly benefit payments to demonstrate the effect of considering cost savings on an individual basis. We used information from the fiscal year 2013 statistical supplements on the DI and SSI programs to develop reasonable ages and benefit levels for the hypothetical beneficiaries. We calculated the expected savings in foregone benefits after cessation for each beneficiary by multiplying the monthly benefit by the number of months until the beneficiaries would have aged out of the disability programs. To understand the process that SSA uses to review the quality of CDR decisions, we reviewed relevant federal laws, regulations, policies, and procedures; interviewed SSA officials about these policies and procedures; and analyzed SSA's CDR workload and decision accuracy data. This work included reviewing documentation of the Disability Case Adjudication and Review System (DICARS), the software program in which SSA completes quality reviews. We interviewed SSA officials about how the quality reviews are conducted and how the agency uses the results, and compared the agency's policies and procedures to generally accepted statistical practices and federal internal control standards. We also interviewed state Disability Determination Services officials about factors that challenge CDR quality. We assessed the reliability of SSA's CDR workload and decision accuracy data by performing data testing, reviewing related documentation, and interviewing agency officials, and we found the data to be sufficiently reliable for the purposes of this review. To evaluate the extent to which SSA reviews the quality of CDR decisions, we analyzed SSA's CDR workload and decision accuracy data to determine whether its methods for sampling CDRs and estimating CDR accuracy are consistent with generally accepted statistical practices and SSA's reporting goals. SSA reports accuracy rate estimates for each state every month using the most recent 3 or 6 months of quality review data. Its goal is to produce estimates with 95 percent confidence intervals that are within plus or minus 5 percentage points of the estimate. We analyzed SSA's CDR workload and accuracy data, consistent with SSA's sampling and reporting methods, from June 2013 through April 2015. Specifically, we identified the number of continuance and cessation determinations in each state, the District of Columbia, and Puerto Rico; SSA's accuracy rate estimates for these determinations for each 6-month period; and the 95 percent confidence interval margins of error for each estimate. To identify states that had estimates that consistently do not achieve the reporting goals, we compared the workload, accuracy rate estimates, and margins of error to those specified in SSA's sample design and reporting goals. We calculated margins of error for estimates for which SSA did not provide them. We used a statistical formula that produces appropriate margins of error, including when standard formulas do not apply, to determine and examine a margin of error for all estimates.
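This report does not name the formula we used, so the sketch below (Python) should be read as an illustration of the underlying problem rather than as the actual method: when estimated accuracy is near 100 percent, the standard (Wald) interval can extend past 1.0, whereas a bounded alternative such as the Wilson score interval cannot. The sample of 70 cessations with one error is hypothetical.

    import math

    # Margins of error for an estimated accuracy rate p from n reviews.
    # With p near 1, the Wald interval can cross 100 percent; the Wilson
    # score interval stays within [0, 1]. Illustrative values only.

    def wald_interval(p, n, z=1.96):
        half_width = z * math.sqrt(p * (1 - p) / n)
        return p - half_width, p + half_width

    def wilson_interval(p, n, z=1.96):
        center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
        half_width = (z / (1 + z**2 / n)) * math.sqrt(
            p * (1 - p) / n + z**2 / (4 * n**2))
        return center - half_width, center + half_width

    # A small state: 69 of 70 sampled cessations were found accurate.
    p, n = 69 / 70, 70
    print(wald_interval(p, n))    # upper bound exceeds 1.0
    print(wilson_interval(p, n))  # stays within [0, 1]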
We chose such a formula because in some cases CDR accuracy is so high or the sampling fraction is so large that the standard statistical formula used for these purposes would compute margins of error that are not appropriate, such as those resulting in confidence intervals above 100 percent; since CDR accuracy cannot exceed 100 percent, such intervals are invalid. We also analyzed CDR workload data from fiscal years 2001 through 2014 to inform our evaluation of SSA's sampling method. To determine the effect of date errors on accuracy rates, we analyzed data about CDR date errors and cessations. We considered data from fiscal years 2010 through 2014 to determine the frequency with which date errors occur. We calculated fiscal year 2014 cessation accuracy rate estimates for each state by combining the number of cessation decision errors and the number of date errors and dividing the total by the number of cessations SSA reviewed. SSA's date error data were not broken down by decision type (i.e., continuance or cessation), but we assigned these errors to cessations because of input from SSA and our corroborating analysis. According to SSA officials, the most common date error on a CDR is a cessation date error, and other date errors, such as incorrectly inputting an onset date, can occur in a cessation or continuance but are rare. Our analysis corroborated this information. For example, in fiscal year 2014, of 127 date errors identified in CDRs nationally, 125 were cessation date errors. In addition, we calculated margins of error for each estimate to assess the statistical reliability of each estimate. We used the statistical formula that produces appropriate margins of error, consistent with our approach to calculating margins of error in our analysis of SSA's sampling method. To evaluate SSA's approach to calculating cost savings from CDRs, we compared SSA's estimation process to actuarial standards of practice and federal guidelines for benefit-cost analyses of federal programs. Specifically, we interviewed SSA actuaries about the models and methods they used to perform the cost-savings calculation for fiscal year 2012. We also reviewed portions of the programming code related to these models to corroborate the information from the actuaries. In addition, we examined the assumptions that SSA uses to calculate the present value of future benefits saved from ceasing a person's benefits as the result of a CDR by examining where and how SSA incorporates assumptions into its calculation process. Finally, we reviewed the fiscal year 2012 CDR cessation data and information SSA provided to the Centers for Medicare & Medicaid Services (CMS) that informed CMS's estimates of Medicare and Medicaid savings resulting from CDR cessations, but we did not review CMS's models. We conducted this performance audit from December 2014 to February 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
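The foregone-benefit computation in the hypothetical scenarios described earlier in this appendix reduces to a single multiplication: the monthly benefit times the months remaining until the beneficiary would have aged out. In the sketch below (Python), the ages, monthly benefit amounts, and the age-out point are invented for illustration and were not drawn from the statistical supplements.

    # Expected savings in foregone benefits after a cessation: monthly
    # benefit times the months remaining until the beneficiary would
    # have aged out of the disability programs. All values hypothetical.

    FULL_RETIREMENT_AGE = 66  # assumed age-out point for illustration

    def expected_savings(monthly_benefit, age):
        months_remaining = (FULL_RETIREMENT_AGE - age) * 12
        return monthly_benefit * months_remaining

    # Near-identical beneficiaries who differ only in age...
    print(expected_savings(monthly_benefit=1200, age=40))  # 374,400
    print(expected_savings(monthly_benefit=1200, age=60))  #  86,400

    # ...or only in monthly benefit amount.
    print(expected_savings(monthly_benefit=800, age=50))   # 153,600
    print(expected_savings(monthly_benefit=1600, age=50))  # 307,200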
In addition to the contact named above, Erin Godtland (Assistant Director), Joel Green (Analyst-in-Charge), Susan Aschoff, James Bennett, Grace Cho, Alexander Galuten, Emei Li, Isabella Johnson, Rhiannon Patterson, Almeta Spencer, Daren Sweeney, Jeff Tessin, Kristen Timko, Frank Todisco, Sonya Vartivarian, and Shana Wallace made key contributions to this report. | To help ensure that only eligible individuals receive disability benefits, SSA conducts periodic CDRs to assess beneficiaries' medical condition. CDRs have historically saved the government money. However, in recent years, SSA has had difficulty conducting timely CDRs resulting in a backlog of over 900,000 CDRs in fiscal year 2014. With this backdrop, GAO was asked to study SSA's ability to conduct and manage timely, high-quality CDRs. This report evaluates, among other things, how SSA selects which CDRs to conduct and the extent to which SSA reviews the quality of CDR decisions. GAO analyzed CDR data for fiscal years 2003 through 2013 (the most recent year for which complete data were available); assessed SSA's models used to prioritize CDRs; reviewed relevant federal laws, regulations, and SSA documentation about CDR prioritization and accuracy review procedures; and interviewed SSA and state Disability Determination Services officials. The Social Security Administration (SSA) selects cases for continuing disability reviews (CDR) using several inputs, but it does not do so in a manner that maximizes potential savings. SSA first prioritizes CDRs required by law or agency policy such as those for children under 1 year old who are receiving benefits due in part to low birth weight. Then SSA uses statistical models to identify the remaining CDRs to be conducted each year. The models also determine which cases will receive an in-depth review of medical records by the Disability Determination Services—the state agencies that conduct CDRs—versus a lower-cost questionnaire sent directly to the beneficiary. As shown in the figure below, a growing number of cases have been set aside for future review (backlogged) over the last 10 years. Although SSA somewhat considers potential cost savings when selecting cases for in-depth reviews, its approach does not maximize potential savings for the government. For example, estimated average savings from conducting CDRs are higher for some groups of Disability Insurance (DI) beneficiaries than others, but SSA's selection process does not differentiate among these groups. As a result, it may be missing opportunities to efficiently and effectively use federal resources. SSA reviews a sample of CDRs for quality, but its analysis and reporting of errors is not comprehensive. Specifically, SSA randomly selects CDR decisions to check for a variety of potential errors. For example, SSA regularly monitors and reports on the frequency of errors that affect whether benefits are continued or ceased. However, contrary to federal internal control standards, SSA does not systematically analyze errors to detect and address root causes. Consequently, SSA lacks information that could help improve the quality of the reviews conducted by the Disability Determination Services. Further, in determining CDR accuracy rates, SSA does not count date errors, including incorrect cessation dates, which can affect disability benefit payments. As a result, decision makers do not have a complete picture of the CDR errors that affect disability payments. 
GAO recommends SSA, among other things, further consider cost savings as part of its prioritization of CDRs, analyze the root causes of CDRs with errors, and track date errors. SSA agreed with most of GAO's recommendations, but disagreed that there is a need to track date errors and to adjust its approach to sampling CDRs for quality review. GAO maintains actions are warranted and feasible as discussed in the report. |
HCFA’s vision, which we support, is for a single, unified system to replace the nine current systems now used by Medicare, the nation’s largest health insurer, serving about 37 million Americans. The goals of MTS are to better protect program funds from waste, fraud, and abuse; allow better oversight of Medicare contractors’ operations; improve service to beneficiaries and providers; and reduce administrative expenses. At present, HCFA expects MTS to be fully operational in September 1999, and to process over 1 billion claims and pay $288 billion in benefits per year by 2000. These are ambitious goals, and we realize that developing such a system is complex and challenging. Currently, when legislative or administrative initiatives result in revised payment or coverage policies, each of the nine automated systems maintained by Medicare contractors to process claims must be modified. An integrated system would eliminate the need for such cumbersome and costly multiple processes. In January 1994, HCFA awarded a contract to GTE Government Systems Corporation to design, develop, and implement the new automated system for processing claims. Two related contracts were awarded: to Intermetrics, Inc., in April 1994 for what is known as independent verification and validation, or IV&V—a separate technical check on GTE’s work; and to SETA Corporation in September 1995 for systems testing. Over the last 12 years, the federal government has spent more than $200 billion on information technology, and we have evaluated hundreds of these projects. On the basis of this work, we have determined that two basic, recurring problems constrain the ability of organizations to successfully develop large systems: (1) failure to adequately select, plan, prioritize, and control information system projects; and (2) failure to take advantage of business process improvements that can significantly reduce costs, improve productivity, and provide better services to customers. These problems have often led to meager results in federal agency efforts to design, develop, and acquire complex information systems. For example, after investing over 12 years of effort, the Federal Aviation Administration (FAA) chose to cut its losses in its problem-plagued Advanced Automation System by cancelling or extensively restructuring elements of this modernization of the nation’s air traffic control system. The reasons for FAA’s problems included the failure to (1) accurately estimate the project’s technical complexity and resource requirements, (2) finalize system requirements, and (3) adequately oversee contractor activities. Similarly, our work on IRS’ Tax Systems Modernization, designed to automate selected tax-processing functions, identified several weaknesses. For example, IRS lacked (1) a disciplined process for managing definition of requirements, and (2) a management process for controlling software development. These problems caused significant rework and delays. Last year, to help federal agencies improve their chances of success, we completed a study of how successful private and public organizations reached their goals of acquiring information systems that significantly improved their ability to carry out their missions. Our report describes an integrated set of fundamental management practices that were instrumental in producing success. The active involvement of senior managers, focusing on minimizing project risks and maximizing return on investment, was essential. 
To accomplish these objectives, senior managers in successful organizations consistently followed these practices—which have become known as best practices—to ensure that they received information needed to make timely and appropriate decisions. Among others, one key practice is for executives to manage information systems as investments rather than expenses. This requires using disciplined investment control processes that provide quantitative and qualitative information that senior managers can use to continuously monitor costs, benefits, schedules, and risks; and to ensure that structured systems-development methodologies are used throughout the system’s life cycle. A consensus has emerged within the administration and the Congress that better investment decisions on information technology projects are needed to help the government improve service. Important changes recently made to several laws and executive policy guidance are instituting best-practice approaches of leading organizations into the federal government. This month, the Office of Management and Budget will issue guidance that describes an analytical framework for making information technology investment decisions. Developed in cooperation with GAO, this guidance calls for agencies to implement management practices to select, control, and evaluate information technology investments throughout their life cycles. HCFA has not yet instituted a set of well-defined investment control processes to measure the quality of development efforts and monitor progress and problems. This situation has contributed to a series of problems related to requirements-definition, schedule, and costs; these problems raise concerns that MTS may suffer the same fate as many other complex systems—extensive delays, large cost increases, and the inability to achieve potential benefits. First, HCFA has not sufficiently followed sound practices in defining MTS project requirements. As a result, HCFA has twice redirected the approach and, 2 years into the contract, requirements definition at the appropriate level of specificity has not been completed. Requirements, which are defined during the analysis phase of a project, document the detailed functions and processes the system is expected to perform and the performance level to be achieved. They are intended to correct deficiencies in the current system and take advantage of opportunities to improve program economy, efficiency, and service. Because requirements provide the foundation for designing, developing, testing, and implementing the system, it is critical that they be precisely defined to avoid ambiguity and overlap, and that they completely and logically describe all features of the planned system. Using an appropriate methodology to define requirements significantly reduces risk that requirements defects will cause technical problems. Originally, HCFA’s plans called for GTE to document the current systems’ requirements, while HCFA staff defined new or future requirements for MTS. However, in September 1994, HCFA concluded that GTE’s analysis of the current systems did not contain enough detail to fully describe the current systems’ requirements. HCFA then directed GTE to provide additional detail. In September 1995, HCFA concluded that the products GTE was developing were too detailed, and again directed GTE to refocus its efforts—this time, however, on assisting HCFA staff in defining future MTS requirements. 
On the basis of our experience in evaluating other systems, such multiple redirections in the analysis phase of a major project indicate that HCFA’s process to control requirements lacks discipline. HCFA currently lacks an effective process for managing requirements, and has not provided adequate guidance to staff responsible for defining requirements. These deficiencies have also been cited by the IV&V contractor as an area of significant risk. Because of problems in completing the definition of requirements, and HCFA’s plans to implement a fully functional MTS in September 1999, HCFA is proceeding into the next phase of system development, the design phase, before requirements have been completed. HCFA plans to select an MTS design alternative by the end of this calendar year, but requirements are not scheduled to be completed until September 1996. Because design alternatives are used to determine how the system will be structured, if the alternatives do not reflect key requirements, the system’s future capabilities may be seriously constrained. The IV&V contractor pointed out that HCFA’s plan to select the system design in parallel with defining system requirements also increases risks that the system will not meet important goals. HCFA officials told us they believe that MTS requirements are sufficiently defined to prepare high-level system-design alternatives, but the IV&V contractor disagrees. To support critical design decisions, requirements need to be sufficiently detailed to include such functions and processes as performance levels and response times. When we reviewed HCFA’s preliminary set of requirements, we found that many of them did not contain enough detail. Second, HCFA’s development schedule for MTS contains significant overlap—or concurrency—among the various system-development phases: analysis, design, programming, testing, validation, and implementation. As shown in figure 1, the April 1994 MTS schedule—an early estimate by HCFA—is used only to illustrate the sequential nature of these phases. The November 1995 schedule shows extensive concurrency; for example, the analysis and design phases are occurring simultaneously during the period from July 1994 to September 1996. In our January 1994 report on MTS, we stated that if a contractor advances too far into a succeeding system-development phase before sufficient progress has been made in the previous phase, the risk that technical problems will occur is significantly increased. Senior HCFA officials recently told us that the MTS schedule contains concurrency because it is important to deploy the system before the end of the century; otherwise, significant costs would be incurred to modify existing systems. What is needed is quantifiable information on this cost, compared with an assessment of the risks of concurrency. HCFA has not, however, implemented a formal process to assess and manage system-development risks. The IV&V contractor has also cited this lack of a formal risk-assessment process as a problem. In addition, while HCFA’s MTS schedule has been revised several times because of the redirection of requirements definition in the analysis phase, the initial and final system-implementation dates have remained largely unchanged. As a result, the time scheduled to complete the rest of the system-development phases to meet those dates is now significantly compressed. 
For example, because HCFA did not adjust the initial operating capability date, it is now scheduled, at one point in a 1-year period, to work concurrently on the remaining development phases—design, programming, testing, and validation. On the basis of our previous work on large systems-development efforts, we believe that failure to allow for sufficient time to complete system-development phases increases risk and will likely result in reduced systems capability. Moreover, HCFA has not developed an integrated schedule that reflects both HCFA and contractor activities, work products, and time frames needed to perform these activities. Such a schedule provides an important tool for closely monitoring progress and problems in completing various activities. Without detailed insight about the actual status of all development activities, management will not have the information it needs to make timely decisions. HCFA's IV&V contractor also cited concerns about the lack of an integrated schedule baseline for MTS. HCFA officials agreed that such a schedule is important. Finally, HCFA has not sufficiently developed disciplined processes to adequately monitor progress in achieving cost and benefit objectives, which are important to managing projects as investments. The estimated MTS project costs, pegged by HCFA at $151 million in 1992, have not been updated since then, and HCFA is not tracking internal costs associated with the project, such as personnel, training, and travel. According to HCFA officials, they plan to update their cost estimate next year, to reflect their current understanding of MTS' capabilities. Similarly, except for estimated administrative savings of $200 million a year during the first 6 years of operation (1997-2002), HCFA has not yet quantified other important expected benefits of MTS, such as targets for reducing fraud, waste, and abuse, and improving services to beneficiaries and providers. Without current information on costs and potential benefits, HCFA executives will not be in the best position to realistically monitor performance or identify and maximize the system's true return on investment. We have seen an inescapable pattern in agencies' development of information systems: even on a small scale, those that are not developed according to sound practices encounter major, expensive problems later on. The larger the project, the bigger the risk. It takes serious, sustained effort and disciplined management processes to effectively manage system development. Effective oversight greatly reduces exposure to risk; without it, risk is dramatically and needlessly increased. The risks we see in the development of MTS can be substantially reduced if HCFA management implements some of the best practices that have been proven effective in other organizations: managing systems as investments, changing information management practices, creating line manager ownership, better managing resources, and measuring performance. HCFA still has time to correct these deficiencies. We are encouraged by HCFA's expression of interest in learning about how to implement the best practices in systems development used by successful organizations, and look forward to working with them. This concludes our statement, Mr. Chairmen. We will be happy to respond to any questions you or other members of the subcommittees may have at this time.
| GAO discussed the Health Care Financing Administration's (HCFA) approach to managing the Medicare Transaction System (MTS). GAO noted that: (1) MTS is designed to unify the nine Medicare claims-processing systems, improve Medicare contractor oversight, improve services to beneficiaries and providers, reduce administrative expenses, and better protect Medicare program funds from waste, fraud, and abuse; (2) although HCFA plans to mitigate large scale problems by implementing MTS in increments and design MTS to allow for future modifications, the lack of an effective management approach exposes the system to undue risks; (3) HCFA has not adequately defined MTS project requirements, has not identified significant system-development overlap, and lacks reliable cost and benefit information; and (4) HCFA could substantially reduce MTS development risks by implementing some of the best practices that have been proven effective in other organizations, such as changing HCFA information management practices, creating line manager ownership, better managing MTS resources, and measuring MTS project performance.